1
Baden T. The vertebrate retina: a window into the evolution of computation in the brain. Curr Opin Behav Sci 2024; 57:101391. PMID: 38899158; PMCID: PMC11183302; DOI: 10.1016/j.cobeha.2024.101391.
Abstract
Animal brains are probably the most complex computational machines on our planet, and like everything in biology, they are the product of evolution. Advances in developmental and palaeobiology have been expanding our general understanding of how nervous systems can change at a molecular and structural level. However, how these changes translate into altered function - that is, into 'computation' - remains comparatively sparsely explored. What, concretely, does it mean for neuronal computation when neurons change their morphology and connectivity, when new neurons appear or old ones disappear, or when transmitter systems are slowly modified over many generations? And how does evolution use these many possible knobs and dials to constantly tune computation to give rise to the amazing diversity in animal behaviours we see today? Addressing these major gaps in understanding benefits from choosing a suitable model system. Here, I present the vertebrate retina as one perhaps unusually promising candidate. The retina is ancient and displays highly conserved core organisational principles across the entire vertebrate lineage, alongside a myriad of adjustments across extant species that were shaped by the history of their visual ecology. Moreover, the computational logic of the retina is readily interrogated experimentally, and our existing understanding of retinal circuits in a handful of species can serve as an anchor when exploring the visual circuit adaptations across the entire vertebrate tree of life, from fish deep in the aphotic zone of the oceans to eagles soaring high up in the sky.
2
Chen Q, Ingram NT, Baudin J, Angueyra JM, Sinha R, Rieke F. Predictably manipulating photoreceptor light responses to reveal their role in downstream visual responses. bioRxiv [Preprint] 2024:2023.10.20.563304. PMID: 37961603; PMCID: PMC10634684; DOI: 10.1101/2023.10.20.563304.
Abstract
Computation in neural circuits relies on judicious use of nonlinear circuit components. In many cases, multiple nonlinear components work collectively to control circuit outputs. Separating the contributions of these different components is difficult, and this hampers our understanding of the mechanistic basis of many important computations. Here, we introduce a tool that permits the design of light stimuli that predictably alter rod and cone phototransduction currents - including stimuli that compensate for nonlinear properties such as light adaptation. This tool, based on well-established models for the rod and cone phototransduction cascade, permits the separation of nonlinearities in phototransduction from those in downstream circuits. This will allow, for example, direct tests of how adaptation in rod and cone phototransduction affects downstream visual signals and perception.
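The central idea, inverting a forward model of phototransduction to design stimuli that produce a desired current, can be illustrated with a deliberately simplified sketch. The toy model below (a first-order low-pass filter followed by a saturating nonlinearity, with invented constants `a` and `half`) stands in for the well-established biophysical cascade models referenced in the abstract; it demonstrates only the invert-then-verify workflow, not the authors' actual tool.

```python
import numpy as np

a, half = 0.9, 1.0  # toy filter coefficient and half-saturation constant

def forward(stim):
    """Toy phototransduction: low-pass filter the light, then saturate."""
    y = np.zeros_like(stim)
    for t in range(1, len(stim)):
        y[t] = a * y[t - 1] + (1 - a) * stim[t]
    return y / (y + half)  # saturating "photocurrent" in [0, 1)

def design_stimulus(target):
    """Invert the model: undo the saturation, then undo the filter."""
    g = half * target / (1 - target)      # inverse of r = g / (g + half)
    x = np.zeros_like(g)
    for t in range(1, len(g)):
        x[t] = (g[t] - a * g[t - 1]) / (1 - a)
    return x

# Target photocurrent trace (starts at rest, stays within the model's range)
target = 0.25 * (1 - np.cos(np.linspace(0, 4 * np.pi, 400)))
stim = design_stimulus(target)
```

Running the forward model on the designed stimulus reproduces the target exactly in this toy setting; in a realistic cascade model, negative or out-of-range stimulus values would flag target currents that cannot be achieved with light alone.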
Affiliation(s)
- Qiang Chen
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
- Norianne T. Ingram
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
- Jacob Baudin
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
3
Patterson SS, Girresch RJ, Mazzaferri MA, Bordt AS, Piñon-Teal WL, Jesse BD, Perera DCW, Schlepphorst MA, Kuchenbecker JA, Chuang AZ, Neitz J, Marshak DW, Ogilvie JM. Synaptic Origins of the Complex Receptive Field Structure in Primate Smooth Monostratified Retinal Ganglion Cells. eNeuro 2024; 11:ENEURO.0280-23.2023. PMID: 38290840; PMCID: PMC11078106; DOI: 10.1523/eneuro.0280-23.2023.
Abstract
Considerable progress has been made in studying the receptive fields of the most common primate retinal ganglion cell (RGC) types, such as parasol RGCs. Much less is known about the rarer primate RGC types and the circuitry that gives rise to noncanonical receptive field structures. The goal of this study was to analyze synaptic inputs to smooth monostratified RGCs to determine the origins of their complex spatial receptive fields, which contain isolated regions of high sensitivity called "hotspots." Interestingly, smooth monostratified RGCs co-stratify with the well-studied parasol RGCs and are thus constrained to receiving input from bipolar and amacrine cells with processes sharing the same layer, raising the question of how their functional differences originate. Through 3D reconstructions of circuitry and synapses onto ON smooth monostratified and ON parasol RGCs from central macaque retina, we identified four distinct sampling strategies employed by smooth and parasol RGCs to extract diverse response properties from co-stratifying bipolar and amacrine cells. The two RGC types differed in the proportion of amacrine cell input, relative contributions of co-stratifying bipolar cell types, amount of synaptic input per bipolar cell, and spatial distribution of bipolar cell synapses. Our results indicate that the smooth RGC's complex receptive field structure arises through spatial asymmetries in excitatory bipolar cell input which formed several discrete clusters comparable with physiologically measured hotspots. Taken together, our results demonstrate how the striking differences between ON parasol and ON smooth monostratified RGCs arise from distinct strategies for sampling a common set of synaptic inputs.
Affiliation(s)
- Sara S Patterson
- Center for Visual Science, University of Rochester, Rochester, New York 14617
- Rebecca J Girresch
- Department of Biology, Saint Louis University, Saint Louis, Missouri 63103
- Marcus A Mazzaferri
- Department of Ophthalmology, University of Washington, Seattle, Washington 98104
- Andrea S Bordt
- Department of Ophthalmology, University of Washington, Seattle, Washington 98104
- Departments of Ophthalmology & Visual Science, McGovern Medical School, Houston, Texas 77030
- Wendy L Piñon-Teal
- Department of Biology, Saint Louis University, Saint Louis, Missouri 63103
- Brett D Jesse
- Department of Biology, Saint Louis University, Saint Louis, Missouri 63103
- James A Kuchenbecker
- Department of Ophthalmology, University of Washington, Seattle, Washington 98104
- Alice Z Chuang
- Departments of Ophthalmology & Visual Science, McGovern Medical School, Houston, Texas 77030
- Jay Neitz
- Department of Ophthalmology, University of Washington, Seattle, Washington 98104
- David W Marshak
- Neurobiology and Anatomy, McGovern Medical School, Houston, Texas 77030
4
Huang PY, Jiang BY, Chen HJ, Xu JY, Wang K, Zhu CY, Hu XY, Li D, Zhen L, Zhou FC, Qin JK, Xu CY. Neuro-inspired optical sensor array for high-accuracy static image recognition and dynamic trace extraction. Nat Commun 2023; 14:6736. PMID: 37872169; PMCID: PMC10593955; DOI: 10.1038/s41467-023-42488-9.
Abstract
Neuro-inspired vision systems hold great promise to address the growing demands of mass data processing for edge computing, a distributed framework that brings computation and data storage closer to the sources of data. In addition to static image sensing and processing, the hardware implementation of a neuro-inspired vision system must also be able to detect and recognize moving targets. Here, we demonstrated a neuro-inspired optical sensor based on two-dimensional NbS2/MoS2 hybrid films, which featured remarkable photo-induced conductance plasticity and low electrical energy consumption. A neuro-inspired optical sensor array with 10 × 10 NbS2/MoS2 phototransistors enabled highly integrated sensing, memory, and contrast-enhancement functions for static images, supporting convolutional neural network (CNN) classification with high image recognition accuracy. More importantly, in-sensor trajectory registration of moving light spots was experimentally implemented such that the post-processing could yield a high restoration accuracy. Our neuro-inspired optical sensor array could provide a fascinating platform for the implementation of high-performance artificial vision systems.
Affiliation(s)
- Pei-Yu Huang
- Sauvage Laboratory for Smart Materials, School of Materials Science and Engineering, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
- Bi-Yi Jiang
- School of Microelectronics, Southern University of Science and Technology, Shenzhen, 518055, China
- Department of Applied Physics, The Hong Kong Polytechnic University, Hong Kong, 999077, China
- Hong-Ji Chen
- Sauvage Laboratory for Smart Materials, School of Materials Science and Engineering, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
- Jia-Yi Xu
- School of Microelectronics, Southern University of Science and Technology, Shenzhen, 518055, China
- Kang Wang
- Key Laboratory of MEMS of the Ministry of Education, Southeast University, Nanjing, 210096, China
- Cheng-Yi Zhu
- Sauvage Laboratory for Smart Materials, School of Materials Science and Engineering, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
- Xin-Yan Hu
- School of Microelectronics, Southern University of Science and Technology, Shenzhen, 518055, China
- Dong Li
- Sauvage Laboratory for Smart Materials, School of Materials Science and Engineering, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
- Liang Zhen
- MOE Key Laboratory of Micro-Systems and Micro-Structures Manufacturing, Harbin Institute of Technology, Harbin, 150080, China
- Fei-Chi Zhou
- School of Microelectronics, Southern University of Science and Technology, Shenzhen, 518055, China
- Jing-Kai Qin
- Sauvage Laboratory for Smart Materials, School of Materials Science and Engineering, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
- Cheng-Yan Xu
- Sauvage Laboratory for Smart Materials, School of Materials Science and Engineering, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
- MOE Key Laboratory of Micro-Systems and Micro-Structures Manufacturing, Harbin Institute of Technology, Harbin, 150080, China
5
Manookin MB, Rieke F. Two Sides of the Same Coin: Efficient and Predictive Neural Coding. Annu Rev Vis Sci 2023; 9:293-311. PMID: 37220331; DOI: 10.1146/annurev-vision-112122-020941.
Abstract
Some visual properties are consistent across a wide range of environments, while other properties are more labile. The efficient coding hypothesis states that many of these regularities in the environment can be discarded from neural representations, thus allocating more of the brain's dynamic range to properties that are likely to vary. This paradigm is less clear about how the visual system prioritizes different pieces of information that vary across visual environments. One solution is to prioritize information that can be used to predict future events, particularly those that guide behavior. The relationship between the efficient coding and future prediction paradigms is an area of active investigation. In this review, we argue that these paradigms are complementary and often act on distinct components of the visual input. We also discuss how normative approaches to efficient coding and future prediction can be integrated.
Affiliation(s)
- Michael B Manookin
- Department of Ophthalmology, University of Washington, Seattle, Washington, USA
- Vision Science Center, University of Washington, Seattle, Washington, USA
- Karalis Johnson Retina Center, University of Washington, Seattle, Washington, USA
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, Washington, USA
- Vision Science Center, University of Washington, Seattle, Washington, USA
6
Gong Z, Zhou M, Dai Y, Wen Y, Liu Y, Zhen Z. A large-scale fMRI dataset for the visual processing of naturalistic scenes. Sci Data 2023; 10:559. PMID: 37612327; PMCID: PMC10447576; DOI: 10.1038/s41597-023-02471-x.
Abstract
One ultimate goal of visual neuroscience is to understand how the brain processes visual stimuli encountered in the natural environment. Achieving this goal requires records of brain responses under massive amounts of naturalistic stimuli. Although the scientific community has devoted considerable effort to collecting large-scale functional magnetic resonance imaging (fMRI) data under naturalistic stimuli, more naturalistic fMRI datasets are still urgently needed. We present here the Natural Object Dataset (NOD), a large-scale fMRI dataset containing responses to 57,120 naturalistic images from 30 participants. NOD strives for a balance between sampling variation between individuals and sampling variation between stimuli. This enables NOD to be used not only for determining whether an observation is generalizable across many individuals, but also for testing whether a response pattern generalizes to a variety of naturalistic stimuli. We anticipate that NOD, together with existing naturalistic neuroimaging datasets, will serve as a new impetus for our understanding of the visual processing of naturalistic stimuli.
Affiliation(s)
- Zhengxin Gong
- Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Ming Zhou
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
- Yuxuan Dai
- Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Yushan Wen
- Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Youyi Liu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
- Zonglei Zhen
- Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
7
Baek S, Park Y, Paik SB. Species-specific wiring of cortical circuits for small-world networks in the primary visual cortex. PLoS Comput Biol 2023; 19:e1011343. PMID: 37540638; PMCID: PMC10403141; DOI: 10.1371/journal.pcbi.1011343.
Abstract
Long-range horizontal connections (LRCs) are conspicuous anatomical structures in the primary visual cortex (V1) of mammals, yet their detailed functions in visual processing are not fully understood. Here, we show that LRCs are key components for organizing a "small-world network" optimized for the size of the visual cortex, enabling cost-efficient integration of visual information. Using computational simulations of a biologically inspired model neural network, we found that sparse LRCs added to networks with dense local connections compose a small-world network and significantly enhance image classification performance. We confirmed that network performance was strongly correlated with the small-world coefficient of the model network under various conditions. Our theoretical model demonstrates that the number of LRCs needed to build a small-world network depends on the size of the cortex, and that LRCs are beneficial only when the size of the network exceeds a certain threshold. Our model simulation of cortices of various sizes validates this prediction and explains the species-specific existence of LRCs in animal data. Our results provide insight into a biological strategy of the brain to balance functional performance and resource cost.
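The small-world logic described above can be sketched in a few lines: dense local wiring gives high clustering, and a handful of random long-range shortcuts (the LRC analogue) collapses path lengths. This is a generic Watts-Strogatz-style illustration, not the authors' model; the network size and shortcut count below are arbitrary.

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Dense local wiring: each node links to its k nearest neighbours per side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    return adj

def add_shortcuts(adj, n_short, rng):
    """Sparse random long-range connections (the LRC analogue)."""
    nodes = list(adj)
    added = 0
    while added < n_short:
        u, v = rng.sample(nodes, 2)
        if v not in adj[u]:
            adj[u].add(v)
            adj[v].add(u)
            added += 1
    return adj

def avg_path_length(adj):
    """Mean shortest-path length via BFS from every node (graph is connected)."""
    total = pairs = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def avg_clustering(adj):
    """Mean fraction of each node's neighbour pairs that are themselves linked."""
    cs = []
    for u, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            cs.append(0.0)
            continue
        links = sum(1 for v in nbrs for w in nbrs if v < w and w in adj[v])
        cs.append(2 * links / (k * (k - 1)))
    return sum(cs) / len(cs)

rng = random.Random(0)
local_only = ring_lattice(200, 4)                         # purely local wiring
with_lrcs = add_shortcuts(ring_lattice(200, 4), 40, rng)  # plus ~5% long-range edges
```

Adding a few dozen shortcuts leaves clustering nearly intact while sharply shortening paths, the small-world signature; whether those shortcuts repay their wiring cost depends on network size, which parallels the paper's threshold argument.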
Affiliation(s)
- Seungdae Baek
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Youngjin Park
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Se-Bum Paik
- Department of Brain and Cognitive Sciences, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
8
Malladi SPK, Mukherjee J, Larabi MC, Chaudhury S. Towards explainable deep visual saliency models. Comput Vis Image Underst 2023:103782. DOI: 10.1016/j.cviu.2023.103782.
9
Wang C, Fang C, Zou Y, Yang J, Sawan M. SpikeSEE: An energy-efficient dynamic scenes processing framework for retinal prostheses. Neural Netw 2023; 164:357-368. PMID: 37167749; DOI: 10.1016/j.neunet.2023.05.002.
Abstract
Intelligent and low-power retinal prostheses are in high demand in this era, in which wearable and implantable devices are used for numerous healthcare applications. In this paper, we propose an energy-efficient dynamic scene processing framework (SpikeSEE) that combines a spike representation encoding technique and a bio-inspired spiking recurrent neural network (SRNN) model to achieve intelligent processing and extremely low-power computation for retinal prostheses. The spike representation encoding technique interprets dynamic scenes with sparse spike trains, decreasing the data volume. The SRNN model, inspired by the human retina's special structure and spike processing method, is adopted to predict the response of ganglion cells to dynamic scenes. Experimental results show that the Pearson correlation coefficient of the proposed SRNN model achieves 0.93, which outperforms the state-of-the-art processing framework for retinal prostheses. Thanks to the spike representation and SRNN processing, the model can extract visual features in a multiplication-free fashion. The framework achieves an 8-fold power reduction compared with a convolutional recurrent neural network (CRNN) based framework. Our proposed SpikeSEE predicts the response of ganglion cells more accurately and with lower energy consumption, which alleviates the precision and power issues of retinal prostheses and provides a potential solution for wearable or implantable prostheses.
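The first stage, turning a dense dynamic signal into sparse spike trains, can be illustrated with a simple delta-modulation encoder. This is a generic event-encoding sketch, not the specific spike representation used in SpikeSEE; the threshold and test signal are arbitrary.

```python
import numpy as np

def delta_encode(signal, threshold):
    """Emit sparse +1/-1 events whenever the signal moves more than
    `threshold` away from the last encoded level (event-camera style)."""
    events, level = [], float(signal[0])
    for t, s in enumerate(signal):
        while s - level > threshold:
            level += threshold
            events.append((t, +1))
        while level - s > threshold:
            level -= threshold
            events.append((t, -1))
    return events

# One pixel's brightness over 200 frames, encoded into events
pixel = 0.5 + 0.4 * np.sin(np.linspace(0, 4 * np.pi, 200))
events = delta_encode(pixel, threshold=0.1)
```

The event stream is far sparser than the dense frame sequence, and replaying its signed threshold steps reconstructs the signal to within one step, which is the sense in which spike representations cut data volume.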
Affiliation(s)
- Chuanqing Wang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou, 310024, Zhejiang, China
- Chaoming Fang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou, 310024, Zhejiang, China
- Yong Zou
- Beijing Institute of Radiation Medicine, Beijing, 100850, China
- Jie Yang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou, 310024, Zhejiang, China
- Mohamad Sawan
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou, 310024, Zhejiang, China
10
Qiu Y, Klindt DA, Szatko KP, Gonschorek D, Hoefling L, Schubert T, Busse L, Bethge M, Euler T. Efficient coding of natural scenes improves neural system identification. PLoS Comput Biol 2023; 19:e1011037. PMID: 37093861; PMCID: PMC10159360; DOI: 10.1371/journal.pcbi.1011037.
Abstract
Neural system identification aims at learning the response function of neurons to arbitrary stimuli using experimentally recorded data, but typically does not leverage normative principles such as efficient coding of natural environments. Visual systems, however, have evolved to efficiently process input from the natural environment. Here, we present a normative network regularization for system identification models by incorporating, as a regularizer, the efficient coding hypothesis, which states that neural response properties of sensory representations are strongly shaped by the need to preserve most of the stimulus information with limited resources. Using this approach, we explored whether a system identification model can be improved by sharing its convolutional filters with those of an autoencoder that aims to efficiently encode natural stimuli. To this end, we built a hybrid model to predict the responses of retinal neurons to noise stimuli. This approach not only yielded higher performance than the "stand-alone" system identification model, it also produced more biologically plausible filters, meaning that they more closely resembled neural representations in early visual systems. We found that these results held for retinal responses to different artificial stimuli and across model architectures. Moreover, our normatively regularized model performed particularly well in predicting responses of direction-of-motion sensitive retinal neurons. The benefit of natural scene statistics became marginal, however, for predicting the responses to natural movies. In summary, our results indicate that efficiently encoding environmental inputs can improve system identification models, at least for noise stimuli, and point to the benefit of probing the visual system with naturalistic stimuli.
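The regularization scheme, one filter bank serving both a response-prediction readout and an autoencoder reconstruction, can be sketched with a linear toy model trained by gradient descent on the joint loss. The data, dimensions, and loss weighting below are invented; the actual model in the paper is convolutional and fit to recorded retinal responses, not this linear simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 500 "stimuli" (16-pixel patches) and noisy responses of 4 neurons
X = rng.normal(size=(500, 16))
true_w = rng.normal(size=(16, 4))
Y = X @ true_w + 0.1 * rng.normal(size=(500, 4))

W = 0.1 * rng.normal(size=(16, 8))  # filter bank SHARED by both objectives
R = 0.1 * rng.normal(size=(8, 4))   # readout from shared features to responses
lam, lr = 0.5, 0.1                  # regularizer weight, learning rate

def losses(W, R):
    F = X @ W
    pred = np.mean((F @ R - Y) ** 2)     # system-identification fit
    recon = np.mean((F @ W.T - X) ** 2)  # efficient-coding (autoencoder) term
    return pred, recon

start = losses(W, R)
for _ in range(500):
    F = X @ W
    Ey = F @ R - Y                       # response prediction error
    Ex = F @ W.T - X                     # stimulus reconstruction error
    gR = 2 * F.T @ Ey / Ey.size
    gW = (2 * X.T @ Ey @ R.T / Ey.size
          + lam * 2 * (X.T @ Ex @ W + Ex.T @ F) / Ex.size)
    W -= lr * gW
    R -= lr * gR
end = losses(W, R)
```

Because the filters must serve both objectives, the reconstruction term steers them toward stimulus statistics, which is the mechanism the paper exploits for natural scenes.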
Affiliation(s)
- Yongrong Qiu
- Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Graduate Training Centre of Neuroscience (GTC), International Max Planck Research School, U Tübingen, Tübingen, Germany
- David A Klindt
- Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Department of Mathematical Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Klaudia P Szatko
- Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Graduate Training Centre of Neuroscience (GTC), International Max Planck Research School, U Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
- Dominic Gonschorek
- Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Research Training Group 2381, U Tübingen, Tübingen, Germany
- Larissa Hoefling
- Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
- Timm Schubert
- Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Laura Busse
- Division of Neurobiology, Faculty of Biology, LMU Munich, Planegg-Martinsried, Germany
- Bernstein Center for Computational Neuroscience, Planegg-Martinsried, Germany
- Matthias Bethge
- Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
- Institute for Theoretical Physics, U Tübingen, Tübingen, Germany
- Thomas Euler
- Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
11
Wang C, Fang C, Zou Y, Yang J, Sawan M. Artificial intelligence techniques for retinal prostheses: a comprehensive review and future direction. J Neural Eng 2023; 20. PMID: 36634357; DOI: 10.1088/1741-2552/acb295.
Abstract
Objective. Retinal prostheses are promising devices for restoring vision in patients with severe age-related macular degeneration or retinitis pigmentosa. The visual processing mechanism embodied in retinal prostheses plays an important role in the restoration effect. Its performance depends on our understanding of the retina's working mechanism and on the evolution of computer vision models. Recently, remarkable progress has been made in processing algorithms for retinal prostheses, in which new discoveries about the retina's working principles are combined with state-of-the-art computer vision models. Approach. We investigated the related research on artificial intelligence techniques for retinal prostheses. The processing algorithms in these studies can be attributed to three types: computer vision-related methods, biophysical models, and deep learning models. Main results. In this review, we first illustrate the structure and function of the normal and degenerated retina, then demonstrate the vision rehabilitation mechanisms of three representative retinal prostheses. We also summarize the computational frameworks abstracted from the normal retina, as well as the development and features of the three types of processing algorithms. Finally, we analyze the bottlenecks in existing algorithms and present our prospects for future directions to improve the restoration effect. Significance. This review systematically summarizes existing processing models for predicting the response of the retina to external stimuli. Moreover, the suggestions for future directions may inspire researchers in this field to design better algorithms for retinal prostheses.
Affiliation(s)
- Chuanqing Wang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Chaoming Fang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Yong Zou
- Beijing Institute of Radiation Medicine, Beijing, People's Republic of China
- Jie Yang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Mohamad Sawan
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
12
Freedland J, Rieke F. Systematic reduction of the dimensionality of natural scenes allows accurate predictions of retinal ganglion cell spike outputs. Proc Natl Acad Sci U S A 2022; 119:e2121744119. PMID: 36343230; PMCID: PMC9674269; DOI: 10.1073/pnas.2121744119.
Abstract
The mammalian retina engages a broad array of linear and nonlinear circuit mechanisms to convert natural scenes into retinal ganglion cell (RGC) spike outputs. Although many individual integration mechanisms are well understood, we know less about how multiple mechanisms interact to encode the complex spatial features present in natural inputs. Here, we identified key spatial features in natural scenes that shape encoding by primate parasol RGCs. Our approach identified simplifications in the spatial structure of natural scenes that minimally altered RGC spike responses. We observed that reducing natural movies into 16 linearly integrated regions described ∼80% of the structure of parasol RGC spike responses; this performance depended on the number of regions but not their precise spatial locations. We used simplified stimuli to design high-dimensional metamers that recapitulated responses to naturalistic movies. Finally, we modeled the retinal computations that convert flashed natural images into one-dimensional spike counts.
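The reduction described above, replacing a natural scene with a small number of linearly integrated regions, can be sketched as follows. The 4 × 4 grid partition, the toy receptive-field weights, and the rectifying output below are illustrative choices, not the fitted parasol RGC model or region layout from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def reduce_to_regions(frame, labels, k):
    """Replace each of k labelled regions with its mean intensity,
    yielding a piecewise-constant simplification of the image."""
    means = np.array([frame[labels == i].mean() for i in range(k)])
    reduced = np.empty_like(frame, dtype=float)
    for i in range(k):
        reduced[labels == i] = means[i]
    return reduced, means

# A 32x32 patch partitioned into a 4x4 grid of 16 equal regions
frame = rng.random((32, 32))
labels = (np.arange(32)[:, None] // 8) * 4 + np.arange(32)[None, :] // 8
reduced, region_means = reduce_to_regions(frame, labels, 16)

# Toy RGC readout: linear integration of region means, then rectification
weights = rng.normal(size=16)
rate = max(0.0, float(weights @ (region_means - region_means.mean())))
```

The piecewise-constant image discards within-region detail but preserves the coarse spatial structure that a linear-integration readout sees, which is the spirit of the 16-region simplification and of metamers built from it.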
Affiliation(s)
- Julian Freedland
- Molecular Engineering & Sciences Institute, University of Washington, Seattle, WA 98195
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
13
In-sensor image memorization and encoding via optical neurons for bio-stimulus domain reduction toward visual cognitive processing. Nat Commun 2022; 13:5223. PMID: 36064944; PMCID: PMC9445171; DOI: 10.1038/s41467-022-32790-3.
Abstract
As machine vision technology generates large amounts of data from sensors, it requires efficient computational systems for visual cognitive processing. Recently, in-sensor computing systems have emerged as a potential solution for reducing unnecessary data transfer and realizing fast and energy-efficient visual cognitive processing. However, they still lack the capability to process stored images directly within the sensor. Here, we demonstrate a heterogeneously integrated 1-photodiode and 1 memristor (1P-1R) crossbar for in-sensor visual cognitive processing, emulating a mammalian image encoding process to extract features from the input images. Unlike other neuromorphic vision processes, the trained weight values are applied as an input voltage to the image-saved crossbar array instead of storing the weight value in the memristors, realizing the in-sensor computing paradigm. We believe the heterogeneously integrated in-sensor computing platform provides an advanced architecture for real-time and data-intensive machine-vision applications via bio-stimulus domain reduction.

Designing in-sensor computing systems remains a challenge. Here, the authors demonstrate artificial optical neurons based on the in-sensor computing architecture that fuses sensory and computing nodes into a single platform capable of reducing data transfer time and energy for encoding and classification.
14
Li Y, Wang T, Yang Y, Dai W, Wu Y, Li L, Han C, Zhong L, Li L, Wang G, Dou F, Xing D. Cascaded normalizations for spatial integration in the primary visual cortex of primates. Cell Rep 2022; 40:111221. [PMID: 35977486 DOI: 10.1016/j.celrep.2022.111221] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2021] [Revised: 04/19/2022] [Accepted: 07/25/2022] [Indexed: 11/03/2022] Open
Abstract
Spatial integration of visual information is an important function in the brain. However, neural computation for spatial integration in the visual cortex remains unclear. In this study, we recorded laminar responses in V1 of awake monkeys driven by visual stimuli with grating patches and annuli of different sizes. We find three important response properties related to spatial integration that are significantly different between input and output layers: neurons in output layers have stronger surround suppression, smaller receptive field (RF), and higher sensitivity to grating annuli partially covering their RFs. These interlaminar differences can be explained by a descriptive model composed of two global divisions (normalization) and a local subtraction. Our results suggest suppressions with cascaded normalizations (CNs) are essential for spatial integration and laminar processing in the visual cortex. Interestingly, the features of spatial integration in convolutional neural networks, especially in lower layers, are different from our findings in V1.
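The descriptive model summarized above, two global divisions around a local subtraction, can be sketched as follows. This is a minimal reading of that structure, not the paper's fitted equations; the parameter names (sigma1, sigma2, k) and values are illustrative assumptions:

```python
def cascaded_normalization(center_drive, local_surround, pool1, pool2,
                           sigma1=1.0, sigma2=1.0, k=0.5):
    """Toy cascaded-normalization response: a center drive is divided by a first
    global suppressive pool, a local surround term is subtracted, and the result
    is divided by a second global pool."""
    stage1 = center_drive / (sigma1 + pool1)   # first global division
    stage2 = stage1 - k * local_surround       # local subtraction
    return stage2 / (sigma2 + pool2)           # second global division

# larger stimuli drive the global pools harder, producing surround suppression
small = cascaded_normalization(center_drive=4.0, local_surround=0.5, pool1=1.0, pool2=1.0)
large = cascaded_normalization(center_drive=4.0, local_surround=0.5, pool1=3.0, pool2=3.0)
```

With both pools engaged, the response to the large stimulus is suppressed relative to the small one, the qualitative behavior the cascade is meant to capture.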
Affiliation(s)
- Yang Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Tian Wang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China; College of Life Sciences, Beijing Normal University, Beijing 100875, China
- Yi Yang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Weifeng Dai
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Yujie Wu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Lianfeng Li
- China Academy of Launch Vehicle Technology, Beijing 100076, China
- Chuanliang Han
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Lvyan Zhong
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Liang Li
- Beijing Institute of Basic Medical Sciences, Beijing 100005, China
- Gang Wang
- Beijing Institute of Basic Medical Sciences, Beijing 100005, China
- Fei Dou
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China; College of Life Sciences, Beijing Normal University, Beijing 100875, China
- Dajun Xing
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China.
15
Abstract
An ultimate goal in retina science is to understand how the neural circuit of the retina processes natural visual scenes. Yet most studies in laboratories have long been performed with simple, artificial visual stimuli such as full-field illumination, spots of light, or gratings. The underlying assumption is that the features of the retina thus identified carry over to the more complex scenario of natural scenes. As the application of corresponding natural settings is becoming more commonplace in experimental investigations, this assumption is being put to the test and opportunities arise to discover processing features that are triggered by specific aspects of natural scenes. Here, we review how natural stimuli have been used to probe, refine, and complement knowledge accumulated under simplified stimuli, and we discuss challenges and opportunities along the way toward a comprehensive understanding of the encoding of natural scenes. Expected final online publication date for the Annual Review of Vision Science, Volume 8 is September 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Affiliation(s)
- Dimokratis Karamanlis
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany; Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany; International Max Planck Research School for Neurosciences, Göttingen, Germany
- Helene Marianne Schreyer
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany; Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Tim Gollisch
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany; Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany; Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany
16
Abstract
Retinal circuits transform the pixel representation of photoreceptors into the feature representations of ganglion cells, whose axons transmit these representations to the brain. Functional, morphological, and transcriptomic surveys have identified more than 40 retinal ganglion cell (RGC) types in mice. RGCs extract features of varying complexity; some simply signal local differences in brightness (i.e., luminance contrast), whereas others detect specific motion trajectories. To understand the retina, we need to know how retinal circuits give rise to the diverse RGC feature representations. A catalog of the RGC feature set, in turn, is fundamental to understanding visual processing in the brain. Anterograde tracing indicates that RGCs innervate more than 50 areas in the mouse brain. Current maps connecting RGC types to brain areas are rudimentary, as is our understanding of how retinal signals are transformed downstream to guide behavior. In this article, I review the feature selectivities of mouse RGCs, how they arise, and how they are utilized downstream. Not only is knowledge of the behavioral purpose of RGC signals critical for understanding the retinal contributions to vision; it can also guide us to the most relevant areas of visual feature space. Expected final online publication date for the Annual Review of Vision Science, Volume 8 is September 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Affiliation(s)
- Daniel Kerschensteiner
- John F. Hardesty, MD, Department of Ophthalmology and Visual Sciences; Department of Neuroscience; Department of Biomedical Engineering; and Hope Center for Neurological Disorders, Washington University School of Medicine, Saint Louis, Missouri, USA;
17
Liu JK, Karamanlis D, Gollisch T. Simple model for encoding natural images by retinal ganglion cells with nonlinear spatial integration. PLoS Comput Biol 2022; 18:e1009925. [PMID: 35259159 PMCID: PMC8932571 DOI: 10.1371/journal.pcbi.1009925] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2021] [Revised: 03/18/2022] [Accepted: 02/14/2022] [Indexed: 01/05/2023] Open
Abstract
A central goal in sensory neuroscience is to understand the neuronal signal processing involved in the encoding of natural stimuli. A critical step towards this goal is the development of successful computational encoding models. For ganglion cells in the vertebrate retina, the development of satisfactory models for responses to natural visual scenes is an ongoing challenge. Standard models typically apply linear integration of visual stimuli over space, yet many ganglion cells are known to show nonlinear spatial integration, in particular when stimulated with contrast-reversing gratings. We here study the influence of spatial nonlinearities in the encoding of natural images by ganglion cells, using multielectrode-array recordings from isolated salamander and mouse retinas. We assess how responses to natural images depend on first- and second-order statistics of spatial patterns inside the receptive field. This leads us to a simple extension of current standard ganglion cell models. We show that taking not only the weighted average of light intensity inside the receptive field into account but also its variance over space can partly account for nonlinear integration and substantially improve response predictions of responses to novel images. For salamander ganglion cells, we find that response predictions for cell classes with large receptive fields profit most from including spatial contrast information. Finally, we demonstrate how this model framework can be used to assess the spatial scale of nonlinear integration. Our results underscore that nonlinear spatial stimulus integration translates to stimulation with natural images. Furthermore, the introduced model framework provides a simple, yet powerful extension of standard models and may serve as a benchmark for the development of more detailed models of the nonlinear structure of receptive fields. 
For understanding how sensory systems operate in the natural environment, an important goal is to develop models that capture neuronal responses to natural stimuli. For retinal ganglion cells, which connect the eye to the brain, current standard models often fail to capture responses to natural visual scenes. This shortcoming is at least partly rooted in the fact that ganglion cells may combine visual signals over space in a nonlinear fashion. We here show that a simple model, which not only considers the average light intensity inside a cell’s receptive field but also the variance of light intensity over space, can partly account for these nonlinearities and thereby improve current standard models. This provides an easy-to-obtain benchmark for modeling ganglion cell responses to natural images.
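The proposed extension, combining the weighted mean of light intensity inside the receptive field with its variance over space, can be sketched as follows. The softplus output nonlinearity and the weight beta are illustrative choices, not the paper's fitted values:

```python
import numpy as np

def predict_response(image_patch, rf_weights, beta=0.5):
    """Toy mean + spatial-variance model of a ganglion cell's firing drive."""
    w = rf_weights / rf_weights.sum()                # normalized receptive-field weights
    mean_intensity = float(np.sum(w * image_patch))  # standard linear term
    spatial_variance = float(np.sum(w * (image_patch - mean_intensity) ** 2))
    drive = mean_intensity + beta * spatial_variance
    return float(np.log1p(np.exp(drive)))            # softplus output nonlinearity

rf = np.ones((2, 2))
uniform_gray = np.zeros((2, 2))                   # zero-contrast stimulus
grating = np.array([[1.0, -1.0], [1.0, -1.0]])    # zero-mean contrast-reversing grating
r_gray = predict_response(uniform_gray, rf)
r_grating = predict_response(grating, rf)
```

A zero-mean grating drives only the variance term, so this model responds where a purely linear receptive-field model would not, which is the signature of nonlinear spatial integration the paper targets.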
Affiliation(s)
- Jian K. Liu
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- School of Computing, University of Leeds, Leeds, United Kingdom
- Dimokratis Karamanlis
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- International Max Planck Research School for Neurosciences, Göttingen, Germany
- Tim Gollisch
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Cluster of Excellence “Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells” (MBExC), University of Göttingen, Göttingen, Germany
18
Bowren J, Sanchez-Giraldo L, Schwartz O. Inference via sparse coding in a hierarchical vision model. J Vis 2022; 22:19. [PMID: 35212744 PMCID: PMC8883180 DOI: 10.1167/jov.22.2.19] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
Sparse coding has been incorporated into models of the visual cortex for its computational advantages and connection to biology. But how the level of sparsity contributes to performance on visual tasks is not well understood. In this work, sparse coding was integrated into an existing hierarchical V2 model (Hosoya & Hyvärinen, 2015) by replacing its independent component analysis (ICA) with an explicit sparse coding step in which the degree of sparsity can be controlled. After training, the sparse coding basis functions with a higher degree of sparsity resembled qualitatively different structures, such as curves and corners. The contributions of the models were assessed with image classification tasks, specifically tasks associated with mid-level vision, including figure–ground classification, texture classification, and angle prediction between two line stimuli. In addition, the models were compared against a texture sensitivity measure that has been reported in V2 (Freeman et al., 2013) and assessed on a deleted-region inference task. The results show that although sparse coding performed worse than ICA at classifying images, only sparse coding, at increased degrees of sparsity, was able to match the texture sensitivity level of V2 and infer deleted image regions. Greater degrees of sparsity allowed for inference over larger deleted image regions. The mechanism that allows for this inference capability in sparse coding is described in this article.
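Sparse coding with an explicitly controllable sparsity level can be sketched with ISTA (iterative soft thresholding), a standard inference algorithm in which the L1 penalty lam plays the role of the sparsity control. The dictionary and data here are toy stand-ins, not the trained V2-model basis:

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise shrinkage operator for the L1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_sparse_code(D, x, lam, n_iter=200):
    """Minimize 0.5*||x - D a||^2 + lam*||a||_1 over codes a via ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = soft_threshold(a - grad / L, lam / L)
    return a

D = np.eye(3)                 # trivial orthonormal dictionary for illustration
x = np.array([1.0, 0.2, 0.0])
code = ista_sparse_code(D, x, lam=0.5)
```

Raising lam drives more coefficients to exactly zero, which is how the degree of sparsity is dialed up or down.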
Affiliation(s)
- Joshua Bowren
- Department of Computer Science, University of Miami, Coral Gables, FL, USA
- Luis Sanchez-Giraldo
- Department of Electrical and Computer Engineering, University of Kentucky, Lexington, KY, USA
- Odelia Schwartz
- Department of Computer Science, University of Miami, Coral Gables, FL, USA
19
Zhou B, Li Z, Kim S, Lafferty J, Clark DA. Shallow neural networks trained to detect collisions recover features of visual loom-selective neurons. eLife 2022; 11:72067. [PMID: 35023828 PMCID: PMC8849349 DOI: 10.7554/elife.72067] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2021] [Accepted: 01/11/2022] [Indexed: 11/13/2022] Open
Abstract
Animals have evolved sophisticated visual circuits to solve a vital inference problem: detecting whether or not a visual signal corresponds to an object on a collision course. Such events are detected by specific circuits sensitive to visual looming, or objects increasing in size. Various computational models have been developed for these circuits, but how the collision-detection inference problem itself shapes the computational structures of these circuits remains unknown. Here, inspired by the distinctive structures of LPLC2 neurons in the visual system of Drosophila, we build anatomically constrained shallow neural network models and train them to identify visual signals that correspond to impending collisions. Surprisingly, the optimization arrives at two distinct, opposing solutions, only one of which matches the actual dendritic weighting of LPLC2 neurons. Both solutions can solve the inference problem with high accuracy when the population size is large enough. The LPLC2-like solution reproduces experimentally observed LPLC2 neuron responses for many stimuli and reproduces canonical tuning of loom-sensitive neurons, even though the models are never trained on neural data. Thus, LPLC2 neuron properties and tuning are predicted by optimizing an anatomically constrained neural network to detect impending collisions. More generally, these results illustrate how optimizing inference tasks that are important for an animal's perceptual goals can reveal and explain computational properties of specific sensory neurons.
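One way to picture an anatomically constrained loom detector in the spirit of LPLC2 is a unit that pools rectified outward motion in four quadrants around its receptive-field center and combines them with a soft AND. This is a heavily hedged caricature, not the paper's trained network; the weighting and combination rule are assumptions:

```python
import numpy as np

def loom_unit(outward_flow_by_quadrant, weights=None):
    """outward_flow_by_quadrant: (4,) outward motion energy in the four quadrants
    around the receptive-field center (negative values = inward motion).
    Returns a scalar loom signal; the product acts as a soft AND across quadrants,
    so the unit fires only when motion is outward everywhere at once."""
    w = np.ones(4) if weights is None else weights
    rectified = np.maximum(outward_flow_by_quadrant * w, 0.0)  # excited by outward motion only
    return float(np.prod(rectified)) ** 0.25

loom_signal = loom_unit(np.array([1.0, 1.0, 1.0, 1.0]))    # expansion: all quadrants outward
translation = loom_unit(np.array([1.0, -1.0, 0.0, 0.0]))   # translation: mixed directions
```

A looming object drives all four quadrants simultaneously, whereas a translating object leaves at least one quadrant silent and the product collapses to zero.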
Affiliation(s)
- Baohua Zhou
- Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, United States
- Zifan Li
- Department of Statistics and Data Science, Yale University, New Haven, United States
- Sunnie Kim
- Department of Statistics and Data Science, Yale University, New Haven, United States
- John Lafferty
- Department of Statistics and Data Science, Yale University, New Haven, United States
- Damon A Clark
- Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, United States
20
Sun ED, Dekel R. ImageNet-trained deep neural networks exhibit illusion-like response to the Scintillating grid. J Vis 2021; 21:15. [PMID: 34677575 PMCID: PMC8543405 DOI: 10.1167/jov.21.11.15] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Deep neural network (DNN) models for computer vision are capable of human-level object recognition. Consequently, similarities between DNN and human vision are of interest. Here, we characterize DNN representations of Scintillating grid visual illusion images in which white disks are perceived to be partially black. Specifically, we use VGG-19 and ResNet-101 DNN models that were trained for image classification and consider the representational dissimilarity (L1 distance in the penultimate layer) between pairs of images: one with white Scintillating grid disks and the other with disks of decreasing luminance levels. Results showed a nonmonotonic relation, such that decreasing disk luminance led to an increase and subsequently a decrease in representational dissimilarity. That is, the Scintillating grid image with white disks was closer, in terms of the representation, to images with black disks than images with gray disks. In control nonillusion images, such nonmonotonicity was rare. These results suggest that nonmonotonicity in a deep computational representation is a potential test for illusion-like response geometry in DNN models.
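The dissimilarity measure used above is simply the L1 distance between penultimate-layer feature vectors. A sketch with a fixed toy feature map standing in for the VGG-19/ResNet-101 features; note that for such a linear-ReLU toy the measure is monotonic in luminance, and the paper's point is precisely that trained DNNs on illusion images are not:

```python
import numpy as np

def l1_dissimilarity(feat_a, feat_b):
    """Representational dissimilarity: L1 distance between feature vectors."""
    return float(np.abs(feat_a - feat_b).sum())

# deterministic stand-in "penultimate layer" weights (not real network weights)
W = np.linspace(-1.0, 1.0, 128).reshape(16, 8)

def features(image_vec):
    return np.maximum(W @ image_vec, 0.0)   # ReLU feature map

img_white = np.ones(8)        # disks at full luminance
img_gray = 0.5 * np.ones(8)   # disks at half luminance
d = l1_dissimilarity(features(img_white), features(img_gray))
```

Sweeping the disk luminance and plotting this distance against it is the test for illusion-like nonmonotonicity described in the abstract.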
Affiliation(s)
- Eric D Sun
- Mather House, Harvard University, Cambridge, MA, USA
- Ron Dekel
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
21
Turner MH, Clandinin TR. Neuroscience: Convergence of biological and artificial networks. Curr Biol 2021; 31:R1079-R1081. [PMID: 34582814 DOI: 10.1016/j.cub.2021.07.051] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
A new study shows that an artificial neural network trained to predict visual motion reproduces key properties of motion detecting circuits in the fruit fly.
Affiliation(s)
- Maxwell H Turner
- Department of Neurobiology, Stanford University, Stanford, CA 94103, USA
- Thomas R Clandinin
- Department of Neurobiology, Stanford University, Stanford, CA 94103, USA.
22
Lindsay GW. Convolutional Neural Networks as a Model of the Visual System: Past, Present, and Future. J Cogn Neurosci 2021; 33:2017-2031. [DOI: 10.1162/jocn_a_01544] [Citation(s) in RCA: 96] [Impact Index Per Article: 32.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Convolutional neural networks (CNNs) were inspired by early findings in the study of biological vision. They have since become successful tools in computer vision and state-of-the-art models of both neural activity and behavior on visual tasks. This review highlights what, in the context of CNNs, it means to be a good model in computational neuroscience and the various ways models can provide insight. Specifically, it covers the origins of CNNs and the methods by which we validate them as models of biological vision. It then goes on to elaborate on what we can learn about biological vision by understanding and experimenting on CNNs and discusses emerging opportunities for the use of CNNs in vision research beyond basic object recognition.
23
Qiu Y, Zhao Z, Klindt D, Kautzky M, Szatko KP, Schaeffel F, Rifai K, Franke K, Busse L, Euler T. Natural environment statistics in the upper and lower visual field are reflected in mouse retinal specializations. Curr Biol 2021; 31:3233-3247.e6. [PMID: 34107304 DOI: 10.1016/j.cub.2021.05.017] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2021] [Revised: 04/06/2021] [Accepted: 05/11/2021] [Indexed: 12/29/2022]
Abstract
Pressures for survival make sensory circuits adapted to a species' natural habitat and its behavioral challenges. Thus, to advance our understanding of the visual system, it is essential to consider an animal's specific visual environment by capturing natural scenes, characterizing their statistical regularities, and using them to probe visual computations. Mice, a prominent visual system model, have salient visual specializations, being dichromatic with enhanced sensitivity to green and UV in the dorsal and ventral retina, respectively. However, the characteristics of their visual environment that likely have driven these adaptations are rarely considered. Here, we built a UV-green-sensitive camera to record footage from mouse habitats. This footage is publicly available as a resource for mouse vision research. We found chromatic contrast to greatly diverge in the upper, but not the lower, visual field. Moreover, training a convolutional autoencoder on upper, but not lower, visual field scenes was sufficient for the emergence of color-opponent filters, suggesting that this environmental difference might have driven superior chromatic opponency in the ventral mouse retina, supporting color discrimination in the upper visual field. Furthermore, the upper visual field was biased toward dark UV contrasts, paralleled by more light-offset-sensitive ganglion cells in the ventral retina. Finally, footage recorded at twilight suggests that UV promotes aerial predator detection. Our findings support that natural scene statistics shaped early visual processing in evolution.
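One scene statistic in the spirit of this analysis is a green/UV opponent contrast compared between the upper and lower halves of a two-channel image. The exact contrast definition used in the paper may differ; this RMS opponent contrast is an illustrative stand-in:

```python
import numpy as np

def chromatic_contrast(green, uv):
    """RMS contrast of the green-minus-UV opponent signal."""
    opponent = green - uv
    return float(np.sqrt(np.mean((opponent - opponent.mean()) ** 2)))

def upper_lower_contrast(green, uv):
    """Split a two-channel scene at the horizon and measure each half."""
    h = green.shape[0] // 2
    return (chromatic_contrast(green[:h], uv[:h]),   # upper visual field
            chromatic_contrast(green[h:], uv[h:]))   # lower visual field

# toy two-channel "scene": chromatic structure above the horizon, none below
green = np.array([[1.0, 0.0], [0.5, 0.5]])
uv    = np.array([[0.0, 1.0], [0.5, 0.5]])
upper, lower = upper_lower_contrast(green, uv)
```

Applied to real footage, the same split quantifies the upper-versus-lower divergence in chromatic contrast that the abstract reports.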
Affiliation(s)
- Yongrong Qiu
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Graduate Training Centre of Neuroscience (GTC), International Max Planck Research School, University of Tübingen, 72076 Tübingen, Germany
- Zhijian Zhao
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany
- David Klindt
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Graduate Training Centre of Neuroscience (GTC), International Max Planck Research School, University of Tübingen, 72076 Tübingen, Germany
- Magdalena Kautzky
- Division of Neurobiology, Faculty of Biology, LMU Munich, 82152 Planegg-Martinsried, Germany; Graduate School of Systemic Neurosciences (GSN), LMU Munich, 82152 Planegg-Martinsried, Germany
- Klaudia P Szatko
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Graduate Training Centre of Neuroscience (GTC), International Max Planck Research School, University of Tübingen, 72076 Tübingen, Germany; Bernstein Centre for Computational Neuroscience, 72076 Tübingen, Germany
- Frank Schaeffel
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany
- Katharina Rifai
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Carl Zeiss Vision International GmbH, 73430 Aalen, Germany
- Katrin Franke
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Bernstein Centre for Computational Neuroscience, 72076 Tübingen, Germany
- Laura Busse
- Division of Neurobiology, Faculty of Biology, LMU Munich, 82152 Planegg-Martinsried, Germany; Bernstein Centre for Computational Neuroscience, 82152 Planegg-Martinsried, Germany.
- Thomas Euler
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Bernstein Centre for Computational Neuroscience, 72076 Tübingen, Germany.
24
Johnson KP, Fitzpatrick MJ, Zhao L, Wang B, McCracken S, Williams PR, Kerschensteiner D. Cell-type-specific binocular vision guides predation in mice. Neuron 2021; 109:1527-1539.e4. [PMID: 33784498 DOI: 10.1016/j.neuron.2021.03.010] [Citation(s) in RCA: 41] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2020] [Revised: 02/09/2021] [Accepted: 03/05/2021] [Indexed: 12/20/2022]
Abstract
Predators use vision to hunt, and hunting success is one of evolution's main selection pressures. However, how viewing strategies and visual systems are adapted to predation is unclear. Tracking predator-prey interactions of mice and crickets in 3D, we find that mice trace crickets with their binocular visual fields and that monocular mice are poor hunters. Mammalian binocular vision requires ipsi- and contralateral projections of retinal ganglion cells (RGCs) to the brain. Large-scale single-cell recordings and morphological reconstructions reveal that only a small subset (9 of 40+) of RGC types in the ventrotemporal mouse retina innervate ipsilateral brain areas (ipsi-RGCs). Selective ablation of ipsi-RGCs (<2% of RGCs) in the adult retina drastically reduces the hunting success of mice. Stimuli based on ethological observations indicate that five ipsi-RGC types reliably signal prey. Thus, viewing strategies align with a spatially restricted and cell-type-specific set of ipsi-RGCs that supports binocular vision to guide predation.
Affiliation(s)
- Keith P Johnson
- John F. Hardesty, MD Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, St. Louis, MO 63110, USA; Graduate Program in Neuroscience, Washington University School of Medicine, St. Louis, MO 63110, USA
- Michael J Fitzpatrick
- John F. Hardesty, MD Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, St. Louis, MO 63110, USA; Graduate Program in Neuroscience, Washington University School of Medicine, St. Louis, MO 63110, USA; Medical Scientist Training Program, Washington University School of Medicine, St. Louis, MO 63110, USA
- Lei Zhao
- John F. Hardesty, MD Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, St. Louis, MO 63110, USA
- Bing Wang
- John F. Hardesty, MD Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, St. Louis, MO 63110, USA
- Sean McCracken
- John F. Hardesty, MD Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, St. Louis, MO 63110, USA
- Philip R Williams
- John F. Hardesty, MD Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, St. Louis, MO 63110, USA; Department of Neuroscience, Washington University School of Medicine, St. Louis, MO 63110, USA; Hope Center for Neurological Disorders, Washington University School of Medicine, St. Louis, MO 63110, USA
- Daniel Kerschensteiner
- John F. Hardesty, MD Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, St. Louis, MO 63110, USA; Department of Neuroscience, Washington University School of Medicine, St. Louis, MO 63110, USA; Hope Center for Neurological Disorders, Washington University School of Medicine, St. Louis, MO 63110, USA; Department of Biomedical Engineering, Washington University School of Medicine, St. Louis, MO 63110, USA.
25
Bigge R, Pfefferle M, Pfeiffer K, Stöckl A. Natural image statistics in the dorsal and ventral visual field match a switch in flight behaviour of a hawkmoth. Curr Biol 2021; 31:R280-R281. [PMID: 33756136 DOI: 10.1016/j.cub.2021.02.022] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
Many animals use visual cues to navigate their environment. To encode the large input ranges of natural signals optimally, their sensory systems have adapted to the stimulus statistics experienced in their natural habitats1. A striking example, shared across animal phyla, is the retinal tuning to the relative abundance of blue light from the sky, and green light from the ground, evident in the frequency of each photoreceptor type in the two retinal hemispheres2. By adhering only to specific regions of the visual field that contain the relevant information, as for the high-acuity dorsal regions in the eyes of male flies chasing females3, the neural investment can be further reduced. Regionalisation can even lead to activation of the appropriate visual pathway by target location, rather than by stimulus features. This has been shown in fruit flies, which increase their landing attempts when an expanding disc is presented in their frontal visual field, while lateral presentation increases obstacle avoidance responses4. We here report a similar switch in behavioural responses for extended visual scenes. Using a free-flight paradigm, we show that the hummingbird hawkmoth (Macroglossum stellatarum) responds with flight-control adjustments to translational optic-flow cues exclusively in their ventral and lateral visual fields, while identical stimuli presented dorsally elicit a novel directional flight response. This response split is predicted by our quantitative imaging data from natural visual scenes in a variety of habitats, which demonstrate higher magnitudes of translational optic flow in the ventral hemisphere, and the opposite distribution for contrast edges containing directional information.
Affiliation(s)
- Ronja Bigge
- Chair of Zoology 2, Würzburg University, Am Hubland, 97074 Würzburg, Germany
- Keram Pfeiffer
- Chair of Zoology 2, Würzburg University, Am Hubland, 97074 Würzburg, Germany
- Anna Stöckl
- Chair of Zoology 2, Würzburg University, Am Hubland, 97074 Würzburg, Germany.
26
Huang T, Zhen Z, Liu J. Semantic Relatedness Emerges in Deep Convolutional Neural Networks Designed for Object Recognition. Front Comput Neurosci 2021; 15:625804. [PMID: 33692678 PMCID: PMC7938322 DOI: 10.3389/fncom.2021.625804] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2020] [Accepted: 02/01/2021] [Indexed: 11/22/2022] Open
Abstract
Humans not only effortlessly recognize objects but also characterize object categories into semantic concepts with a nested hierarchical structure. One dominant view is that top-down conceptual guidance is necessary to form such a hierarchy. Here we challenged this idea by examining whether deep convolutional neural networks (DCNNs) could learn relations among objects purely from bottom-up perceptual experience of objects through training for object categorization. Specifically, we explored representational similarity among objects in a typical DCNN (e.g., AlexNet) and found that representations of object categories were organized in a hierarchical fashion, suggesting that the relatedness among objects emerged automatically when learning to recognize them. Critically, the emerged relatedness of objects in the DCNN was highly similar to WordNet in humans, implying that top-down conceptual guidance may not be a prerequisite for humans to learn the relatedness among objects. In addition, the developmental trajectory of the relatedness among objects during training revealed that the hierarchical structure was constructed in a coarse-to-fine fashion and evolved into maturity before the establishment of object recognition ability. Finally, the fineness of the relatedness was greatly shaped by the demands of the tasks that the DCNN performed: the higher the superordinate level of object classification, the coarser the hierarchical structure of relatedness that emerged. Taken together, our study provides the first empirical evidence that semantic relatedness of objects emerges as a by-product of object recognition in DCNNs, implying that humans may acquire semantic knowledge about objects without explicit top-down conceptual guidance.
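The representational-similarity analysis underlying such results can be sketched with synthetic activations standing in for AlexNet features; the class names and activation values below are invented for illustration only:

```python
import numpy as np

def similarity_matrix(acts):
    """acts: (n_classes, n_units) mean activation per class.
    Returns the Pearson-correlation representational similarity matrix."""
    z = (acts - acts.mean(1, keepdims=True)) / acts.std(1, keepdims=True)
    return (z @ z.T) / acts.shape[1]

# two "animals" share one feature axis, two "vehicles" share another
acts = np.array([[1.0, 0.9, 0.0, 0.1],   # dog
                 [0.9, 1.0, 0.1, 0.0],   # cat
                 [0.0, 0.1, 1.0, 0.9],   # car
                 [0.1, 0.0, 0.9, 1.0]])  # truck
S = similarity_matrix(acts)
within = S[0, 1]    # dog vs cat (same superordinate category)
between = S[0, 2]   # dog vs car (different superordinate category)
```

Hierarchical clustering of such a similarity matrix is what recovers the nested, WordNet-like category tree described in the abstract.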
Affiliation(s)
- Taicheng Huang
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Zonglei Zhen
- Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, China
- Jia Liu
- Department of Psychology, Tsinghua University, Beijing, China
27
A brain-inspired network architecture for cost-efficient object recognition in shallow hierarchical neural networks. Neural Netw 2020; 134:76-85. [PMID: 33291018 DOI: 10.1016/j.neunet.2020.11.013] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2020] [Revised: 09/11/2020] [Accepted: 11/24/2020] [Indexed: 11/20/2022]
Abstract
The brain successfully performs visual object recognition with a limited number of hierarchical networks that are much shallower than the artificial deep neural networks (DNNs) that perform similar tasks. Here, we show that long-range horizontal connections (LRCs), often observed in the visual cortex of mammalian species, enable such cost-efficient visual object recognition in shallow neural networks. Using simulations of a model hierarchical network with convergent feedforward connections and LRCs, we found that the addition of LRCs to the shallow feedforward network significantly enhances the performance of networks for image classification, to a degree comparable to that of much deeper networks. We found that a combination of sparse LRCs and dense local connections dramatically increases performance per wiring cost. From network pruning with gradient-based optimization, we also confirmed that LRCs could emerge spontaneously by minimizing total connection length while maintaining performance. Ablation of the emerged LRCs led to a significant reduction in classification performance, implying that these LRCs are crucial for image classification. Taken together, our findings suggest a brain-inspired strategy for constructing a cost-efficient network architecture that implements parsimonious object recognition under physical constraints such as shallow hierarchical depth.
28
Shah NP, Chichilnisky EJ. Computational challenges and opportunities for a bi-directional artificial retina. J Neural Eng 2020; 17:055002. [PMID: 33089827 DOI: 10.1088/1741-2552/aba8b1] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
A future artificial retina that can restore high acuity vision in blind people will rely on the capability to both read (observe) and write (control) the spiking activity of neurons using an adaptive, bi-directional and high-resolution device. Although current research is focused on overcoming the technical challenges of building and implanting such a device, exploiting its capabilities to achieve more acute visual perception will also require substantial computational advances. Using high-density large-scale recording and stimulation in the primate retina with an ex vivo multi-electrode array lab prototype, we frame several of the major computational problems, and describe current progress and future opportunities in solving them. First, we identify cell types and locations from spontaneous activity in the blind retina, and then efficiently estimate their visual response properties by using a low-dimensional manifold of inter-retina variability learned from a large experimental dataset. Second, we estimate retinal responses to a large collection of relevant electrical stimuli by passing current patterns through an electrode array, spike sorting the resulting recordings and using the results to develop a model of evoked responses. Third, we reproduce the desired responses for a given visual target by temporally dithering a diverse collection of electrical stimuli within the integration time of the visual system. Together, these novel approaches may substantially enhance artificial vision in a next-generation device.
Affiliation(s)
- Nishal P Shah
- Department of Electrical Engineering, Stanford University, Stanford, CA, United States of America
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA, United States of America
- Department of Neurosurgery, Stanford University, Stanford, CA, United States of America
- Author to whom any correspondence should be addressed
29
Soto F, Hsiang JC, Rajagopal R, Piggott K, Harocopos GJ, Couch SM, Custer P, Morgan JL, Kerschensteiner D. Efficient Coding by Midget and Parasol Ganglion Cells in the Human Retina. Neuron 2020; 107:656-666.e5. [PMID: 32533915 DOI: 10.1016/j.neuron.2020.05.030] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2020] [Revised: 04/24/2020] [Accepted: 05/20/2020] [Indexed: 01/03/2023]
Abstract
In humans, midget and parasol ganglion cells account for most of the input from the eyes to the brain. Yet, how they encode visual information is unknown. Here, we perform large-scale multi-electrode array recordings from retinas of treatment-naive patients who underwent enucleation surgery for choroidal malignant melanomas. We identify robust differences in the function of midget and parasol ganglion cells, consistent asymmetries between their ON and OFF types (that signal light increments and decrements, respectively) and divergence in the function of human versus non-human primate retinas. Our computational analyses reveal that the receptive fields of human midget and parasol ganglion cells divide naturalistic movies into adjacent spatiotemporal frequency domains with equal stimulus power, while the asymmetric response functions of their ON and OFF types simultaneously maximize stimulus coverage and information transmission and minimize metabolic cost. Thus, midget and parasol ganglion cells in the human retina efficiently encode our visual environment.
Affiliation(s)
- Florentina Soto
- John F. Hardesty, MD, Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Saint Louis, MO 63110, USA
- Jen-Chun Hsiang
- John F. Hardesty, MD, Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Saint Louis, MO 63110, USA; Graduate Program in Neuroscience, Washington University School of Medicine, Saint Louis, MO 63110, USA
- Rithwick Rajagopal
- John F. Hardesty, MD, Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Saint Louis, MO 63110, USA
- Kisha Piggott
- John F. Hardesty, MD, Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Saint Louis, MO 63110, USA
- George J Harocopos
- John F. Hardesty, MD, Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Saint Louis, MO 63110, USA
- Steven M Couch
- John F. Hardesty, MD, Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Saint Louis, MO 63110, USA
- Philip Custer
- John F. Hardesty, MD, Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Saint Louis, MO 63110, USA
- Josh L Morgan
- John F. Hardesty, MD, Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Saint Louis, MO 63110, USA
- Daniel Kerschensteiner
- John F. Hardesty, MD, Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Saint Louis, MO 63110, USA; Department of Neuroscience, Washington University School of Medicine, Saint Louis, MO 63110, USA; Department of Biomedical Engineering, Washington University School of Medicine, Saint Louis, MO 63110, USA; Hope Center for Neurological Disorders, Washington University School of Medicine, Saint Louis, MO 63110, USA
30
Baden T, Euler T, Berens P. Understanding the retinal basis of vision across species. Nat Rev Neurosci 2019; 21:5-20. [PMID: 31780820 DOI: 10.1038/s41583-019-0242-1] [Citation(s) in RCA: 143] [Impact Index Per Article: 28.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/22/2019] [Indexed: 12/12/2022]
Abstract
The vertebrate retina first evolved some 500 million years ago in ancestral marine chordates. Since then, the eyes of different species have been tuned to best support their unique visuoecological lifestyles. Visual specializations in eye designs, large-scale inhomogeneities across the retinal surface and local circuit motifs mean that all species' retinas are unique. Computational theories, such as the efficient coding hypothesis, have come a long way towards an explanation of the basic features of retinal organization and function; however, they cannot explain the full extent of retinal diversity within and across species. To build a truly general understanding of vertebrate vision and the retina's computational purpose, it is therefore important to more quantitatively relate different species' retinal functions to their specific natural environments and behavioural requirements. Ultimately, the goal of such efforts should be to build up to a more general theory of vision.
Affiliation(s)
- Tom Baden
- Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton, UK
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Thomas Euler
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Philipp Berens
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Institute for Bioinformatics and Medical Informatics, University of Tübingen, Tübingen, Germany
- Bernstein Centre for Computational Neuroscience, University of Tübingen, Tübingen, Germany
31
Zhou J, Benson NC, Kay K, Winawer J. Predicting neuronal dynamics with a delayed gain control model. PLoS Comput Biol 2019; 15:e1007484. [PMID: 31747389 PMCID: PMC6892546 DOI: 10.1371/journal.pcbi.1007484] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2018] [Revised: 12/04/2019] [Accepted: 10/10/2019] [Indexed: 11/19/2022] Open
Abstract
Visual neurons respond to static images with specific dynamics: neuronal responses sum sub-additively over time, reduce in amplitude with repeated or sustained stimuli (neuronal adaptation), and are slower at low stimulus contrast. Here, we propose a simple model that predicts these seemingly disparate response patterns observed in a diverse set of measurements: intracranial electrodes in patients, fMRI, and macaque single-unit spiking. The model takes the time-varying contrast time course of a stimulus as input and produces predicted neuronal dynamics as output. Model computation consists of linear filtering, expansive exponentiation, and divisive gain control. The gain-control signal is related to, but slower than, the linear signal, and this delay is critical in giving rise to predictions that match the observed dynamics. Our model is simpler than previously proposed related models, and fitting the model to intracranial EEG data uncovers two regularities across human visual field maps: estimated linear filters (temporal receptive fields) systematically differ across and within visual field maps, and later areas exhibit more rapid and substantial gain control. The model further generalizes to account for the dynamics of contrast-dependent spike rates in macaque V1 and the amplitudes of the fMRI BOLD signal in human V1.
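The three-stage pipeline this abstract describes (linear temporal filtering, expansive exponentiation, divisive normalization by a delayed gain signal) can be sketched in a few lines. The filter shapes, parameter names, and values below are illustrative assumptions, not the authors' fitted model:

```python
import numpy as np

def delayed_gain_control(stim, dt=0.001, tau=0.05, tau_gain=0.2, n=2.0, sigma=0.1):
    """Sketch of a delayed divisive gain-control model.

    stim: 1-D contrast time course, sampled at dt seconds.
    tau / tau_gain: time constants of the (assumed) linear and gain filters.
    n, sigma: exponent and semi-saturation constant of the normalization stage.
    """
    t = np.arange(0, 0.5, dt)
    # Linear stage: convolve the stimulus with a unit-area temporal filter.
    irf = (t / tau) * np.exp(-t / tau)
    irf /= irf.sum()
    linear = np.convolve(stim, irf)[: len(stim)]
    # Gain signal: a slower (delayed) low-pass copy of the linear response.
    irf_gain = np.exp(-t / tau_gain)
    irf_gain /= irf_gain.sum()
    gain = np.convolve(linear, irf_gain)[: len(stim)]
    # Expansive nonlinearity with divisive normalization by the delayed gain.
    return linear**n / (sigma**n + gain**n)
```

Because the gain signal lags the linear signal, a sustained stimulus produces a transient overshoot followed by adaptation to a lower steady state, qualitatively matching the dynamics described above.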
Affiliation(s)
- Jingyang Zhou
- Department of Psychology, New York University, New York City, New York, United States of America
- Noah C. Benson
- Department of Psychology, New York University, New York City, New York, United States of America
- Kendrick Kay
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Twin Cities, Minnesota, United States of America
- Jonathan Winawer
- Department of Psychology, New York University, New York City, New York, United States of America
- Center for Neural Science, New York University, New York City, New York, United States of America
- Stanford Human Intracranial Cognitive Electrophysiology Program (SHICEP), Palo Alto, California, United States of America
32
Beyeler M, Rounds EL, Carlson KD, Dutt N, Krichmar JL. Neural correlates of sparse coding and dimensionality reduction. PLoS Comput Biol 2019; 15:e1006908. [PMID: 31246948 PMCID: PMC6597036 DOI: 10.1371/journal.pcbi.1006908] [Citation(s) in RCA: 39] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/05/2023] Open
Abstract
Supported by recent computational studies, there is increasing evidence that a wide range of neuronal responses can be understood as an emergent property of nonnegative sparse coding (NSC), an efficient population coding scheme based on dimensionality reduction and sparsity constraints. We review evidence that NSC might be employed by sensory areas to efficiently encode external stimulus spaces, by some associative areas to conjunctively represent multiple behaviorally relevant variables, and possibly by the basal ganglia to coordinate movement. In addition, NSC might provide a useful theoretical framework under which to understand the often complex and nonintuitive response properties of neurons in other brain areas. Although NSC might not apply to all brain areas (for example, motor or executive function areas), the success of NSC-based models, especially in sensory areas, warrants further investigation of neural correlates in other regions.
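Nonnegative sparse coding of the kind reviewed here combines a nonnegative matrix factorization with a sparsity penalty on the encodings. A minimal sketch, assuming Lee-Seung-style multiplicative updates with an L1 weight `lam` on the hidden encodings (the update-rule variant and all parameter names are assumptions for illustration, not the authors' formulation):

```python
import numpy as np

def nsc(X, k, n_iter=200, lam=0.1, rng=None):
    """Toy nonnegative sparse coding: factorize a nonnegative matrix X (m x n)
    as W @ H with W, H >= 0 and an L1 sparsity penalty on the encodings H."""
    rng = np.random.default_rng(rng)
    m, n = X.shape
    W = rng.random((m, k)) + 0.1   # nonnegative "basis" (receptive fields)
    H = rng.random((k, n)) + 0.1   # nonnegative, sparse encodings
    for _ in range(n_iter):
        # Multiplicative updates preserve nonnegativity; lam shrinks H toward 0.
        H *= (W.T @ X) / (W.T @ W @ H + lam + 1e-9)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H
```

On low-rank nonnegative data, the factorization recovers a small set of nonnegative parts whose sparse combinations reconstruct the input, which is the dimensionality-reduction-plus-sparsity property the review attributes to neural populations.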
Affiliation(s)
- Michael Beyeler
- Department of Psychology, University of Washington, Seattle, Washington, United States of America
- Institute for Neuroengineering, University of Washington, Seattle, Washington, United States of America
- eScience Institute, University of Washington, Seattle, Washington, United States of America
- Department of Computer Science, University of California, Irvine, California, United States of America
- Emily L. Rounds
- Department of Cognitive Sciences, University of California, Irvine, California, United States of America
- Kristofor D. Carlson
- Department of Cognitive Sciences, University of California, Irvine, California, United States of America
- Sandia National Laboratories, Albuquerque, New Mexico, United States of America
- Nikil Dutt
- Department of Computer Science, University of California, Irvine, California, United States of America
- Department of Cognitive Sciences, University of California, Irvine, California, United States of America
- Jeffrey L. Krichmar
- Department of Computer Science, University of California, Irvine, California, United States of America
- Department of Cognitive Sciences, University of California, Irvine, California, United States of America