1. Wei H, Yao W. Computational Modeling of Ganglion Cell Bicolor Opponent Receptive Fields and FPGA Adaptation for Parallel Arrays. Biomimetics (Basel) 2024; 9:526. [PMID: 39329548] [PMCID: PMC11430245] [DOI: 10.3390/biomimetics9090526]
Abstract
The biological visual system is not perfect, but it is relatively complete, and the low power consumption and high parallelism that characterize it are difficult to reproduce if its lower-level information pathways are ignored. In this paper, we focus on the K, M and P pathways of visual signal processing from the retina to the lateral geniculate nucleus (LGN). We model the visual system at a fine-grained level to ensure efficient information transmission while minimizing energy use, and we implement a circuit-level distributed parallel computing model on FPGAs. The results show that information can be transferred with low energy consumption and high parallelism. On an Artix-7 xc7a200tsbv484-1 FPGA, the design reaches a maximum frequency of 200 MHz and a maximum parallelism of 600, and a single receptive field model consumes only 0.142 W. This can be useful for building assistive vision systems for small, lightweight devices.
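The abstract does not reproduce the receptive field model itself; as a rough illustrative sketch (not the paper's actual formulation), a bicolor opponent receptive field is commonly approximated by a difference-of-Gaussians with a cone-opponent center and surround. All function names and parameters below are assumptions chosen for illustration:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """2D Gaussian kernel, normalized to unit sum."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def opponent_rf_response(red, green, size=15, sigma_c=1.0, sigma_s=3.0):
    """Red-ON/green-OFF center-surround response at the patch centre (illustrative).

    red, green: size x size image patches (cone-plane inputs).
    The center pools the red plane, the surround pools the green plane,
    and the output is their difference (a classic DoG opponent field).
    """
    center = gaussian_kernel(size, sigma_c)
    surround = gaussian_kernel(size, sigma_s)
    return float(np.sum(center * red) - np.sum(surround * green))

# Example: a red spot on a green background excites this red-ON unit.
patch_r = np.zeros((15, 15)); patch_r[7, 7] = 1.0
patch_g = np.full((15, 15), 0.1)
print(opponent_rf_response(patch_r, patch_g))
```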
Affiliation(s)
- Hui Wei
- Laboratory of Algorithms for Cognitive Models, School of Computer Science, Fudan University, Shanghai 200438, China;
2. Yang Z, Wang T, Lin Y, Chen Y, Zeng H, Pei J, Wang J, Liu X, Zhou Y, Zhang J, Wang X, Lv X, Zhao R, Shi L. A vision chip with complementary pathways for open-world sensing. Nature 2024; 629:1027-1033. [PMID: 38811710] [DOI: 10.1038/s41586-024-07358-4]
Abstract
Image sensors face substantial challenges when dealing with dynamic, diverse and unpredictable scenes in open-world applications. However, the development of image sensors towards high speed, high resolution, large dynamic range and high precision is limited by power and bandwidth. Here we present a complementary sensing paradigm inspired by the human visual system that involves parsing visual information into primitive-based representations and assembling these primitives to form two complementary vision pathways: a cognition-oriented pathway for accurate cognition and an action-oriented pathway for rapid response. To realize this paradigm, a vision chip called Tianmouc is developed, incorporating a hybrid pixel array and a parallel-and-heterogeneous readout architecture. Leveraging the characteristics of the complementary vision pathway, Tianmouc achieves high-speed sensing of up to 10,000 fps, a dynamic range of 130 dB and an advanced figure of merit in terms of spatial resolution, speed and dynamic range. Furthermore, it adaptively reduces bandwidth by 90%. We demonstrate the integration of a Tianmouc chip into an autonomous driving system, showcasing its abilities to enable accurate, fast and robust perception, even in challenging corner cases on open roads. The primitive-based complementary sensing paradigm helps in overcoming fundamental limitations in developing vision systems for diverse open-world applications.
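As a loose software-level illustration of the primitive-based, complementary-pathway idea (not Tianmouc's actual pixel array or readout architecture), a frame stream can be split into a precise, low-rate cognition-oriented stream and a sparse, high-rate action-oriented stream of thresholded temporal differences; all names, strides, and thresholds below are hypothetical:

```python
import numpy as np

def complementary_streams(frames, cognition_stride=10, threshold=0.05):
    """Toy decomposition of a frame sequence into two complementary streams.

    frames: iterable of float images in [0, 1].
    Cognition-oriented pathway: every cognition_stride-th frame at full precision.
    Action-oriented pathway: a thresholded temporal difference for every frame,
    which is sparse and cheap to transmit (illustrative only).
    """
    prev = None
    for t, frame in enumerate(frames):
        if t % cognition_stride == 0:
            yield ("cognition", t, frame)            # accurate, low rate
        if prev is not None:
            diff = frame - prev
            diff[np.abs(diff) < threshold] = 0.0     # keep only salient change
            yield ("action", t, diff)                # fast, sparse
        prev = frame

# Example with random frames:
frames = [np.random.rand(4, 4) for _ in range(20)]
events = list(complementary_streams(frames))
```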
Affiliation(s)
- Zheyu Yang
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center and Department of Precision Instrument, Tsinghua University, Beijing, China
- Lynxi Technologies, Beijing, China
- Taoyi Wang
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center and Department of Precision Instrument, Tsinghua University, Beijing, China
- Yihan Lin
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center and Department of Precision Instrument, Tsinghua University, Beijing, China
- Yuguo Chen
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center and Department of Precision Instrument, Tsinghua University, Beijing, China
- Hui Zeng
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center and Department of Precision Instrument, Tsinghua University, Beijing, China
- Jing Pei
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center and Department of Precision Instrument, Tsinghua University, Beijing, China
- Jiazheng Wang
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center and Department of Precision Instrument, Tsinghua University, Beijing, China
- Xue Liu
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center and Department of Precision Instrument, Tsinghua University, Beijing, China
- Xin Wang
- Lynxi Technologies, Beijing, China
- Rong Zhao
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center and Department of Precision Instrument, Tsinghua University, Beijing, China.
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China.
- Luping Shi
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center and Department of Precision Instrument, Tsinghua University, Beijing, China.
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China.
- THU-CET HIK Joint Research Center for Brain-Inspired Computing, Tsinghua University, Beijing, China.
3. Philip P, Jainwal K, van Schaik A, Thakur CS. Tau-Cell-Based Analog Silicon Retina With Spatio-Temporal Filtering and Contrast Gain Control. IEEE Trans Biomed Circuits Syst 2024; 18:423-437. [PMID: 37956014] [DOI: 10.1109/tbcas.2023.3332117]
Abstract
Developing precise artificial retinas is crucial because they hold the potential to restore vision, improve visual prosthetics, and enhance computer vision systems. Emulating the luminance and contrast adaptation features of the retina is essential to improve visual perception and efficiency and to provide the user with a realistic representation of the environment. In this article, we introduce an artificial retina model that leverages strong adaptation to luminance and contrast to enhance vision sensing and information processing. The model realizes both tonic and phasic cells in a simple manner. We have implemented the retina model in a 0.18 μm process technology and validated the accuracy of the hardware implementation through circuit simulations that closely match the software retina model. Additionally, we have characterized a single pixel fabricated in the same 0.18 μm process. This pixel shows an 87.7% ratio of variance with the temporal software model and operates with a power consumption of 369 nW.
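The tau-cell circuits themselves are analog; as a purely digital analogy of how tonic (sustained) and phasic (transient) responses can be derived from the same input, one can pair a leaky low-pass filter with its high-pass residual. Time constants and names are illustrative, not taken from the article:

```python
import numpy as np

def tonic_phasic(signal, dt=1e-3, tau=0.05):
    """Discrete-time sketch of tonic vs. phasic responses (illustrative).

    The tonic response is a leaky (low-pass) copy of the input, so it is
    sustained for steady stimuli; the phasic response is the residual
    (input minus low-pass), so it is large only at transients.
    """
    alpha = dt / (tau + dt)
    tonic = np.zeros_like(signal)
    for t in range(1, len(signal)):
        tonic[t] = tonic[t - 1] + alpha * (signal[t] - tonic[t - 1])
    phasic = signal - tonic
    return tonic, phasic

# A step input: the tonic channel settles to the step level, the phasic channel
# responds mainly at the edge.
step = np.concatenate([np.zeros(100), np.ones(200)])
tonic, phasic = tonic_phasic(step)
```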
4. Ghanbarpour M, Haghiri S, Hazzazi F, Assaad M, Chaudhary MA, Ahmadi A. Investigation on Vision System: Digital FPGA Implementation in Case of Retina Rod Cells. IEEE Trans Biomed Circuits Syst 2024; 18:299-307. [PMID: 37824307] [DOI: 10.1109/tbcas.2023.3323324]
Abstract
The development of prostheses and treatments for illness and recovery has recently centered on hardware modeling of various delicate biological components, including the nervous system, brain, eyes, and heart. The retina, the thinnest and deepest layer of the eye, is of particular interest. In this study, we employ the Nyquist-Based Approximation of Retina Rod Cell (NBAoRRC) approach, which uses Look-Up Tables (LUTs) rather than the original functions, to implement retinal rod cells in cost-effective hardware. Modern mathematical models use numerous nonlinear functions to represent the activity of these cells; implementing these functions directly would require a substantial amount of hardware and may not meet the required speed constraints. The proposed method eliminates the need for multiplication and yields a fast, cost-effective rod cell implementation. Simulation results demonstrate how closely the proposed model reproduces the behavior of the original rod cell model, particularly its dynamic behavior. Hardware implementation on a Virtex-5 FPGA board shows that the proposed model is reliable, consumes about 30% less power than the original model even in the worst case of the approximation, and requires fewer hardware resources. Finally, by using a LUT that is sampled systematically at the Nyquist rate, we were able to remove all operators that are costly in digital hardware and achieve very good results in digital implementation at both the network and single-neuron scales.
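The NBAoRRC details are in the paper; the general trick it relies on, replacing a costly nonlinearity with a uniformly (Nyquist-rate) sampled look-up table so that run-time hardware needs only an index computation and a memory read, can be sketched as follows. The sampled function, input range, and table size are chosen purely for illustration:

```python
import numpy as np

# Build a LUT for a nonlinear function over a known input range (illustrative values).
X_MIN, X_MAX, N_ENTRIES = -5.0, 5.0, 256
grid = np.linspace(X_MIN, X_MAX, N_ENTRIES)
LUT = np.tanh(grid)          # any costly nonlinearity could be tabulated instead

def tanh_lut(x):
    """Approximate tanh(x) by nearest-entry LUT lookup; at run time only the
    index scaling remains, which maps to cheap fixed-point hardware."""
    idx = np.clip(np.round((x - X_MIN) / (X_MAX - X_MIN) * (N_ENTRIES - 1)),
                  0, N_ENTRIES - 1).astype(int)
    return LUT[idx]

x = np.linspace(-4, 4, 1000)
print("max abs error:", np.max(np.abs(np.tanh(x) - tanh_lut(x))))
```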
5. Camuñas-Mesa LA, Linares-Barranco B, Serrano-Gotarredona T. Neuromorphic Spiking Neural Networks and Their Memristor-CMOS Hardware Implementations. Materials (Basel) 2019; 12:E2745. [PMID: 31461877] [PMCID: PMC6747825] [DOI: 10.3390/ma12172745]
Abstract
Inspired by biology, neuromorphic systems have been trying to emulate the human brain for decades, taking advantage of its massive parallelism and sparse information coding. Recently, several large-scale hardware projects have demonstrated the outstanding capabilities of this paradigm for applications related to sensory information processing. These systems allow for the implementation of massive neural networks with millions of neurons and billions of synapses. However, the realization of learning strategies in these systems consumes a significant proportion of resources in terms of area and power. The recent development of nanoscale memristors that can be integrated with Complementary Metal-Oxide-Semiconductor (CMOS) technology offers a very promising way to emulate the behavior of biological synapses. Therefore, hybrid memristor-CMOS approaches have been proposed to implement large-scale neural networks with learning capabilities, offering a scalable and lower-cost alternative to existing CMOS systems.
Affiliation(s)
- Luis A Camuñas-Mesa
- Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC and Universidad de Sevilla, 41092 Sevilla, Spain.
- Bernabé Linares-Barranco
- Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC and Universidad de Sevilla, 41092 Sevilla, Spain
- Teresa Serrano-Gotarredona
- Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC and Universidad de Sevilla, 41092 Sevilla, Spain
6. Wang H, Xu J, Gao Z, Lu C, Yao S, Ma J. An Event-Based Neurobiological Recognition System with Orientation Detector for Objects in Multiple Orientations. Front Neurosci 2016; 10:498. [PMID: 27867346] [PMCID: PMC5095131] [DOI: 10.3389/fnins.2016.00498]
Abstract
A new multiple-orientation event-based neurobiological recognition system that integrates recognition and tracking functions is proposed in this paper, intended for asynchronous address-event representation (AER) image sensors. The system can recognize objects in multiple orientations using training samples that move in only a single orientation. It extracts multi-scale and multi-orientation line features inspired by models of the primate visual cortex. An orientation detector based on a modified Gaussian blob tracking algorithm is introduced for object tracking and orientation detection. The orientation detector and the feature extraction block work simultaneously, without any increase in categorization time. An address lookup table (address LUT) is also presented to adjust the feature maps by address mapping and reordering, and the adjusted maps are categorized in the trained spiking neural network. The recognition system is evaluated with the MNIST dataset, which has played an important role in the development of computer vision, and accuracy is increased by using both ON and OFF events. AER data acquired by a dynamic vision sensor (DVS), such as moving digits, playing cards, and vehicles, are also tested on the system. The experimental results show that the proposed system can realize event-based multi-orientation recognition. The work presented in this paper makes several contributions to event-based vision processing for multi-orientation object recognition: it develops a new tracking-recognition architecture for the feedforward categorization system and an address-reordering approach to classify multi-orientation objects using event-based data, and it provides a new way to recognize objects in multiple orientations using training samples in only a single orientation.
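The address LUT reorders event addresses so that features from a rotated object line up with the trained feature maps; a minimal sketch of remapping (x, y) event addresses through a precomputed rotation table is shown below, with array sizes and the angle chosen arbitrarily (an illustration of the idea, not the paper's implementation):

```python
import numpy as np

def build_rotation_lut(width, height, angle_deg):
    """Precompute, for every pixel address, the address it maps to after a
    rotation about the image centre (nearest-neighbour, clipped to bounds)."""
    theta = np.deg2rad(angle_deg)
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    lut = np.zeros((height, width, 2), dtype=int)
    for y in range(height):
        for x in range(width):
            xr = np.cos(theta) * (x - cx) - np.sin(theta) * (y - cy) + cx
            yr = np.sin(theta) * (x - cx) + np.cos(theta) * (y - cy) + cy
            lut[y, x] = (int(np.clip(np.rint(yr), 0, height - 1)),
                         int(np.clip(np.rint(xr), 0, width - 1)))
    return lut

def remap_event(event, lut):
    """Remap a single AER event (x, y, polarity, timestamp) through the LUT."""
    x, y, pol, ts = event
    ny, nx = lut[y, x]
    return (nx, ny, pol, ts)

lut = build_rotation_lut(32, 32, angle_deg=45.0)
print(remap_event((5, 10, 1, 123456), lut))
```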
Affiliation(s)
- Hanyu Wang
- School of Electronic Information Engineering, Tianjin University Tianjin, China
- Jiangtao Xu
- School of Electronic Information Engineering, Tianjin University Tianjin, China
- Zhiyuan Gao
- School of Electronic Information Engineering, Tianjin University Tianjin, China
- Chengye Lu
- School of Electronic Information Engineering, Tianjin University Tianjin, China
- Suying Yao
- School of Electronic Information Engineering, Tianjin University Tianjin, China
- Jianguo Ma
- School of Electronic Information Engineering, Tianjin University Tianjin, China
7. Martínez-Cañada P, Morillas C, Pino B, Ros E, Pelayo F. A Computational Framework for Realistic Retina Modeling. Int J Neural Syst 2016; 26:1650030. [DOI: 10.1142/s0129065716500301]
Abstract
Computational simulations of the retina have led to valuable insights about the biophysics of its neuronal activity and processing principles. A great number of retina models have been proposed to reproduce the behavioral diversity of the different visual processing pathways. While many of these models share common computational stages, previous efforts have been more focused on fitting specific retina functions rather than generalizing them beyond a particular model. Here, we define a set of computational retinal microcircuits that can be used as basic building blocks for the modeling of different retina mechanisms. To validate the hypothesis that similar processing structures may be repeatedly found in different retina functions, we implemented a series of retina models simply by combining these computational retinal microcircuits. Accuracy of the retina models for capturing neural behavior was assessed by fitting published electrophysiological recordings that characterize some of the best-known phenomena observed in the retina: adaptation to the mean light intensity and temporal contrast, and differential motion sensitivity. The retinal microcircuits are part of a new software platform for efficient computational retina modeling from single-cell to large-scale levels. It includes an interface with spiking neural networks that allows simulation of the spiking response of ganglion cells and integration with models of higher visual areas.
Affiliation(s)
- Pablo Martínez-Cañada
- Department of Computer Architecture and Technology, CITIC-UGR, University of Granada, Spain
- Christian Morillas
- Department of Computer Architecture and Technology, CITIC-UGR, University of Granada, Spain
- Begoña Pino
- Department of Computer Architecture and Technology, CITIC-UGR, University of Granada, Spain
- Eduardo Ros
- Department of Computer Architecture and Technology, CITIC-UGR, University of Granada, Spain
- Francisco Pelayo
- Department of Computer Architecture and Technology, CITIC-UGR, University of Granada, Spain
8. Vanarse A, Osseiran A, Rassau A. A Review of Current Neuromorphic Approaches for Vision, Auditory, and Olfactory Sensors. Front Neurosci 2016; 10:115. [PMID: 27065784] [PMCID: PMC4809886] [DOI: 10.3389/fnins.2016.00115]
Abstract
Conventional vision, auditory, and olfactory sensors generate large volumes of redundant data and as a result tend to consume excessive power. To address these shortcomings, neuromorphic sensors have been developed. These sensors mimic the neuro-biological architecture of sensory organs using aVLSI (analog Very Large Scale Integration) and generate asynchronous spiking output that represents sensing information in ways that are similar to neural signals. This allows for much lower power consumption due to an ability to extract useful sensory information from sparse captured data. The foundation for research in neuromorphic sensors was laid more than two decades ago, but recent developments in understanding of biological sensing and advanced electronics, have stimulated research on sophisticated neuromorphic sensors that provide numerous advantages over conventional sensors. In this paper, we review the current state-of-the-art in neuromorphic implementation of vision, auditory, and olfactory sensors and identify key contributions across these fields. Bringing together these key contributions we suggest a future research direction for further development of the neuromorphic sensing field.
Affiliation(s)
- Anup Vanarse
- School of Engineering, Edith Cowan University Joondalup, WA, Australia
- Adam Osseiran
- School of Engineering, Edith Cowan University Joondalup, WA, Australia
- Alexander Rassau
- School of Engineering, Edith Cowan University Joondalup, WA, Australia
9. Orchard G, Meyer C, Etienne-Cummings R, Posch C, Thakor N, Benosman R. HFirst: A Temporal Approach to Object Recognition. IEEE Trans Pattern Anal Mach Intell 2015; 37:2028-2040. [PMID: 26353184] [DOI: 10.1109/tpami.2015.2392947]
Abstract
This paper introduces a spiking hierarchical model for object recognition which utilizes the precise timing information inherently present in the output of biologically inspired asynchronous address event representation (AER) vision sensors. The asynchronous nature of these systems frees computation and communication from the rigid predetermined timing enforced by system clocks in conventional systems. Freedom from rigid timing constraints opens the possibility of using true timing to our advantage in computation. We show not only how timing can be used in object recognition, but also how it can in fact simplify computation. Specifically, we rely on a simple temporal-winner-take-all rather than more computationally intensive synchronous operations typically used in biologically inspired neural networks for object recognition. This approach to visual computation represents a major paradigm shift from conventional clocked systems and can find application in other sensory modalities and computational tasks. We showcase the effectiveness of the approach by achieving the highest reported accuracy to date (97.5% ± 3.5%) for a previously published four-class card pip recognition task and an accuracy of 84.9% ± 1.9% for a new, more difficult 36-class character recognition task.
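HFirst's key simplification is that, within a competition window, the neuron that spikes first wins and later spikes are ignored; a toy software rendering of such a temporal winner-take-all over first-spike latencies (names and numbers are illustrative):

```python
def temporal_wta(first_spike_times):
    """Return the id of the earliest-spiking neuron; later spikes are ignored.

    first_spike_times: dict mapping neuron id -> first spike time within the
    current competition window, or None if the neuron did not spike.
    """
    spiked = {n: t for n, t in first_spike_times.items() if t is not None}
    if not spiked:
        return None
    return min(spiked, key=spiked.get)

# Stronger inputs charge their neuron faster, so they spike earlier and win.
print(temporal_wta({"n0": 4.2e-3, "n1": 1.7e-3, "n2": None}))   # -> "n1"
```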
10. Qiao N, Mostafa H, Corradi F, Osswald M, Stefanini F, Sumislawska D, Indiveri G. A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses. Front Neurosci 2015; 9:141. [PMID: 25972778] [PMCID: PMC4413675] [DOI: 10.3389/fnins.2015.00141]
Abstract
Implementing compact, low-power artificial neural processing systems with real-time on-line learning abilities is still an open challenge. In this paper we present a full-custom mixed-signal VLSI device with neuromorphic learning circuits that emulate the biophysics of real spiking neurons and dynamic synapses for exploring the properties of computational neuroscience models and for building brain-inspired computing systems. The proposed architecture allows the on-chip configuration of a wide range of network connectivities, including recurrent and deep networks, with short-term and long-term plasticity. The device comprises 128 K analog synapse and 256 neuron circuits with biologically plausible dynamics and bi-stable spike-based plasticity mechanisms that endow it with on-line learning abilities. In addition to the analog circuits, the device also comprises asynchronous digital logic circuits for setting different synapse and neuron properties as well as different network configurations. This prototype device, fabricated using a 180 nm 1P6M CMOS process, occupies an area of 51.4 mm² and consumes approximately 4 mW for typical experiments, for example involving attractor networks. Here we describe the details of the overall architecture and of the individual circuits and present experimental results that showcase its potential. By supporting a wide range of cortical-like computational modules comprising plasticity mechanisms, this device will enable the realization of intelligent autonomous systems with on-line learning capabilities.
Affiliation(s)
- Ning Qiao
- Institute of Neuroinformatics, University of Zurich and ETH Zurich Zurich, Switzerland
- Hesham Mostafa
- Institute of Neuroinformatics, University of Zurich and ETH Zurich Zurich, Switzerland
- Federico Corradi
- Institute of Neuroinformatics, University of Zurich and ETH Zurich Zurich, Switzerland
- Marc Osswald
- Institute of Neuroinformatics, University of Zurich and ETH Zurich Zurich, Switzerland
- Fabio Stefanini
- Institute of Neuroinformatics, University of Zurich and ETH Zurich Zurich, Switzerland
- Dora Sumislawska
- Institute of Neuroinformatics, University of Zurich and ETH Zurich Zurich, Switzerland
- Giacomo Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich Zurich, Switzerland
11. Okuno H, Hasegawa J, Sanada T, Yagi T. Real-time emulator for reproducing graded potentials in vertebrate retina. IEEE Trans Biomed Circuits Syst 2015; 9:284-295. [PMID: 25134087] [DOI: 10.1109/tbcas.2014.2327103]
Abstract
In most parts of the retina, neuronal circuits process visual signals represented by slowly changing membrane potentials, or so-called graded potentials. A feasible approach to speculate about the functional roles of retinal neuronal circuits is to reproduce the graded potentials of retinal neurons in response to natural scenes. In this study, we developed a simulation platform for reproducing graded potentials with the following features: real-time reproduction of retinal neural activities in response to natural scenes, a configurable model structure, and compact hardware. The spatio-temporal properties of neurons were emulated efficiently by a mixed analog-digital architecture that consisted of analog resistive networks and a field-programmable gate array. The neural activities on sustained and transient pathways were emulated from 128 × 128 inputs at 200 frames per second.
12.
13.
14. Pérez-Carrasco JA, Zhao B, Serrano C, Acha B, Serrano-Gotarredona T, Chen S, Linares-Barranco B. Mapping from frame-driven to frame-free event-driven vision systems by low-rate rate coding and coincidence processing--application to feedforward ConvNets. IEEE Trans Pattern Anal Mach Intell 2013; 35:2706-2719. [PMID: 24051730] [DOI: 10.1109/tpami.2013.71]
Abstract
Event-driven visual sensors have attracted interest from a number of different research communities. They provide visual information in quite a different way from conventional video systems consisting of sequences of still images rendered at a given "frame rate." Event-driven vision sensors take inspiration from biology. Each pixel sends out an event (spike) when it senses something meaningful is happening, without any notion of a frame. A special type of event-driven sensor is the so-called dynamic vision sensor (DVS) where each pixel computes relative changes of light or "temporal contrast." The sensor output consists of a continuous flow of pixel events that represent the moving objects in the scene. Pixel events become available with microsecond delays with respect to "reality." These events can be processed "as they flow" by a cascade of event (convolution) processors. As a result, input and output event flows are practically coincident in time, and objects can be recognized as soon as the sensor provides enough meaningful events. In this paper, we present a methodology for mapping from a properly trained neural network in a conventional frame-driven representation to an event-driven representation. The method is illustrated by studying event-driven convolutional neural networks (ConvNet) trained to recognize rotating human silhouettes or high speed poker card symbols. The event-driven ConvNet is fed with recordings obtained from a real DVS camera. The event-driven ConvNet is simulated with a dedicated event-driven simulator and consists of a number of event-driven processing modules, the characteristics of which are obtained from individually manufactured hardware modules.
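The paper's mapping procedure is more involved (it rescales weights, leak rates, and thresholds); as a bare-bones illustration of one ingredient, turning a frame or analog activation into a low-rate spike representation by Poisson rate coding, with the rate scale and duration chosen arbitrarily:

```python
import numpy as np

def rate_code(frame, duration=0.1, max_rate=200.0, rng=None):
    """Convert a [0, 1] image into per-pixel Poisson spike counts (illustrative).

    Pixel intensity sets the firing rate (up to max_rate Hz); over duration
    seconds the expected count is rate * duration, i.e. a low-rate code.
    """
    rng = rng or np.random.default_rng(0)
    rates = np.clip(frame, 0.0, 1.0) * max_rate
    return rng.poisson(rates * duration)

frame = np.random.rand(8, 8)
spike_counts = rate_code(frame)
print(spike_counts.sum(), "events for an 8x8 patch over 100 ms")
```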
15. Okuno H, Yagi T. Image sensor system with bio-inspired efficient coding and adaptation. IEEE Trans Biomed Circuits Syst 2012; 6:375-384. [PMID: 23853182] [DOI: 10.1109/tbcas.2012.2185048]
Abstract
We designed and implemented an image sensor system equipped with three bio-inspired coding and adaptation strategies: logarithmic transform, local average subtraction, and feedback gain control. The system comprises a field-programmable gate array (FPGA), a resistive network, and active pixel sensors (APS), whose light intensity-voltage characteristics are controllable. The system employs multiple time-varying reset voltage signals for APS in order to realize multiple logarithmic intensity-voltage characteristics, which are controlled so that the entropy of the output image is maximized. The system also employs local average subtraction and gain control in order to obtain images with an appropriate contrast. The local average is calculated by the resistive network instantaneously. The designed system was successfully used to obtain appropriate images of objects that were subjected to large changes in illumination.
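A rough software analogue of the three coding and adaptation strategies (the chip realizes them with time-varying APS reset voltages and a resistive network; the neighborhood size and gain law below are illustrative assumptions, and SciPy is used only for the local average):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def bio_inspired_encode(intensity, eps=1e-6, neighborhood=9, target_contrast=0.2):
    """Logarithmic transform, local-average subtraction, and feedback gain control."""
    log_img = np.log(intensity + eps)                       # compress dynamic range
    local_avg = uniform_filter(log_img, size=neighborhood)  # resistive-network analogue
    contrast = log_img - local_avg                          # local average subtraction
    rms = np.sqrt(np.mean(contrast ** 2)) + eps
    gain = target_contrast / rms                            # feedback gain control
    return gain * contrast

img = np.random.rand(64, 64) * 1000.0   # wide-dynamic-range input
encoded = bio_inspired_encode(img)
```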
16. Farabet C, Paz R, Pérez-Carrasco J, Zamarreño-Ramos C, Linares-Barranco A, Lecun Y, Culurciello E, Serrano-Gotarredona T, Linares-Barranco B. Comparison between Frame-Constrained Fix-Pixel-Value and Frame-Free Spiking-Dynamic-Pixel ConvNets for Visual Processing. Front Neurosci 2012; 6:32. [PMID: 22518097] [PMCID: PMC3324817] [DOI: 10.3389/fnins.2012.00032]
Abstract
Most scene segmentation and categorization architectures for the extraction of features in images and patches make exhaustive use of 2D convolution operations for template matching, template search, and denoising. Convolutional Neural Networks (ConvNets) are one example of such architectures that can implement general-purpose bio-inspired vision systems. In standard digital computers 2D convolutions are usually expensive in terms of resource consumption and impose severe limitations for efficient real-time applications. Nevertheless, neuro-cortex inspired solutions, like dedicated Frame-Based or Frame-Free Spiking ConvNet Convolution Processors, are advancing real-time visual processing. These two approaches share the neural inspiration, but each of them solves the problem in different ways. Frame-Based ConvNets process frame-by-frame video information in a very robust and fast way that requires using and sharing the available hardware resources (such as multipliers and adders). Hardware resources are fixed and time-multiplexed by fetching data in and out. Thus memory bandwidth and size are important for good performance. On the other hand, spike-based convolution processors are a frame-free alternative that is able to perform convolution of a spike-based source of visual information with very low latency, which makes them ideal for very high-speed applications. However, hardware resources need to be available all the time and cannot be time-multiplexed. Thus, hardware should be modular, reconfigurable, and expandable. Hardware implementations in both VLSI custom integrated circuits (digital and analog) and FPGA have already been used to demonstrate the performance of these systems. In this paper we present a comparison study of these two neuro-inspired solutions. A brief description of both systems is presented together with discussions of their differences, pros, and cons.
Affiliation(s)
- Clément Farabet
- Computer Science Department, Courant Institute of Mathematical Sciences, New York University New York, NY, USA
17.
18. Bichler O, Querlioz D, Thorpe SJ, Bourgoin JP, Gamrat C. Extraction of temporally correlated features from dynamic vision sensors with spike-timing-dependent plasticity. Neural Netw 2012; 32:339-48. [PMID: 22386501] [DOI: 10.1016/j.neunet.2012.02.022]
Abstract
A biologically inspired approach to learning temporally correlated patterns from a spiking silicon retina is presented. Spikes are generated from the retina in response to relative changes in illumination at the pixel level and transmitted to a feed-forward spiking neural network. Neurons become sensitive to patterns of pixels with correlated activation times, in a fully unsupervised scheme. This is achieved using a special form of Spike-Timing-Dependent Plasticity which depresses synapses that did not recently contribute to the post-synaptic spike activation, regardless of their activation time. Competitive learning is implemented with lateral inhibition. When tested with real-life data, the system is able to extract complex and overlapping temporally correlated features such as car trajectories on a freeway, after only 10 min of traffic learning. Complete trajectories can be learned with a 98% detection rate using a second layer, still with unsupervised learning, and the system may be used as a car counter. The proposed neural network is extremely robust to noise and it can tolerate a high degree of synaptic and neuronal variability with little impact on performance. Such results show that a simple biologically inspired unsupervised learning scheme is capable of generating selectivity to complex meaningful events on the basis of relatively little sensory experience.
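A minimal software sketch of the learning rule described above: synapses whose pre-synaptic spike falls within a short window before the post-synaptic spike are potentiated, and all other synapses are depressed regardless of their activation time. The constants are illustrative, not the paper's:

```python
import numpy as np

def stdp_update(weights, last_pre_spike, t_post,
                window=10e-3, w_plus=0.01, w_minus=0.005, w_min=0.0, w_max=1.0):
    """Update one neuron's synapses at the time of a post-synaptic spike.

    Synapses whose pre-synaptic spike occurred within `window` seconds before
    the post-synaptic spike are potentiated; every other synapse is depressed,
    regardless of when (or whether) it last fired.
    """
    recent = (t_post - last_pre_spike >= 0) & (t_post - last_pre_spike <= window)
    weights = weights + np.where(recent, w_plus, -w_minus)
    return np.clip(weights, w_min, w_max)

w = np.full(4, 0.5)
last_pre = np.array([0.100, 0.095, 0.050, np.nan])   # seconds; nan = never fired
print(stdp_update(w, last_pre, t_post=0.100))        # first two potentiated
```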
Affiliation(s)
- Olivier Bichler
- CEA, LIST, Embedded Computing Laboratory, 91191 Gif-sur-Yvette Cedex, France.
19. Shimonomura K, Kameda S, Iwata A, Yagi T. Wide-dynamic-range APS-based silicon retina with brightness constancy. IEEE Trans Neural Netw 2011; 22:1482-93. [PMID: 21803687] [DOI: 10.1109/tnn.2011.2161591]
Abstract
A silicon retina is an intelligent vision sensor that can execute real-time image preprocessing by using a parallel analog circuit that mimics the structure of the neuronal circuits in the vertebrate retina. For enhancing the sensor's robustness to changes in illumination in a practical environment, we have designed and fabricated a silicon retina on the basis of a computational model of brightness constancy. The chip has a wide dynamic range and shows a constant response against changes in the illumination intensity. The photosensor in the present chip approximates logarithmic illumination-to-voltage transfer characteristics as a result of the application of a time-modulated reset voltage technique. Two types of image processing, namely, Laplacian-Gaussian-like spatial filtering and computing the frame difference, are carried out by using resistive networks and sample/hold circuits in the chip. As a result of this processing, the chip exhibits brightness constancy over a wide range of illumination. The chip is fabricated by using 0.25-μm complementary metal-oxide semiconductor image sensor technology. The number of pixels is 64 × 64, and the power consumption is 32 mW at a frame rate of 30 fps. We show that our chip not only has a wide dynamic range but also shows a constant response to changes in illumination.
20. Zamarreño-Ramos C, Camuñas-Mesa LA, Pérez-Carrasco JA, Masquelier T, Serrano-Gotarredona T, Linares-Barranco B. On spike-timing-dependent-plasticity, memristive devices, and building a self-learning visual cortex. Front Neurosci 2011; 5:26. [PMID: 21442012] [PMCID: PMC3062969] [DOI: 10.3389/fnins.2011.00026]
Abstract
In this paper we present a very exciting overlap between emergent nanotechnology and neuroscience, which has been discovered by neuromorphic engineers. Specifically, we are linking one type of memristor nanotechnology device to the biological synaptic update rule known as spike-timing-dependent plasticity (STDP) found in real biological synapses. Understanding this link allows neuromorphic engineers to develop circuit architectures that use this type of memristor to artificially emulate parts of the visual cortex. We focus on the type of memristors referred to as voltage or flux driven memristors and focus our discussions on a behavioral macro-model for such devices. The implementations result in fully asynchronous architectures with neurons sending their action potentials not only forward but also backward. One critical aspect is to use neurons that generate spikes of specific shapes. We will see how by changing the shapes of the neuron action potential spikes we can tune and manipulate the STDP learning rules for both excitatory and inhibitory synapses. We will see how neurons and memristors can be interconnected to achieve large-scale spiking learning systems that follow a type of multiplicative STDP learning rule. We will briefly extend the architectures to use three-terminal transistors with similar memristive behavior. We will illustrate how a V1 visual cortex layer can be assembled and how it is capable of learning to extract orientations from visual data coming from a real artificial CMOS spiking retina observing real-life scenes. Finally, we will discuss limitations of currently available memristors. The results presented are based on behavioral simulations and do not take into account non-idealities of devices and interconnects. The aim of this paper is to present, in a tutorial manner, an initial framework for the possible development of fully asynchronous STDP learning neuromorphic architectures exploiting two- or three-terminal memristive type devices. All files used for the simulations are made available through the journal web site.
Affiliation(s)
- Carlos Zamarreño-Ramos
- Mixed Signal Design, Instituto de Microelectrónica de Sevilla (IMSE-CNM-CSIC), Sevilla, Spain
- Luis A. Camuñas-Mesa
- Mixed Signal Design, Instituto de Microelectrónica de Sevilla (IMSE-CNM-CSIC), Sevilla, Spain
- Jose A. Pérez-Carrasco
- Mixed Signal Design, Instituto de Microelectrónica de Sevilla (IMSE-CNM-CSIC), Sevilla, Spain
21.
Abstract
We investigate architectures for time encoding and time decoding of visual stimuli such as natural and synthetic video streams (movies, animation). The architecture for time encoding is akin to models of the early visual system. It consists of a bank of filters in cascade with single-input multi-output neural circuits. Neuron firing is based on either a threshold-and-fire or an integrate-and-fire spiking mechanism with feedback. We show that analog information is represented by the neural circuits as projections on a set of band-limited functions determined by the spike sequence. Under Nyquist-type and frame conditions, the encoded signal can be recovered from these projections with arbitrary precision. For the video time encoding machine architecture, we demonstrate that band-limited video streams of finite energy can be faithfully recovered from the spike trains and provide a stable algorithm for perfect recovery. The key condition for recovery calls for the number of neurons in the population to be above a threshold value.
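The encoders in this architecture are threshold-and-fire or integrate-and-fire neurons with feedback; a scalar, discrete-time sketch of an integrate-and-fire time encoding machine (parameters are arbitrary) whose spike times carry the projections from which the input can be recovered:

```python
import numpy as np

def iaf_time_encode(signal, dt=1e-4, bias=1.0, kappa=1.0, delta=0.02):
    """Integrate-and-fire time encoding of a band-limited signal (illustrative).

    The integrator accumulates (bias + signal); whenever it crosses the
    threshold delta it emits a spike time and resets (feedback), so the
    inter-spike intervals encode the signal's local average amplitude.
    """
    spike_times = []
    v = 0.0
    for k, u in enumerate(signal):
        v += dt * (bias + u) / kappa
        if v >= delta:
            spike_times.append(k * dt)
            v -= delta                  # feedback reset
    return np.array(spike_times)

t = np.arange(0, 0.1, 1e-4)
spikes = iaf_time_encode(np.sin(2 * np.pi * 30 * t))
print(len(spikes), "spikes in 100 ms")
```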
Affiliation(s)
- Aurel A Lazar
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA.
22.
23.
24. Perez-Carrasco JA, Acha B, Serrano C, Camunas-Mesa L, Serrano-Gotarredona T, Linares-Barranco B. Fast vision through frameless event-based sensing and convolutional processing: application to texture recognition. IEEE Trans Neural Netw 2010; 21:609-20. [PMID: 20181543] [DOI: 10.1109/tnn.2009.2039943]
Abstract
Address-event representation (AER) is an emergent hardware technology which shows high potential for providing, in the near future, a solid technological substrate for emulating brain-like processing structures. When used for vision, AER sensors and processors are not restricted to capturing and processing still image frames, as in commercial frame-based video technology, but sense and process visual information in a pixel-level event-based frameless manner. As a result, vision processing is practically simultaneous to vision sensing, since there is no need to wait for sensing full frames. Also, only meaningful information is sensed, communicated, and processed. Of special interest for brain-like vision processing are some already reported AER convolutional chips, which have revealed a very high computational throughput as well as the possibility of assembling large convolutional neural networks in a modular fashion. It is expected that in the near future we may witness the appearance of large-scale convolutional neural networks with hundreds or thousands of individual modules. In the meantime, some research is needed to investigate how to assemble and configure such large-scale convolutional networks for specific applications. In this paper, we analyze AER spiking convolutional neural networks for texture recognition hardware applications. Based on the performance figures of already available individual AER convolution chips, we emulate large-scale networks using a custom-made event-based behavioral simulator. We have developed a new event-based processing architecture that emulates with AER hardware Manjunath's frame-based feature recognition software algorithm, and have analyzed its performance using our behavioral simulator. Recognition rate performance is not degraded. However, regarding speed, we show that recognition can be achieved before an equivalent frame is fully sensed and transmitted.
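AER convolution chips update neuron state per incoming event rather than per frame; a toy software rendering of event-driven convolution is given below, with the kernel, threshold, and array size chosen arbitrarily (real chips do this in parallel hardware with microsecond latencies):

```python
import numpy as np

def event_driven_convolution(events, kernel, shape=(32, 32), threshold=5.0):
    """Process AER events one at a time: each event stamps the kernel onto the
    membrane-potential map around its address; any neuron crossing threshold
    emits an output event and resets. Returns the list of output events."""
    potentials = np.zeros(shape)
    kh, kw = kernel.shape
    out_events = []
    for (x, y, ts) in events:
        y0, y1 = max(0, y - kh // 2), min(shape[0], y + kh // 2 + 1)
        x0, x1 = max(0, x - kw // 2), min(shape[1], x + kw // 2 + 1)
        ky0, kx0 = y0 - (y - kh // 2), x0 - (x - kw // 2)
        potentials[y0:y1, x0:x1] += kernel[ky0:ky0 + (y1 - y0), kx0:kx0 + (x1 - x0)]
        for fy, fx in np.argwhere(potentials >= threshold):
            out_events.append((fx, fy, ts))
            potentials[fy, fx] = 0.0     # reset after firing
    return out_events

kernel = np.ones((3, 3))
events = [(10, 10, t) for t in range(8)]     # repeated events at one address
print(len(event_driven_convolution(events, kernel)))
```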
25. Serrano-Gotarredona R, Oster M, Lichtsteiner P, Linares-Barranco A, Paz-Vicente R, Gomez-Rodriguez F, Camunas-Mesa L, Berner R, Rivas-Perez M, Delbruck T, Liu SC, Douglas R, Hafliger P, Jimenez-Moreno G, Civit Ballcels A, Serrano-Gotarredona T, Acosta-Jimenez AJ, Linares-Barranco B. CAVIAR: A 45k Neuron, 5M Synapse, 12G Connects/s AER Hardware Sensory-Processing-Learning-Actuating System for High-Speed Visual Object Recognition and Tracking. IEEE Trans Neural Netw 2009; 20:1417-38. [PMID: 19635693] [DOI: 10.1109/tnn.2009.2023653]
26. Barranco F, Díaz J, Ros E, del Pino B. Visual system based on artificial retina for motion detection. IEEE Trans Syst Man Cybern B Cybern 2009; 39:752-62. [PMID: 19362896] [DOI: 10.1109/tsmcb.2008.2009067]
Abstract
We present a bioinspired model for detecting spatiotemporal features based on artificial retina response models. Event-driven processing is implemented using four kinds of cells encoding image contrast and temporal information. We have evaluated how the accuracy of motion processing depends on local contrast by using a multiscale and rank-order coding scheme to select the most important cues from retinal inputs. We have also developed some alternatives by integrating temporal feature results and obtained a new improved bioinspired matching algorithm with high stability, low error and low cost. Finally, we define a dynamic and versatile multimodal attention operator with which the system is driven to focus on different target features such as motion, colors, and textures.
Affiliation(s)
- Francisco Barranco
- Department of Computer Architecture and Technology, University of Granada, 18071 Granada, Spain.
27. Serrano-Gotarredona R, Serrano-Gotarredona T, Acosta-Jimenez A, Serrano-Gotarredona C, Perez-Carrasco J, Linares-Barranco B, Linares-Barranco A, Jimenez-Moreno G, Civit-Ballcels A. On Real-Time AER 2-D Convolutions Hardware for Neuromorphic Spike-Based Cortical Processing. IEEE Trans Neural Netw 2008. [DOI: 10.1109/tnn.2008.2000163]
28. Elder JB, Hoh DJ, Oh BC, Heller AC, Liu CY, Apuzzo ML. The Future of Cerebral Surgery. Neurosurgery 2008; 62:1555-79; discussion 1579-82. [DOI: 10.1227/01.neu.0000333820.33143.0d]
29.
30. Linazasoro G. Potential applications of nanotechnologies to Parkinson's disease therapy. Parkinsonism Relat Disord 2008; 14:383-92. [PMID: 18329315] [DOI: 10.1016/j.parkreldis.2007.11.012]
Abstract
Nanotechnology will play a key role in developing new diagnostic and therapeutic tools. Nanotechnologies use engineered materials with the smallest functional organization on the nanometre scale in at least one dimension. Some aspects of the material can be manipulated resulting in new functional properties. Nanotechnology could provide devices to limit and reverse neuropathological disease states, to support and promote functional regeneration of damaged neurons, to provide neuroprotection and to facilitate the delivery of drugs and small molecules across the blood-brain barrier. All of them are relevant to improve current therapy of Parkinson's disease (PD).
Affiliation(s)
- G Linazasoro
- Centro de Investigación Parkinson, Policlínica Gipuzkoa, Parque Tecnológico de Miramón, 174, 20009 San Sebastián (Guipúzcoa), Spain.
31. Elder JB, Liu CY, Apuzzo ML. Neurosurgery in the Realm of 10^-9, Part 2. Neurosurgery 2008; 62:269-84; discussion 284-5. [DOI: 10.1227/01.neu.0000315995.73269.c3]
Affiliation(s)
- James B. Elder
- Department of Neurological Surgery, University of Southern California, Keck School of Medicine, Los Angeles, California
- Charles Y. Liu
- Department of Neurological Surgery, University of Southern California, Keck School of Medicine, Los Angeles, California
- Division of Chemistry and Chemical Engineering, California Institute of Technology, Pasadena, California
- Michael L.J. Apuzzo
- Department of Neurological Surgery, University of Southern California, Keck School of Medicine, Los Angeles, California
32.
33. Vogelstein RJ, Mallik U, Culurciello E, Cauwenberghs G, Etienne-Cummings R. A multichip neuromorphic system for spike-based visual information processing. Neural Comput 2007; 19:2281-300. [PMID: 17650061] [DOI: 10.1162/neco.2007.19.9.2281]
Abstract
We present a multichip, mixed-signal VLSI system for spike-based vision processing. The system consists of an 80 x 60 pixel neuromorphic retina and a 4800 neuron silicon cortex with 4,194,304 synapses. Its functionality is illustrated with experimental data on multiple components of an attention-based hierarchical model of cortical object recognition, including feature coding, salience detection, and foveation. This model exploits arbitrary and reconfigurable connectivity between cells in the multichip architecture, achieved by asynchronously routing neural spike events within and between chips according to a memory-based look-up table. Synaptic parameters, including conductance and reversal potential, are also stored in memory and are used to dynamically configure synapse circuits within the silicon neurons.
Affiliation(s)
- R Jacob Vogelstein
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA.
34. Costas-Santos J, Serrano-Gotarredona T, Serrano-Gotarredona R, Linares-Barranco B. A Spatial Contrast Retina With On-Chip Calibration for Neuromorphic Spike-Based AER Vision Systems. IEEE Trans Circuits Syst I Regul Pap 2007. [DOI: 10.1109/tcsi.2007.900179]
35. Serrano-Gotarredona R, Serrano-Gotarredona T, Acosta-Jimenez A, Linares-Barranco B. A Neuromorphic Cortical-Layer Microchip for Spike-Based Event Processing Vision Systems. IEEE Trans Circuits Syst I Regul Pap 2006. [DOI: 10.1109/tcsi.2006.883843]
36. Cosp J, Madrenas J, Fernández D. Design and basic blocks of a neuromorphic VLSI analogue vision system. Neurocomputing 2006. [DOI: 10.1016/j.neucom.2005.09.019]
37.
Abstract
Prosthetic devices may someday be used to treat lesions of the central nervous system. Similar to neural circuits, these prosthetic devices should adapt their properties over time, independent of external control. Here we describe an artificial retina, constructed in silicon using single-transistor synaptic primitives, with two forms of locally controlled adaptation: luminance adaptation and contrast gain control. Both forms of adaptation rely on local modulation of synaptic strength, thus meeting the criteria of internal control. Our device is the first to reproduce the responses of the four major ganglion cell types that drive visual cortex, producing 3600 spiking outputs in total. We demonstrate how the responses of our device's ganglion cells compare to those measured from the mammalian retina. Replicating the retina's synaptic organization in our chip made it possible to perform these computations using a hundred times less energy than a microprocessor, and to match the mammalian retina in size and weight. With this level of efficiency and autonomy, it is now possible to develop fully implantable intraocular prostheses.
Affiliation(s)
- Kareem A Zaghloul
- Department of Neuroscience, University of Pennsylvania, Philadelphia, PA 19104, USA
38. Thiel A, Greschner M, Ammermüller J. The temporal structure of transient ON/OFF ganglion cell responses and its relation to intra-retinal processing. J Comput Neurosci 2006; 21:131-51. [PMID: 16732489] [DOI: 10.1007/s10827-006-7863-x]
Abstract
A subpopulation of transient ON/OFF ganglion cells in the turtle retina transmits changes in stimulus intensity as series of distinct spike events. The temporal structure of these event sequences depends systematically on the stimulus and thus carries information about the preceding intensity change. To study the spike events' intra-retinal origins, we performed extracellular ganglion cell recordings and simultaneous intracellular recordings from horizontal and amacrine cells. Based on these data, we developed a computational retina model, reproducing spike event patterns with realistic intensity dependence under various experimental conditions. The model's main features are negative feedback from sustained amacrine onto bipolar cells, and a two-step cascade of ganglion cell suppression via a slow and a fast transient amacrine cell. Pharmacologically blocking glycinergic transmission results in disappearance of the spike event sequence, an effect predicted by the model if a single connection, namely suppression of the fast by the slow transient amacrine cell, is weakened. We suggest that the slow transient amacrine cell is glycinergic, whereas the other types release GABA. Thus, the interplay of amacrine cell mediated inhibition is likely to induce distinct temporal structure in ganglion cell responses, forming the basis for a temporal code.
Affiliation(s)
- Andreas Thiel
- Neurobiology, Carl von Ossietzky University Oldenburg, Oldenburg, Germany.
39.
Abstract
Retinal prostheses represent the best near-term hope for individuals with incurable, blinding diseases of the outer retina. On the basis of the electrical activation of nerves, prototype retinal prostheses have been tested in blind humans and have demonstrated the capability to elicit the sensation of light and to give test subjects the ability to detect motion. To improve the visual function in implant recipients, a more sophisticated device is required. Simulations suggest that 600-1000 pixels will be required to provide visual function such as face recognition and reading. State-of-the-art implantable stimulator technology cannot produce such a device, which mandates the advancement of the state of the art in areas such as analog microelectronics, wireless power and data transfer, packaging, and stimulating electrodes.
Affiliation(s)
- James D Weiland
- Doheny Retina Institute, Department of Ophthalmology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90089, USA.
40. Choi T, Merolla P, Arthur J, Boahen K, Shi B. Neuromorphic implementation of orientation hypercolumns. IEEE Trans Circuits Syst I Regul Pap 2005. [DOI: 10.1109/tcsi.2005.849136]
41. Zaghloul K, Boahen K. An ON-OFF log domain circuit that recreates adaptive filtering in the retina. IEEE Trans Circuits Syst I Regul Pap 2005. [DOI: 10.1109/tcsi.2004.840097]
42. Tsang EKC, Shi BE. A preference for phase-based disparity in a neuromorphic implementation of the binocular energy model. Neural Comput 2004; 16:1579-600. [PMID: 15228746] [DOI: 10.1162/089976604774201604]
Abstract
The relative depth of objects causes small shifts in the left and right retinal positions of these objects, called binocular disparity. This letter describes an electronic implementation of a single binocularly tuned complex cell based on the binocular energy model, which has been proposed to model disparity-tuned complex cells in the mammalian primary visual cortex. Our system consists of two silicon retinas representing the left and right eyes, two silicon chips containing retinotopic arrays of spiking neurons with monocular Gabor-type spatial receptive fields, and logic circuits that combine the spike outputs to compute a disparity-selective complex cell response. The tuned disparity can be adjusted electronically by introducing either position or phase shifts between the monocular receptive field profiles. Mismatch between the monocular receptive field profiles caused by transistor mismatch can degrade the relative responses of neurons tuned to different disparities. In our system, the relative responses between neurons tuned by phase encoding are better matched than neurons tuned by position encoding. Our numerical sensitivity analysis indicates that the relative responses of phase-encoded neurons that are least sensitive to the receptive field parameters vary the most in our system. We conjecture that this robustness may be one reason for the existence of phase-encoded disparity-tuned neurons in biological neural systems.
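A compact numerical sketch of the binocular energy model the chip implements: quadrature Gabor receptive fields are applied to the left and right image patches, the monocular responses are summed (a phase shift between the two eyes sets the preferred disparity) and squared to give the complex cell energy. Filter parameters are illustrative:

```python
import numpy as np

def gabor_pair(size=32, freq=0.1, phase=0.0):
    """Even/odd (quadrature) 1D Gabor receptive fields."""
    x = np.arange(size) - size // 2
    envelope = np.exp(-x**2 / (2 * (size / 6.0) ** 2))
    return (envelope * np.cos(2 * np.pi * freq * x + phase),
            envelope * np.sin(2 * np.pi * freq * x + phase))

def energy_response(left, right, freq=0.1, dphase=0.0):
    """Disparity energy: sum the monocular quadrature responses of the two eyes
    (the right receptive field is phase-shifted by dphase) and square them."""
    le, lo = gabor_pair(len(left), freq, 0.0)
    re, ro = gabor_pair(len(right), freq, dphase)
    s_even = np.dot(le, left) + np.dot(re, right)
    s_odd = np.dot(lo, left) + np.dot(ro, right)
    return s_even**2 + s_odd**2

# For a zero-disparity stimulus (identical patches), the unit tuned to zero
# phase shift responds more strongly than one tuned to a pi phase shift.
patch = np.random.rand(32)
print(energy_response(patch, patch, dphase=0.0),
      energy_response(patch, patch, dphase=np.pi))
```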
Affiliation(s)
- Eric K C Tsang
- Department of Electrical and Electronic Engineering, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong.
43. Zaghloul KA, Boahen K. Optic Nerve Signals in a Neuromorphic Chip I: Outer and Inner Retina Models. IEEE Trans Biomed Eng 2004; 51:657-66. [PMID: 15072220] [DOI: 10.1109/tbme.2003.821039]
Abstract
We present a novel model for the mammalian retina and analyze its behavior. Our outer retina model performs bandpass spatiotemporal filtering. It comprises two reciprocally connected resistive grids that model the cone and horizontal cell syncytia. We show analytically that its sensitivity is proportional to the space-constant ratio of the two grids while its half-max response is set by the local average intensity. Thus, this outer retina model realizes luminance adaptation. Our inner retina model performs high-pass temporal filtering. It features slow negative feedback whose strength is modulated by a locally computed measure of temporal contrast, modeling two kinds of amacrine cells, one narrow-field, the other wide-field. We show analytically that, when the input is spectrally pure, the corner frequency tracks the input frequency. But when the input is broadband, the corner frequency is proportional to contrast. Thus, this inner retina model realizes temporal frequency adaptation as well as contrast gain control. We also present CMOS circuit designs for our retina model in this paper. Experimental measurements from the fabricated chip, and validation of our analytical results, are presented in the companion paper [Zaghloul and Boahen (2004)].
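A discrete software sketch of the outer retina stage described above: cone and horizontal-cell layers modeled as two diffusive (resistive) grids whose difference gives a bandpass, luminance-adapted output. The model's reciprocal cone-horizontal feedback is simplified here to a feedforward cascade, and space constants and iteration counts are arbitrary:

```python
import numpy as np

def diffuse(image, n_iter, rate):
    """Crude resistive-grid smoothing: repeated 4-neighbour averaging."""
    out = image.copy()
    for _ in range(n_iter):
        neighbours = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
                      np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out = (1 - rate) * out + rate * neighbours
    return out

def outer_retina(intensity, cone_iters=1, horiz_iters=20, rate=0.5):
    """Bandpass spatial filtering as the difference of two grids: a narrowly
    coupled cone layer minus a widely coupled horizontal-cell layer, normalized
    by the local mean so the response adapts to luminance (illustrative)."""
    cone = diffuse(intensity, cone_iters, rate)       # small space constant
    horizontal = diffuse(cone, horiz_iters, rate)     # large space constant
    return (cone - horizontal) / (horizontal + 1e-6)  # surround subtraction + adaptation

img = np.random.rand(64, 64) * 100.0
response = outer_retina(img)
```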
Affiliation(s)
- Kareem A Zaghloul
- Department of Neurosurgery, University of Pennsylvania, Philadelphia, PA 19104, USA.
44.
Abstract
Seeking to match the brain's computational efficiency, we draw inspiration from its neural circuits. To model the four main output (ganglion) cell types found in the retina, we morphed outer and inner retina circuits into a 96 × 60-photoreceptor, 3.5 × 3.3 mm², 0.35-μm CMOS chip. Our retinomorphic chip produces spike trains for 3600 ganglion cells (GCs), and consumes 62.7 mW at 45 spikes/s/GC. This chip, which is the first silicon retina to successfully model inner retina circuitry, approaches the spatial density of the retina. We present experimental measurements showing that the chip's subthreshold current-mode circuits realize luminance adaptation, bandpass spatiotemporal filtering, temporal adaptation and contrast gain control. The four different GC outputs produced by our chip encode light onset or offset in a sustained or transient fashion, producing a quadrature-like representation. The retinomorphic chip's circuit design is described in a companion paper [Zaghloul and Boahen (2004)].
Affiliation(s)
- Kareem A Zaghloul
- Department of Neurosurgery, University of Pennsylvania, Philadelphia, PA 19104, USA.