1
Alfihed S, Majrashi M, Ansary M, Alshamrani N, Albrahim SH, Alsolami A, Alamari HA, Zaman A, Almutairi D, Kurdi A, Alzaydi MM, Tabbakh T, Al-Otaibi F. Non-Invasive Brain Sensing Technologies for Modulation of Neurological Disorders. Biosensors 2024; 14:335. PMID: 39056611; PMCID: PMC11274405; DOI: 10.3390/bios14070335.
Abstract
The field of non-invasive brain sensing and modulation is developing rapidly, with new techniques constantly emerging. This study examines non-invasive brain neuromodulation, a safer and potentially effective approach for treating a spectrum of neurological and psychiatric disorders. Unlike traditional deep brain stimulation (DBS) surgery, non-invasive techniques use ultrasound, electrical currents, and electromagnetic fields to stimulate the brain from outside the skull, eliminating surgical risks and improving patient comfort. We explore the mechanisms of modalities such as transcranial direct current stimulation (tDCS) and transcranial magnetic stimulation (TMS), highlighting their potential to address chronic pain, anxiety, Parkinson's disease, and depression. We also examine closed-loop neuromodulation, which personalizes stimulation based on real-time brain activity. After acknowledging the limitations of current technologies, we conclude by proposing future research avenues for this rapidly evolving field, which has immense potential to transform neurological and psychiatric care and lays the foundation for continued advances in innovative non-invasive brain sensing technologies.
Affiliation(s)
- Salman Alfihed
- Microelectronics and Semiconductor Institute, King Abdulaziz City for Science and Technology (KACST), Riyadh 11442, Saudi Arabia
- Majed Majrashi
- Bioengineering Institute, King Abdulaziz City for Science and Technology (KACST), Riyadh 11442, Saudi Arabia
- Muhammad Ansary
- Neuroscience Center Research Unit, King Faisal Specialist Hospital and Research Centre, Riyadh 11211, Saudi Arabia
- Naif Alshamrani
- Microelectronics and Semiconductor Institute, King Abdulaziz City for Science and Technology (KACST), Riyadh 11442, Saudi Arabia
- Shahad H. Albrahim
- Bioengineering Institute, King Abdulaziz City for Science and Technology (KACST), Riyadh 11442, Saudi Arabia
- Abdulrahman Alsolami
- Microelectronics and Semiconductor Institute, King Abdulaziz City for Science and Technology (KACST), Riyadh 11442, Saudi Arabia
- Hala A. Alamari
- Bioengineering Institute, King Abdulaziz City for Science and Technology (KACST), Riyadh 11442, Saudi Arabia
- Adnan Zaman
- Microelectronics and Semiconductor Institute, King Abdulaziz City for Science and Technology (KACST), Riyadh 11442, Saudi Arabia
- Dhaifallah Almutairi
- Microelectronics and Semiconductor Institute, King Abdulaziz City for Science and Technology (KACST), Riyadh 11442, Saudi Arabia
- Abdulaziz Kurdi
- Advanced Materials Institute, King Abdulaziz City for Science and Technology (KACST), Riyadh 11442, Saudi Arabia
- Mai M. Alzaydi
- Bioengineering Institute, King Abdulaziz City for Science and Technology (KACST), Riyadh 11442, Saudi Arabia
- Thamer Tabbakh
- Microelectronics and Semiconductor Institute, King Abdulaziz City for Science and Technology (KACST), Riyadh 11442, Saudi Arabia
- Faisal Al-Otaibi
- Neuroscience Center Research Unit, King Faisal Specialist Hospital and Research Centre, Riyadh 11211, Saudi Arabia
2
Christenson MP, Sanz Diez A, Heath SL, Saavedra-Weisenhaus M, Adachi A, Nern A, Abbott LF, Behnia R. Hue selectivity from recurrent circuitry in Drosophila. Nat Neurosci 2024; 27:1137-1147. PMID: 38755272; PMCID: PMC11537989; DOI: 10.1038/s41593-024-01640-4.
Abstract
In the perception of color, wavelengths of light reflected off objects are transformed into the derived quantities of brightness, saturation and hue. Neurons responding selectively to hue have been reported in primate cortex, but it is unknown how their narrow tuning in color space is produced by upstream circuit mechanisms. We report the discovery of neurons in the Drosophila optic lobe with hue-selective properties, which enables circuit-level analysis of color processing. From our analysis of an electron microscopy volume of a whole Drosophila brain, we construct a connectomics-constrained circuit model that accounts for this hue selectivity. Our model predicts that recurrent connections in the circuit are critical for generating hue selectivity. Experiments using genetic manipulations to perturb recurrence in adult flies confirm this prediction. Our findings reveal a circuit basis for hue selectivity in color vision.
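The central claim, that recurrence rather than feedforward filtering alone produces narrow hue tuning, can be illustrated with a generic ring-model sketch. This is not the connectome-constrained model from the paper; the network size, connectivity profile, and parameters below are hypothetical, and removing the recurrent weights mirrors only the logic of the perturbation experiments.

```python
# Minimal rate-model sketch of how recurrence can sharpen hue tuning on a ring
# of hue-selective units. Illustrative only: this is NOT the connectome-
# constrained model from the paper; N, tau, and the connectivity profile below
# are hypothetical.
import numpy as np

N = 60                                     # units tiling hue angle 0..2*pi
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)

def ring_weights(k_exc=2.0, k_inh=1.0):
    """Local excitation minus broad, uniform inhibition between hue-tuned units."""
    d = theta[:, None] - theta[None, :]
    return k_exc * np.exp(np.cos(d) - 1.0) / N - k_inh / N

def steady_response(stim_hue, W, tau=10.0, dt=0.1, steps=2000):
    """Integrate dr/dt = (-r + relu(W r + feedforward)) / tau to steady state."""
    ff = 0.5 + 0.5 * np.cos(theta - stim_hue)      # broad feedforward drive
    r = np.zeros(N)
    for _ in range(steps):
        r += dt / tau * (-r + np.maximum(0.0, W @ r + ff))
    return r

def half_width_deg(r):
    """Width (degrees of hue angle) over which the response exceeds half-maximum."""
    return (r > 0.5 * r.max()).sum() * 360.0 / N

r_rec = steady_response(np.pi / 3, ring_weights())        # intact recurrence
r_ff = steady_response(np.pi / 3, np.zeros((N, N)))       # recurrence removed
print(f"half-width: feedforward only {half_width_deg(r_ff):.0f} deg, "
      f"with recurrence {half_width_deg(r_rec):.0f} deg")
```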
Affiliation(s)
- Matthias P Christenson
- Zuckerman Institute, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Alvaro Sanz Diez
- Zuckerman Institute, Columbia University, New York, NY, USA
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Sarah L Heath
- Zuckerman Institute, Columbia University, New York, NY, USA
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Maia Saavedra-Weisenhaus
- Zuckerman Institute, Columbia University, New York, NY, USA
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Atsuko Adachi
- Zuckerman Institute, Columbia University, New York, NY, USA
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Aljoscha Nern
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- L F Abbott
- Zuckerman Institute, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY, USA
- Rudy Behnia
- Zuckerman Institute, Columbia University, New York, NY, USA
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY, USA
3
Jha A, Ashwood ZC, Pillow JW. Active Learning for Discrete Latent Variable Models. Neural Comput 2024; 36:437-474. PMID: 38363661; DOI: 10.1162/neco_a_01646.
Abstract
Active learning seeks to reduce the amount of data required to fit the parameters of a model, thus forming an important class of techniques in modern machine learning. However, past work on active learning has largely overlooked latent variable models, which play a vital role in neuroscience, psychology, and a variety of other engineering and scientific disciplines. Here we address this gap by proposing a novel framework for maximum-mutual-information input selection for discrete latent variable regression models. We first apply our method to a class of models known as mixtures of linear regressions (MLR). While it is well known that active learning confers no advantage for linear-gaussian regression models, we use Fisher information to show analytically that active learning can nevertheless achieve large gains for mixtures of such models, and we validate this improvement using both simulations and real-world data. We then consider a powerful class of temporally structured latent variable models given by a hidden Markov model (HMM) with generalized linear model (GLM) observations, which has recently been used to identify discrete states from animal decision-making data. We show that our method substantially reduces the amount of data needed to fit GLM-HMMs and outperforms a variety of approximate methods based on variational and amortized inference. Infomax learning for latent variable models thus offers a powerful approach for characterizing temporally structured latent states, with a wide variety of applications in neuroscience and beyond.
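As a rough illustration of the infomax idea, choose the next input to maximize the mutual information between the upcoming observation and the quantities being learned, the sketch below scores candidate inputs for a mixture of linear regressions by I(y; z | x), the information the response carries about the latent component. It is a simplified stand-in for the paper's Fisher-information-based criterion; all toy sizes and parameters are made up.

```python
# Rough sketch of infomax input selection for a mixture of linear regressions
# (MLR): score each candidate input x by I(y; z | x), the mutual information
# between the upcoming observation y and the latent component z under the
# current model estimate. A simplified stand-in for the paper's criterion.
import numpy as np

def mixture_entropy(means, weights, sigma, grid):
    """Differential entropy of a 1-D Gaussian mixture via numerical integration."""
    dens = sum(w * np.exp(-(grid - m) ** 2 / (2 * sigma ** 2)) /
               (sigma * np.sqrt(2 * np.pi)) for w, m in zip(weights, means))
    dens = np.clip(dens, 1e-300, None)
    return -np.sum(dens * np.log(dens)) * (grid[1] - grid[0])

def infomax_pick(candidates, W, weights, sigma=0.5):
    """Return the candidate x maximizing I(y; z | x) = H(y|x) - H(y|z, x).

    candidates : (n, d) pool of allowed inputs
    W          : (K, d) current per-component regression weight estimates
    weights    : (K,)   current mixing proportions
    """
    h_y_given_z = 0.5 * np.log(2 * np.pi * np.e * sigma ** 2)   # Gaussian entropy
    scores = []
    for x in candidates:
        means = W @ x
        grid = np.linspace(means.min() - 5 * sigma, means.max() + 5 * sigma, 2000)
        scores.append(mixture_entropy(means, weights, sigma, grid) - h_y_given_z)
    return candidates[int(np.argmax(scores))]

# Toy usage: two components whose predictions diverge most for inputs with a
# large first coordinate, so infomax should favor such inputs.
rng = np.random.default_rng(0)
W_hat = np.array([[1.0, 0.3], [-1.0, 0.3]])
pool = rng.uniform(-1.0, 1.0, size=(200, 2))
print("selected input:", infomax_pick(pool, W_hat, np.array([0.5, 0.5])))
```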
Affiliation(s)
- Aditi Jha
- Princeton Neuroscience Institute and Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, U.S.A.
- Zoe C Ashwood
- Princeton Neuroscience Institute and Department of Computer Science, Princeton University, Princeton, NJ 08544, U.S.A.
- Jonathan W Pillow
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, U.S.A.
4
Closed-Loop Systems in Neuromodulation. Neurosurg Clin N Am 2022; 33:297-303. DOI: 10.1016/j.nec.2022.02.008.
5
Nonlinear Spatial Integration Underlies the Diversity of Retinal Ganglion Cell Responses to Natural Images. J Neurosci 2021; 41:3479-3498. PMID: 33664129; PMCID: PMC8051676; DOI: 10.1523/jneurosci.3075-20.2021.
Abstract
How neurons encode natural stimuli is a fundamental question for sensory neuroscience. In the early visual system, standard encoding models assume that neurons linearly filter incoming stimuli through their receptive fields, but artificial stimuli, such as contrast-reversing gratings, often reveal nonlinear spatial processing. We investigated to what extent such nonlinear processing is relevant for the encoding of natural images in retinal ganglion cells in mice of either sex. We found that standard linear receptive field models yielded good predictions of responses to flashed natural images for a subset of cells but failed to capture the spiking activity for many others. Cells with poor model performance displayed pronounced sensitivity to fine spatial contrast and local signal rectification as the dominant nonlinearity. By contrast, sensitivity to high-frequency contrast-reversing gratings, a classical test for nonlinear spatial integration, was not a good predictor of model performance and thus did not capture the variability of nonlinear spatial integration under natural images. In addition, we observed a class of nonlinear ganglion cells with inverse tuning for spatial contrast, responding more strongly to spatially homogeneous than to spatially structured stimuli. These findings highlight the diversity of receptive field nonlinearities as a crucial component for understanding early sensory encoding in the context of natural stimuli. SIGNIFICANCE STATEMENT Experiments with artificial visual stimuli have revealed that many types of retinal ganglion cells pool spatial input signals nonlinearly. However, it is still unclear how relevant this nonlinear spatial integration is when the input signals are natural images. Here we analyze retinal responses to natural scenes in large populations of mouse ganglion cells. We show that nonlinear spatial integration strongly influences responses to natural images for some ganglion cells, but not for others. Cells with nonlinear spatial integration were sensitive to spatial structure inside their receptive fields, and a small group of cells displayed a surprising sensitivity to spatially homogeneous stimuli. Traditional analyses with contrast-reversing gratings did not predict this variability of nonlinear spatial integration under natural images.
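The distinction the study draws between linear receptive-field models and nonlinear spatial integration can be sketched with a toy comparison: a single linear filter lets balanced bright and dark regions cancel before the output nonlinearity, whereas rectified subunits do not. The filter, subunit layout, and "images" below are illustrative only, not the models fitted in the paper.

```python
# Toy comparison of the two model classes discussed above: a linear receptive-
# field (LN) model versus a model with rectified spatial subunits, which is
# sensitive to fine spatial structure inside the receptive field.
import numpy as np

rng = np.random.default_rng(1)

def gaussian_rf(size, sigma):
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def ln_response(image, rf, gain=5.0):
    """Linear filtering over the whole receptive field, then output rectification."""
    return np.maximum(0.0, gain * np.sum(image * rf))

def subunit_response(image, rf, n_sub=4, gain=5.0):
    """Split the receptive field into subunits, rectify each, then sum."""
    weighted = image * rf
    drives = [np.maximum(0.0, block.sum())
              for row in np.vsplit(weighted, n_sub)
              for block in np.hsplit(row, n_sub)]
    return np.maximum(0.0, gain * sum(drives))

rf = gaussian_rf(32, sigma=6.0)
fine = rng.standard_normal((32, 32))          # high spatial contrast, near-zero mean
coarse = np.full((32, 32), fine.mean())       # spatially homogeneous, same mean
for name, img in [("fine structure", fine), ("homogeneous", coarse)]:
    print(f"{name:>15}: LN = {ln_response(img, rf):.3f}, "
          f"subunits = {subunit_response(img, rf):.3f}")
```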
6
Paiton DM, Frye CG, Lundquist SY, Bowen JD, Zarcone R, Olshausen BA. Selectivity and robustness of sparse coding networks. J Vis 2020; 20(12):10. PMID: 33237290; PMCID: PMC7691792; DOI: 10.1167/jov.20.12.10.
Abstract
We investigate how the population nonlinearities resulting from lateral inhibition and thresholding in sparse coding networks influence neural response selectivity and robustness. We show that when compared to pointwise nonlinear models, such population nonlinearities improve the selectivity to a preferred stimulus and protect against adversarial perturbations of the input. These findings are predicted from the geometry of the single-neuron iso-response surface, which provides new insight into the relationship between selectivity and adversarial robustness. Inhibitory lateral connections curve the iso-response surface outward in the direction of selectivity. Since adversarial perturbations are orthogonal to the iso-response surface, adversarial attacks tend to be aligned with directions of selectivity. Consequently, the network is less easily fooled by perceptually irrelevant perturbations to the input. Together, these findings point to benefits of integrating computational principles found in biological vision systems into artificial neural networks.
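A minimal example of the kind of population nonlinearity discussed here, thresholding plus lateral inhibition through feature overlaps, is the locally competitive algorithm (LCA) formulation of sparse coding, sketched below with a random dictionary and toy sizes rather than the trained networks analyzed in the paper.

```python
# Minimal sparse coding sketch in the style of a locally competitive algorithm
# (LCA): unit states are driven by feedforward input, inhibit one another
# through dictionary overlaps, and pass through a soft threshold. Illustrative
# only: random dictionary, toy sizes.
import numpy as np

rng = np.random.default_rng(2)

def soft_threshold(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca_encode(x, Phi, lam=0.1, tau=10.0, dt=1.0, steps=300):
    """Return sparse coefficients a for input x under dictionary Phi (d x n)."""
    n = Phi.shape[1]
    G = Phi.T @ Phi - np.eye(n)           # lateral inhibition via feature overlap
    b = Phi.T @ x                         # feedforward drive
    u = np.zeros(n)
    for _ in range(steps):
        a = soft_threshold(u, lam)
        u += (dt / tau) * (b - u - G @ a)
    return soft_threshold(u, lam)

d, n = 64, 128                             # overcomplete: 128 features, 64-dim input
Phi = rng.standard_normal((d, n))
Phi /= np.linalg.norm(Phi, axis=0)         # unit-norm dictionary elements
x = Phi[:, 5] + 0.05 * rng.standard_normal(d)
a = lca_encode(x, Phi)
print(f"active coefficients: {(np.abs(a) > 1e-6).sum()} of {n}; "
      f"reconstruction error: {np.linalg.norm(x - Phi @ a):.3f}")
```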
Affiliation(s)
- Dylan M Paiton
- Vision Science Graduate Group, University of California Berkeley, Berkeley, CA, USA
- Redwood Center for Theoretical Neuroscience, University of California Berkeley, Berkeley, CA, USA
- Charles G Frye
- Redwood Center for Theoretical Neuroscience, University of California Berkeley, Berkeley, CA, USA
- Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, CA, USA
- Sheng Y Lundquist
- Department of Computer Science, Portland State University, Portland, OR, USA
- Joel D Bowen
- Vision Science Graduate Group, University of California Berkeley, Berkeley, CA, USA
- Ryan Zarcone
- Redwood Center for Theoretical Neuroscience, University of California Berkeley, Berkeley, CA, USA
- Biophysics, University of California Berkeley, Berkeley, CA, USA
- Bruno A Olshausen
- Vision Science Graduate Group, University of California Berkeley, Berkeley, CA, USA
- Redwood Center for Theoretical Neuroscience, University of California Berkeley, Berkeley, CA, USA
- Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, CA, USA
7
Zanos S. Closed-Loop Neuromodulation in Physiological and Translational Research. Cold Spring Harb Perspect Med 2019; 9:a034314. PMID: 30559253; DOI: 10.1101/cshperspect.a034314.
Abstract
Neuromodulation, the focused delivery of energy to neural tissue to affect neural or physiological processes, is a common method to study the physiology of the nervous system. It is also used successfully as a treatment for disorders in which the nervous system is affected or implicated. Typically, neurostimulation is delivered in open-loop mode (i.e., according to a predetermined schedule and independently of the state of the organ or physiological system whose function is to be modulated). However, the physiology of the nervous system or the modulated organ can be dynamic, and the same stimulus may have different effects depending on the underlying state. As a result, open-loop stimulation may fail to restore the desired function or may cause side effects. In such cases, it may be preferable to administer the neuromodulation intervention in closed-loop mode. In a closed-loop neuromodulation (CLN) system, stimulation is delivered when certain physiological states or conditions are met (responsive neurostimulation); the stimulation parameters can also be adjusted dynamically to optimize the effect of stimulation in real time (adaptive neurostimulation). In this review, the reasons and conditions for using CLN are discussed, the basic components of a CLN system are described, and examples of CLN systems used in physiological and translational research are presented.
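The responsive and adaptive modes described above reduce to a simple control loop: read a physiological biomarker, stimulate when it crosses a threshold, and adjust the stimulation parameters according to the measured effect. The sketch below is schematic only; read_biomarker and deliver_stimulus are hypothetical placeholders for device-specific interfaces.

```python
# Schematic sketch of the two closed-loop modes described above: responsive
# (stimulate when a biomarker crosses a threshold) and adaptive (adjust the
# stimulation amplitude toward a target effect). The I/O functions are
# hypothetical placeholders, not a real device API.
import random

def read_biomarker():
    """Placeholder for a real-time physiological readout (e.g., band power)."""
    return random.gauss(1.0, 0.4)

def deliver_stimulus(amplitude_ma):
    """Placeholder for a stimulator command; here it only logs the call."""
    print(f"stimulating at {amplitude_ma:.2f} mA")

def closed_loop(threshold=1.5, target=1.0, amplitude=1.0, gain=0.1, n_cycles=20):
    for _ in range(n_cycles):
        biomarker = read_biomarker()
        if biomarker > threshold:                       # responsive: state-triggered
            deliver_stimulus(amplitude)
            # adaptive: nudge amplitude according to how far the biomarker
            # overshoots the target, within safety bounds
            amplitude = min(5.0, max(0.1, amplitude + gain * (biomarker - target)))
    return amplitude

closed_loop()
```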
Affiliation(s)
- Stavros Zanos
- Translational Neurophysiology Laboratory, Center for Bioelectronic Medicine, Feinstein Institute for Medical Research, Northwell Health, Manhasset, New York 11030
8
Eberhardt F, Herz AVM, Häusler S. Tuft dendrites of pyramidal neurons operate as feedback-modulated functional subunits. PLoS Comput Biol 2019; 15:e1006757. PMID: 30840615; PMCID: PMC6402658; DOI: 10.1371/journal.pcbi.1006757.
Abstract
Dendrites of pyramidal cells exhibit complex morphologies and contain a variety of ionic conductances, which generate non-trivial integrative properties. Basal and proximal apical dendrites have been shown to function as independent computational subunits within a two-layer feedforward processing scheme. The outputs of the subunits are linearly summed and passed through a final non-linearity. It is an open question whether this mathematical abstraction can be applied to apical tuft dendrites as well. Using a detailed compartmental model of CA1 pyramidal neurons and a novel theoretical framework based on iso-response methods, we first show that somatic sub-threshold responses to brief synaptic inputs cannot be described by a two-layer feedforward model. Then, we relax the core assumption of subunit independence and introduce non-linear feedback from the output layer to the subunit inputs. We find that additive feedback alone explains the somatic responses to synaptic inputs for most of the branches in the apical tuft. Individual dendritic branches bidirectionally modulate the thresholds of their input-output curves without significantly changing the gains. In contrast to these findings for precisely timed inputs, we show that neuronal computations based on firing rates can be accurately described by purely feedforward two-layer models. Our findings support the view that dendrites of pyramidal neurons possess non-linear analog processing capabilities that critically depend on the location of synaptic inputs. The iso-response framework proposed in this computational study is highly efficient and could be directly applied to biological neurons. Pyramidal neurons are the principal cell type in the cerebral cortex. Revealing how these cells operate is key to understanding the dynamics and computations of cortical circuits. However, it is still a matter of debate how pyramidal neurons transform their synaptic inputs into spike outputs. Recent studies have proposed that individual dendritic branches or subtrees may function as independent computational subunits. Although experimental work consolidated this abstraction for basal and proximal apical dendrites, a rigorous test for tuft dendrites is still missing. By carrying out a computational study we demonstrate that dendritic branches in the tuft do not form independent subunits; however, their integrative properties can be captured by a model that incorporates modulatory feedback between these subunits. This conclusion has been reached using a novel theoretical framework that can be directly integrated into multi-electrode or photo-stimulation paradigms to reveal the dendritic computations of biological neurons.
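The model comparison at the heart of the study, a two-layer feedforward subunit model versus one whose subunit thresholds are additively shifted by feedback from the output, can be sketched as follows. The sigmoidal subunit curves, weights, and feedback gain below are illustrative, not fitted to the compartmental model in the paper.

```python
# Sketch of the model comparison described above: a two-layer feedforward
# subunit model versus a variant in which feedback from the output additively
# shifts each subunit's threshold. All parameters are illustrative only.
import numpy as np

def subunit_out(drive, threshold):
    """Sigmoidal subunit input-output curve with an adjustable threshold."""
    return 1.0 / (1.0 + np.exp(-(drive - threshold)))

def feedforward_model(x, w_sub, thresholds, w_out):
    """Independent subunits, linearly summed, then a final output rectification."""
    return np.maximum(0.0, subunit_out(x @ w_sub, thresholds) @ w_out)

def feedback_model(x, w_sub, thresholds, w_out, fb_gain=0.5, iters=20):
    """Output feedback additively shifts subunit thresholds; iterate to a fixed point."""
    out = 0.0
    for _ in range(iters):
        eff_thresh = thresholds - fb_gain * out
        out = np.maximum(0.0, subunit_out(x @ w_sub, eff_thresh) @ w_out)
    return out

rng = np.random.default_rng(3)
n_syn, n_sub = 40, 5
w_sub = np.abs(rng.standard_normal((n_syn, n_sub)))   # synapse-to-subunit mapping
w_out = np.full(n_sub, 0.3)                           # subunit-to-soma weights
thr = np.full(n_sub, 16.0)                            # baseline subunit thresholds
x = rng.random(n_syn)                                 # one synaptic input pattern
print("feedforward model output:", feedforward_model(x, w_sub, thr, w_out))
print("feedback-modulated output:", feedback_model(x, w_sub, thr, w_out))
```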
Affiliation(s)
- Florian Eberhardt
- Bernstein Center for Computational Neuroscience Munich, Germany
- Faculty of Biology, Ludwig-Maximilians-Universität München, Germany
- Andreas V. M. Herz
- Bernstein Center for Computational Neuroscience Munich, Germany
- Faculty of Biology, Ludwig-Maximilians-Universität München, Germany
- Stefan Häusler
- Bernstein Center for Computational Neuroscience Munich, Germany
- Faculty of Biology, Ludwig-Maximilians-Universität München, Germany
9
Doruk RO, Zhang K. Adaptive Stimulus Design for Dynamic Recurrent Neural Network Models. Front Neural Circuits 2019; 12:119. PMID: 30723397; PMCID: PMC6349832; DOI: 10.3389/fncir.2018.00119.
Abstract
We present an adaptive stimulus design method for efficiently estimating the parameters of a dynamic recurrent network model with interacting excitatory and inhibitory neuronal populations. Although stimuli that are optimized for model parameter estimation should, in theory, have advantages over nonadaptive random stimuli, in practice it remains unclear in what way and to what extent an optimal design of time-varying stimuli may actually improve parameter estimation for this common type of recurrent network model. Here we specified the time course of each stimulus by a Fourier series whose amplitudes and phases were determined by maximizing a utility function based on the Fisher information matrix. To facilitate the optimization process, we have derived differential equations that govern the time evolution of the gradients of the utility function with respect to the stimulus parameters. The network parameters were estimated by maximum likelihood from the spike train data generated by an inhomogeneous Poisson process from the continuous network state. The adaptive design process was repeated in a closed loop, alternating between optimal stimulus design and parameter estimation from the updated stimulus-response data. Our results confirmed that, compared with random stimuli, optimally designed stimuli elicited responses with significantly better likelihood values for parameter estimation. Furthermore, all individual parameters, including the time constants and the connection weights, were recovered more accurately by the optimal design method. We also examined how the errors of different parameter estimates were correlated, and proposed heuristic formulas to account for the correlation patterns by an approximate parameter-confounding theory. Our results suggest that although adaptive optimal stimulus design incurs considerable computational cost even for the simplest excitatory-inhibitory recurrent network model, it may potentially help save time in experiments by reducing the number of stimuli needed for network parameter estimation.
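The design loop can be sketched in simplified form: parameterize each candidate stimulus by Fourier amplitudes and phases, score it by a Fisher-information (D-optimality) utility under the current parameter estimate, and present the best candidate before refitting. For brevity the sketch below uses a single Poisson encoding model and random search over stimulus parameters, not the excitatory-inhibitory network model and gradient-based optimization used in the paper.

```python
# Simplified sketch of the adaptive design loop described above: Fourier-
# parameterized candidate stimuli are scored by log det of the accumulated
# Fisher information (D-optimality) and the best one is presented next.
import numpy as np

rng = np.random.default_rng(4)
T = np.linspace(0.0, 1.0, 100)                 # one-second stimulus, 10 ms bins
N_FREQ = 3

def stimulus(amps, phases):
    """Stimulus time course defined by a short Fourier series."""
    return sum(a * np.sin(2 * np.pi * (k + 1) * T + p)
               for k, (a, p) in enumerate(zip(amps, phases)))

def features(stim):
    """Design matrix for the toy Poisson model: bias, stimulus, squared stimulus."""
    return np.column_stack([np.ones_like(stim), stim, stim ** 2])

def fisher_info(stim, w_hat, dt=0.01):
    """FIM of a Poisson model with rate exp(X w): sum_t rate_t * x_t x_t^T * dt."""
    X = features(stim)
    rate = np.exp(X @ w_hat)
    return (X * (rate * dt)[:, None]).T @ X

def pick_next_stimulus(info_so_far, w_hat, n_candidates=200):
    """Random-search maximization of log det(accumulated information)."""
    best_stim, best_util = None, -np.inf
    for _ in range(n_candidates):
        stim = stimulus(rng.uniform(0, 2, N_FREQ), rng.uniform(0, 2 * np.pi, N_FREQ))
        util = np.linalg.slogdet(info_so_far + fisher_info(stim, w_hat))[1]
        if util > best_util:
            best_stim, best_util = stim, util
    return best_stim, best_util

w_hat = np.array([1.0, 0.5, -0.2])             # current maximum-likelihood estimate
info = np.eye(3)                               # prior information from earlier trials
next_stim, util = pick_next_stimulus(info, w_hat)
print(f"selected stimulus utility (log det): {util:.2f}")
```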
Affiliation(s)
- R. Ozgur Doruk
- Department of Electrical and Electronic Engineering, Atilim University, Golbasi, Turkey
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Kechen Zhang
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, United States
10
Seu GP, Angotzi GN, Boi F, Raffo L, Berdondini L, Meloni P. Exploiting All Programmable SoCs in Neural Signal Analysis: A Closed-Loop Control for Large-Scale CMOS Multielectrode Arrays. IEEE Trans Biomed Circuits Syst 2018; 12:839-850. PMID: 29993584; DOI: 10.1109/TBCAS.2018.2830659.
Abstract
Microelectrode array (MEA) systems with up to several thousand recording electrodes and electrical or optical stimulation capabilities are commercially available or described in the literature. By exploiting their submillisecond temporal and micrometric spatial resolution to record bioelectrical signals, such emerging MEA systems are increasingly used in neuroscience to study the complex dynamics of neuronal networks and brain circuits. However, they typically lack the capability of implementing real-time feedback between the detection of neuronal spiking events and stimulation, thus restricting large-scale neural interfacing to open-loop conditions. To exploit the potential of such large-scale recording and stimulation systems, we designed and validated a fully reconfigurable FPGA-based processing system for closed-loop multichannel control. By adopting a Xilinx Zynq all-programmable system on chip that integrates reconfigurable logic and a dual-core ARM-based processor on the same device, the proposed platform permits low-latency preprocessing (filtering and detection) of spikes acquired simultaneously from several thousands of electrode sites. To demonstrate the proposed platform, we tested its performance through ex vivo experiments on the mouse retina using a state-of-the-art planar high-density MEA that samples 4096 electrodes at 18 kHz to record light-evoked spikes from several thousand retinal ganglion cells simultaneously. Results demonstrate that the platform provides a total latency from whole-array data acquisition to stimulus generation below 2 ms. This opens the opportunity to design closed-loop experiments on neural systems and biomedical applications using emerging generations of planar or implantable large-scale MEA systems.
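Although the platform itself is implemented in FPGA logic, the per-channel processing chain it performs, filtering, threshold-based spike detection, and a population-level stimulation trigger, can be sketched in software. The filter, thresholds, channel count, and trigger rule below are illustrative placeholders, not the authors' hardware design.

```python
# Software sketch of the processing chain such a closed-loop platform implements
# in hardware: high-pass filtering, threshold spike detection, and a trigger
# when enough channels fire within a short window. Parameters are illustrative.
import numpy as np

FS = 18_000                  # samples per second per electrode, as in the abstract
N_CH = 64                    # toy channel count (the real array has 4096 electrodes)

def highpass(x, alpha=0.95):
    """One-pole high-pass filter to remove slow local-field-potential components."""
    y = np.zeros_like(x)
    prev_x = prev_y = 0.0
    for i, xi in enumerate(x):
        y[i] = alpha * (prev_y + xi - prev_x)
        prev_x, prev_y = xi, y[i]
    return y

def detect_spikes(x, n_sigma=4.0):
    """Negative threshold crossings at n_sigma times a robust noise estimate."""
    noise = np.median(np.abs(x)) / 0.6745
    return np.flatnonzero(x < -n_sigma * noise)

def closed_loop_trigger(data, window_ms=2.0, min_channels=10):
    """Sample indices where enough channels spike together to trigger stimulation."""
    window = int(window_ms * 1e-3 * FS)
    counts = np.zeros(data.shape[1])
    for ch in range(data.shape[0]):
        for t in detect_spikes(highpass(data[ch])):
            counts[t:t + window] += 1
    return np.flatnonzero(counts >= min_channels)

rng = np.random.default_rng(5)
toy = rng.standard_normal((N_CH, FS // 10)) * 5.0      # 100 ms of noise (in uV)
toy[:, 900] -= 60.0                                    # synchronous population event
print("stimulation trigger samples:", closed_loop_trigger(toy)[:5])
```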
11
Potter SM, El Hady A, Fetz EE. Closed-loop neuroscience and neuroengineering. Front Neural Circuits 2014; 8:115. PMID: 25294988; PMCID: PMC4171982; DOI: 10.3389/fncir.2014.00115.
Affiliation(s)
- Steve M Potter
- Laboratory for Neuroengineering, Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- Ahmed El Hady
- Department of Non Linear Dynamics, Max Planck Institute for Dynamics and Self Organization, Goettingen, Germany
- Eberhard E Fetz
- Departments of Physiology and Biophysics and Bioengineering, Washington National Primate Research Center, University of Washington, Seattle, WA, USA
12
Abstract
Throughout different sensory systems, individual neurons integrate incoming signals over their receptive fields. The characteristics of this signal integration are crucial determinants for the neurons' functions. For ganglion cells in the vertebrate retina, receptive fields are characterized by the well-known center-surround structure and, although several studies have addressed spatial integration in the receptive field center, little is known about how visual signals are integrated in the surround. Therefore, we set out here to characterize signal integration and to identify relevant nonlinearities in the receptive field surround of ganglion cells in the isolated salamander retina by recording spiking activity with extracellular electrodes under visual stimulation of the center and surround. To quantify nonlinearities of spatial integration independently of subsequent nonlinearities of spike generation, we applied the technique of iso-response measurements as follows: using closed-loop experiments, we searched for different stimulus patterns in the surround that all reduced the center-evoked spiking activity by the same amount. The identified iso-response stimuli revealed strongly nonlinear spatial integration in the receptive field surrounds of all recorded cells. Furthermore, cell types that had been shown previously to have different nonlinearities in receptive field centers showed similar surround nonlinearities but differed systematically in the adaptive characteristics of the surround. Finally, we found that there is an optimal spatial scale of surround suppression; suppression was most effective when surround stimulation was organized into subregions of several hundred micrometers in diameter, indicating that the surround is composed of subunits that have strong center-surround organization themselves.
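The closed-loop iso-response procedure can be sketched as a simple online search: for each spatial pattern of surround stimulation, adjust its contrast (here by bisection) until the center-evoked response is reduced to a fixed criterion; the contrasts found for different patterns then trace out an iso-response curve. The toy model cell below, with squared-rectified surround subunits, stands in for the recorded ganglion cell and is illustrative only.

```python
# Sketch of the closed-loop iso-response search described above: for each
# surround stimulus pattern, contrast is bisected until the center-evoked
# response falls to a fixed criterion. Toy model cell, illustrative parameters.
import numpy as np

def model_cell(center, surround_a, surround_b):
    """Toy ganglion cell: center drive minus rectified surround subunit signals."""
    suppression = max(0.0, surround_a) ** 2 + max(0.0, surround_b) ** 2
    return max(0.0, 10.0 * center - 4.0 * suppression)

def iso_response_search(pattern, center=1.0, target=5.0, tol=0.05, max_iter=40):
    """Bisect on surround contrast c until the response matches `target`.

    `pattern` = (wa, wb) splits contrast c between two surround subregions.
    """
    lo, hi = 0.0, 5.0
    c = 0.5 * (lo + hi)
    for _ in range(max_iter):
        c = 0.5 * (lo + hi)
        response = model_cell(center, c * pattern[0], c * pattern[1])
        if abs(response - target) < tol:
            break
        if response > target:      # not yet suppressed enough -> raise contrast
            lo = c
        else:
            hi = c
    return c

# Linear surround integration would require the same total contrast regardless
# of how it is split; the rectifying subunits make the required contrast vary.
for wa in np.linspace(0.0, 1.0, 5):
    print(f"split {wa:.2f}/{1 - wa:.2f}: iso-response contrast = "
          f"{iso_response_search((wa, 1.0 - wa)):.3f}")
```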