1. Cone JJ, Mitchell AO, Parker RK, Maunsell JHR. Stimulus-dependent differences in cortical versus subcortical contributions to visual detection in mice. Curr Biol 2024; 34:1940-1952.e5. PMID: 38640924; PMCID: PMC11080572; DOI: 10.1016/j.cub.2024.03.061.
Abstract
The primary visual cortex (V1) and the superior colliculus (SC) both occupy stations early in the processing of visual information. They have long been thought to perform distinct functions, with the V1 supporting the perception of visual features and the SC regulating orienting to visual inputs. However, growing evidence suggests that the SC supports the perception of many of the same visual features traditionally associated with the V1. To distinguish V1 and SC contributions to visual processing, it is critical to determine whether both areas causally contribute to the detection of specific visual stimuli. Here, mice reported changes in visual contrast or luminance near their perceptual threshold while white noise patterns of optogenetic stimulation were delivered to V1 or SC inhibitory neurons. We then performed a reverse correlation analysis on the optogenetic stimuli to estimate a neuronal-behavioral kernel (NBK), a moment-to-moment estimate of the impact of V1 or SC inhibition on stimulus detection. We show that the earliest moments of stimulus-evoked activity in the SC are critical for the detection of both luminance and contrast changes. Strikingly, there was a robust stimulus-aligned modulation in the V1 contrast-detection NBK but no sign of a comparable modulation for luminance detection. The data suggest that behavioral detection of visual contrast depends on both V1 and SC spiking, whereas mice preferentially use SC activity to detect changes in luminance. Electrophysiological recordings showed that neurons in both the SC and V1 responded strongly to both visual stimulus types, while the reverse correlation analysis reveals when these neuronal signals actually contribute to visually guided behaviors.
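The reverse-correlation logic behind the NBK can be sketched in a few lines (an illustrative toy with invented trial counts and effect sizes, not the authors' analysis code): average the optogenetic white-noise pattern separately over missed and detected trials; their difference estimates when inhibition mattered for detection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy experiment: binary optogenetic "white noise" power in 25 ms bins,
# 40 bins spanning -500..+500 ms around stimulus onset, 4000 trials.
n_trials, n_bins = 4000, 40
opto = rng.integers(0, 2, size=(n_trials, n_bins)).astype(float)

# Assumed ground truth: inhibition during the early stimulus-evoked
# bins (20-23, i.e. 0-100 ms after onset) lowers detection probability.
p_hit = 0.75 - 0.3 * opto[:, 20:24].mean(axis=1)
hits = rng.random(n_trials) < p_hit

# Neuronal-behavioral kernel: mean opto power preceding misses minus the
# mean preceding hits; large values mark moments where inhibition
# impaired detection.
nbk = opto[~hits].mean(axis=0) - opto[hits].mean(axis=0)
print(nbk[20:24].mean().round(3), nbk[:20].mean().round(3))
```

With enough trials the kernel rises only in the bins where inhibition actually reduced detection, which is the sense in which the NBK localizes behaviorally relevant spiking in time.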
Affiliations
- Jackson J Cone
- Department of Neurobiology and Neuroscience Institute, University of Chicago, 5812 S. Ellis Ave. MC 0912, Suite P-400, Chicago, IL 60637, USA.
- Autumn O Mitchell
- Department of Neurobiology and Neuroscience Institute, University of Chicago, 5812 S. Ellis Ave. MC 0912, Suite P-400, Chicago, IL 60637, USA
- Rachel K Parker
- Department of Neurobiology and Neuroscience Institute, University of Chicago, 5812 S. Ellis Ave. MC 0912, Suite P-400, Chicago, IL 60637, USA
- John H R Maunsell
- Department of Neurobiology and Neuroscience Institute, University of Chicago, 5812 S. Ellis Ave. MC 0912, Suite P-400, Chicago, IL 60637, USA
2. Campbell A, Tanaka JW. Fast saccades to faces during the feedforward sweep. J Vis 2024; 24:16. PMID: 38630459; PMCID: PMC11037494; DOI: 10.1167/jov.24.4.16.
Abstract
Saccadic choice tasks use eye movements as a response method, typically in a task where observers are asked to saccade as quickly as possible to an image of a prespecified target category. Using this approach, face-selective saccades have been observed within 100 ms poststimulus. When taking into account oculomotor processing, this suggests that faces can be detected in as little as 70 to 80 ms. It has therefore been suggested that face detection must occur during the initial feedforward sweep, since this latency leaves little time for feedback processing. In the current experiment, we tested this hypothesis using backward masking, a technique shown to primarily disrupt feedback processing while leaving feedforward activation relatively intact. Based on minimum saccadic reaction time (SRT), we found that face detection benefited from ultra-fast, accurate saccades within 110 to 160 ms and that these eye movements are obtainable even under extreme masking conditions that limit perceptual awareness. However, masking did significantly increase the median SRT for faces. In the manual responses, we found remarkable detection accuracy for faces and houses, even when participants indicated having no visual experience of the test images. These results provide evidence for the view that the saccadic bias to faces is initiated by coarse information used to categorize faces in the feedforward sweep but that, in most cases, additional processing is required to quickly reach the threshold for saccade initiation.
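The minimum-SRT criterion used in saccadic choice tasks can be sketched as follows (a hedged illustration on synthetic latencies; the bin width, alpha level, and latency distributions below are assumptions, not the paper's data): minimum SRT is the earliest latency bin in which correct saccades significantly outnumber errors.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)

# Synthetic latencies (ms): correct saccades cluster early, mimicking
# fast face-selective responses; error saccades are spread uniformly.
correct = rng.normal(160, 40, 900)
errors = rng.uniform(80, 300, 300)

def min_srt(correct, errors, bin_ms=10, lo=80, hi=300, alpha=0.05):
    """Earliest latency bin in which correct saccades significantly
    outnumber errors (one-sided exact binomial test against p = 0.5)."""
    edges = np.arange(lo, hi + bin_ms, bin_ms)
    c_hist, _ = np.histogram(correct, edges)
    e_hist, _ = np.histogram(errors, edges)
    for left, c, e in zip(edges[:-1], c_hist, e_hist):
        n = int(c + e)
        if n == 0:
            continue
        p = sum(comb(n, k) for k in range(int(c), n + 1)) / 2.0**n
        if p < alpha:
            return int(left)
    return None

print(min_srt(correct, errors), "ms")
```

The exact binomial test per bin avoids any normal approximation at the small counts typical of the earliest latency bins.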
Affiliations
- Alison Campbell
- Department of Psychology, University of Victoria, Victoria, BC, Canada
- https://orcid.org/0000-0001-6891-8609
- James W Tanaka
- Department of Psychology, University of Victoria, Victoria, BC, Canada
- https://orcid.org/0000-0001-6559-0388
3. Westerberg JA, Schall JD, Woodman GF, Maier A. Feedforward attentional selection in sensory cortex. Nat Commun 2023; 14:5993. PMID: 37752171; PMCID: PMC10522696; DOI: 10.1038/s41467-023-41745-1.
Abstract
Salient objects grab attention because they stand out from their surroundings. Whether this phenomenon is accomplished by bottom-up sensory processing or requires top-down guidance is debated. We tested these alternative hypotheses by measuring how early and in which cortical layer(s) neural spiking distinguished a target from a distractor. We measured synaptic and spiking activity across cortical columns in mid-level area V4 of male macaque monkeys performing visual search for a color singleton. A neural signature of attentional capture was observed in the earliest response in the input layer 4. The magnitude of this response predicted response time and accuracy. Errant behavior followed errant selection. Because this response preceded top-down influences and arose in the cortical layer not targeted by top-down connections, these findings demonstrate that feedforward activation of sensory cortex can underlie attentional priority.
Affiliations
- Jacob A Westerberg
- Department of Psychology, Vanderbilt University, Nashville, TN, 37240, USA.
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, 37240, USA.
- Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN, 37240, USA.
- Department of Vision and Cognition, Netherlands Institute for Neuroscience, Royal Netherlands Academy of Arts and Sciences, 1105 BA, Amsterdam, The Netherlands.
- Jeffrey D Schall
- Centre for Vision Research, York University, Toronto, ON, M3J 1P3, Canada
- Vision: Science to Applications Program, York University, Toronto, ON, M3J 1P3, Canada
- Department of Biology, York University, Toronto, ON, M3J 1P3, Canada
- Department of Psychology, York University, Toronto, ON, M3J 1P3, Canada
- Geoffrey F Woodman
- Department of Psychology, Vanderbilt University, Nashville, TN, 37240, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, 37240, USA
- Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN, 37240, USA
- Alexander Maier
- Department of Psychology, Vanderbilt University, Nashville, TN, 37240, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, 37240, USA
- Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN, 37240, USA
4. Peterson MA, Campbell ES. Backward masking implicates cortico-cortical recurrent processes in convex figure context effects and cortico-thalamic recurrent processes in resolving figure-ground ambiguity. Front Psychol 2023; 14:1243405. PMID: 37809293; PMCID: PMC10552270; DOI: 10.3389/fpsyg.2023.1243405.
Abstract
Introduction
Previous experiments purportedly showed that image-based factors like convexity were sufficient for figure assignment. Recently, however, we found that the probability of perceiving a figure on the convex side of a central border was only slightly higher than chance for two-region displays and increased with the number of display regions; this increase was observed only when the concave regions were homogeneously colored. These convex figure context effects (CEs) revealed that figure assignment in these classic displays entails more than a response to local convexity. A Bayesian observer replicated the convex figure CEs using both a convexity object prior and a new, homogeneous background prior and made the novel prediction that the classic displays in which both the convex and concave regions were homogeneous were ambiguous during perceptual organization.

Methods
Here, we report three experiments investigating the proposed ambiguity and examining how the convex figure CEs unfold over time, with an emphasis on whether they entail recurrent processing. Displays were shown for 100 ms followed by pattern masks after ISIs of 0, 50, or 100 ms. The masking conditions were designed to add noise to recurrent processing and therefore to delay the outcome of processes in which recurrent processing plays a role. In Exp. 1, participants viewed two- and eight-region displays with homogeneous convex regions (homo-convex displays; the putatively ambiguous displays). In Exp. 2, participants viewed putatively unambiguous hetero-convex displays. In Exp. 3, displays and masks were presented to different eyes, thereby delaying mask interference in the thalamus for up to 100 ms.

Results and discussion
The results of Exps. 1 and 2 are consistent with the interpretation that recurrent processing is involved in generating the convex figure CEs and resolving the ambiguity of homo-convex displays. The results of Exp. 3 suggested that corticofugal recurrent processing is involved in resolving the ambiguity of homo-convex displays, that cortico-cortical recurrent processes play a role in generating convex figure CEs, and that these two types of recurrent processes operate in parallel. Our results add to evidence that perceptual organization evolves dynamically and reveal that stimuli that seem unambiguous can be ambiguous during perceptual organization.
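The qualitative behavior of such a Bayesian observer can be caricatured with two multiplicative priors (the prior strengths below are invented for illustration; the paper's observer was fit to data): a weak convexity prior, plus a homogeneous-background prior that compounds with the number of homogeneous putative-ground regions.

```python
def p_convex_figure(n_regions, concave_homogeneous,
                    convexity_prior=0.57, homogeneity_boost=1.25):
    """Posterior probability that the convex regions are seen as figure.
    Prior odds from a weak convexity bias are multiplied by a
    homogeneous-background factor for each homogeneous putative-ground
    region (all numbers here are illustrative, not fitted values)."""
    odds = convexity_prior / (1.0 - convexity_prior)
    if concave_homogeneous:
        odds *= homogeneity_boost ** (n_regions // 2)
    return odds / (1.0 + odds)

for n in (2, 4, 8):
    print(n, round(p_convex_figure(n, True), 3),
          round(p_convex_figure(n, False), 3))
```

This toy reproduces the reported pattern: near-chance convex-figure reports for two-region displays, rising with region number only when the concave regions are homogeneous.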
Affiliations
- Mary A. Peterson
- Department of Psychology, University of Arizona, Tucson, AZ, United States
- Cognitive Science Program, University of Arizona, Tucson, AZ, United States
- Elizabeth Salvagio Campbell
- Department of Psychology, University of Arizona, Tucson, AZ, United States
- Cognitive Science Program, University of Arizona, Tucson, AZ, United States
- College of Medicine Tucson, University of Arizona, Tucson, AZ, United States
5. Cone JJ, Mitchell AO, Parker RK, Maunsell JHR. Temporal weighting of cortical and subcortical spikes reveals stimulus dependent differences in their contributions to behavior. bioRxiv [Preprint] 2023:2023.08.23.554473. PMID: 37662213; PMCID: PMC10473714; DOI: 10.1101/2023.08.23.554473.
Abstract
The primary visual cortex (V1) and the superior colliculus (SC) both occupy stations early in the processing of visual information. They have long been thought to perform distinct functions, with V1 supporting perception of visual features and the SC regulating orienting to visual inputs. However, growing evidence suggests that the SC supports perception of many of the same visual features traditionally associated with V1. To distinguish V1 and SC contributions to visual processing, it is critical to determine whether both areas causally contribute to perception of specific visual stimuli. Here, mice reported changes in visual contrast or luminance near perceptual threshold while we presented white noise patterns of optogenetic stimulation to V1 or SC inhibitory neurons. We then performed a reverse correlation analysis on the optogenetic stimuli to estimate a neuronal-behavioral kernel (NBK), a moment-to-moment estimate of the impact of V1 or SC inhibition on stimulus detection. We show that the earliest moments of stimulus-evoked activity in SC are critical for detection of both luminance and contrast changes. Strikingly, there was a robust stimulus-aligned modulation in the V1 contrast-detection NBK, but no sign of a comparable modulation for luminance detection. The data suggest that perception of visual contrast depends on both V1 and SC spiking, whereas mice preferentially use SC activity to detect changes in luminance. Electrophysiological recordings showed that neurons in both SC and V1 responded strongly to both visual stimulus types, while the reverse correlation analysis reveals when these neuronal signals actually contribute to visually guided behaviors.
6. Wilson M, Hecker L, Joos E, Aertsen A, Tebartz van Elst L, Kornmeier J. Spontaneous Necker-cube reversals may not be that spontaneous. Front Hum Neurosci 2023; 17:1179081. PMID: 37323933; PMCID: PMC10268006; DOI: 10.3389/fnhum.2023.1179081.
Abstract
Introduction
During observation of the ambiguous Necker cube, our perception suddenly reverses between two roughly equally probable 3D interpretations. During passive observation, perceptual reversals seem to be sudden and spontaneous. A number of theoretical approaches postulate destabilization of neural representations as a precondition for reversals of ambiguous figures. In the current study, we focused on possible electroencephalogram (EEG) correlates of perceptual destabilization that may allow prediction of an upcoming perceptual reversal.

Methods
We presented ambiguous Necker cube stimuli in an onset paradigm and investigated the neural processes underlying endogenous reversals as compared to perceptual stability across two consecutive stimulus presentations. In a separate experimental condition, disambiguated cube variants were alternated randomly to exogenously induce perceptual reversals. We compared the EEG immediately before and during endogenous Necker cube reversals with corresponding time windows during exogenously induced perceptual reversals of disambiguated cube variants.

Results
For the ambiguous Necker cube stimuli, we found the earliest differences in the EEG between reversal trials and stability trials already 1 s before a reversal occurred, at bilateral parietal electrodes. The traces remained similar until approximately 1100 ms before a perceived reversal, became maximally different at around 890 ms (p = 7.59 × 10⁻⁶, Cohen's d = 1.35), and remained different until shortly before offset of the stimulus preceding the reversal. No such patterns were found for disambiguated cube variants.

Discussion
The identified EEG effects may reflect destabilized states of neural representations, related to destabilized perceptual states preceding a perceptual reversal. They further indicate that spontaneous Necker cube reversals are most probably not as spontaneous as generally thought. Rather, the destabilization may occur over a longer time scale, beginning at least 1 s before a reversal event, even though the reversal itself is perceived as spontaneous by the viewer.
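The reported effect size (Cohen's d = 1.35) is the standardized mean difference between reversal and stability trials; a minimal computation on synthetic amplitudes (sample sizes and means below are invented for illustration) looks like this:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic single-electrode EEG amplitudes ~890 ms before the
# reversal-defining stimulus: reversal trials shifted relative to
# stability trials by roughly 1.35 pooled standard deviations.
reversal = rng.normal(1.0, 1.0, 34)
stability = rng.normal(-0.35, 1.0, 34)

def cohens_d(a, b):
    """Cohen's d: mean difference over the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * np.var(a, ddof=1)
                      + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled

print(round(float(cohens_d(reversal, stability)), 2))
```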
Affiliations
- Mareike Wilson
- Department of Psychiatry and Psychotherapy, Medical Center – University of Freiburg, Freiburg, Germany
- Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Institute for Frontier Areas of Psychology and Mental Health, Freiburg, Germany
- Faculty of Biology, University of Freiburg, Freiburg, Germany
- Lukas Hecker
- Department of Psychiatry and Psychotherapy, Medical Center – University of Freiburg, Freiburg, Germany
- Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Institute for Frontier Areas of Psychology and Mental Health, Freiburg, Germany
- Faculty of Biology, University of Freiburg, Freiburg, Germany
- Department of Psychosomatic Medicine and Psychotherapy, Medical Center – University of Freiburg, Freiburg, Germany
- Ellen Joos
- INSERM U1114, Cognitive Neuropsychology and Pathophysiology of Schizophrenia, Strasbourg, France
- Ad Aertsen
- Faculty of Biology, University of Freiburg, Freiburg, Germany
- Bernstein Center Freiburg, University of Freiburg, Freiburg, Germany
- Ludger Tebartz van Elst
- Department of Psychiatry and Psychotherapy, Medical Center – University of Freiburg, Freiburg, Germany
- Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Jürgen Kornmeier
- Department of Psychiatry and Psychotherapy, Medical Center – University of Freiburg, Freiburg, Germany
- Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Institute for Frontier Areas of Psychology and Mental Health, Freiburg, Germany
- Faculty of Biology, University of Freiburg, Freiburg, Germany
7. Swindale NV, Spacek MA, Krause M, Mitelut C. Spontaneous activity in cortical neurons is stereotyped and non-Poisson. Cereb Cortex 2023; 33:6508-6525. PMID: 36708015; PMCID: PMC10233306; DOI: 10.1093/cercor/bhac521.
Abstract
Neurons fire even in the absence of sensory stimulation or task demands. Numerous theoretical studies have modeled this spontaneous activity as a Poisson process, with uncorrelated intervals between successive spikes and a spike-count variance equal to the mean. Experimental tests of this hypothesis have yielded variable results, though most have concluded that firing is not Poisson. However, these tests say little about the ways firing might deviate from randomness. Nor are they definitive, because many different distributions can have equal means and variances. Here, we characterized spontaneous spiking patterns in extracellular recordings from monkey, cat, and mouse cerebral cortex neurons using rate-normalized spike train autocorrelation functions (ACFs) and a logarithmic timescale. If activity were Poisson, this function would be flat. This was almost never the case. Instead, ACFs had diverse shapes, often with characteristic peaks in the 1-700 ms range. Shapes were stable over time, up to the longest recording periods used (51 min). They did not fall into obvious clusters. ACFs were often unaffected by visual stimulation, though some abruptly changed during brain state shifts. These behaviors may have their origin in the intrinsic biophysics and dendritic anatomy of the cells or in the inputs they receive.
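The diagnostic used here, a rate-normalized ACF on a logarithmic timescale, can be sketched as follows (a toy version run on a simulated homogeneous Poisson train; the rate, duration, and bin edges are arbitrary choices, not the paper's): each lag bin's spike-pair count is divided by the count a Poisson process of the same mean rate would produce, so Poisson firing yields a flat function near 1.

```python
import numpy as np

rng = np.random.default_rng(3)

def rate_normalized_acf(spike_times, duration, edges):
    """Counts of forward spike-pair lags per bin, divided by the counts
    a homogeneous Poisson process with the same mean rate would produce.
    A Poisson train therefore gives a flat function close to 1."""
    t = np.asarray(spike_times)
    max_lag = edges[-1]
    upper = np.searchsorted(t, t + max_lag, side="right")
    lags = np.concatenate([t[i + 1:upper[i]] - t[i] for i in range(len(t))])
    counts, _ = np.histogram(lags, edges)
    rate = len(t) / duration
    expected = rate * len(t) * np.diff(edges)
    return counts / expected

# Homogeneous Poisson train: 20 Hz for 200 s.
duration, rate = 200.0, 20.0
spikes = np.sort(rng.uniform(0, duration, rng.poisson(rate * duration)))

# Logarithmically spaced lag bins from 1 ms to 1 s, echoing the paper's
# log-timescale ACFs (the bin count here is arbitrary).
edges = np.logspace(-3, 0, 25)
acf = rate_normalized_acf(spikes, duration, edges)
print(acf.round(2))
```

Real cortical trains, per the abstract, deviate from this flat baseline with stable, cell-specific peaks in the 1-700 ms range.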
Affiliations
- Nicholas V Swindale
- Department of Ophthalmology and Visual Sciences, University of British Columbia, 2550 Willow St., Vancouver, BC V5Z 3N9, Canada
- Martin A Spacek
- Division of Neurobiology, Department of Biology II, Ludwig-Maximilians-Universität München, Munich, Germany
- Matthew Krause
- Montreal Neurological Institute, McGill University, 3801 University St., Montreal, QC H3A 2B4, Canada
- Catalin Mitelut
- Institute of Molecular and Clinical Ophthalmology, University of Basel, Mittlere Strasse 91, CH-4031 Basel, Switzerland
8. Digital computing through randomness and order in neural networks. Proc Natl Acad Sci U S A 2022; 119:e2115335119. PMID: 35947616; PMCID: PMC9388095; DOI: 10.1073/pnas.2115335119.
Abstract
We propose that coding and decoding in the brain are achieved through digital computation using three principles: relative ordinal coding of inputs, random connections between neurons, and belief voting. Due to randomization, and despite the coarseness of the relative codes, we show that these principles are sufficient for coding and decoding sequences with error-free reconstruction. In particular, the number of neurons needed grows only linearly while the size of the input repertoire grows exponentially. We illustrate our model by reconstructing sequences with repertoires on the order of a billion items. From this, we derive the Shannon equations for the capacity limit to learn and transfer information in the neural population, which is then generalized to any type of neural network. Following the maximum entropy principle of efficient coding, we show that random connections serve to decorrelate redundant information in incoming signals, creating more compact codes for neurons and therefore conveying a larger amount of information. Hence, despite the unreliability of the relative codes, only a few neurons are necessary to discriminate the original signal without error. Finally, we discuss the significance of this digital computation model with regard to neurobiological findings in the brain and, more generally, artificial intelligence algorithms, with a view toward a neural information theory and the design of digital neural networks.
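The three principles (relative ordinal codes, random connections, belief voting) can be illustrated with a toy decoder (the dimensions, repertoire size, and neuron count below are arbitrary; this is a sketch of the idea, not the paper's model): each "neuron" keeps only the rank order of a few random projections of the input, and items are recovered by majority vote across neurons.

```python
import numpy as np

rng = np.random.default_rng(4)

# Repertoire of 200 items, each a 16-dimensional input pattern.
items = rng.normal(size=(200, 16))

# 30 "neurons", each viewing the input through 4 random linear filters
# (random connections). A neuron's code for an item is only the ordinal
# ranking of its 4 filter responses: a coarse relative code.
filters = rng.normal(size=(30, 4, 16))

def codes(x):
    responses = np.einsum("nkd,d->nk", filters, x)  # (30, 4)
    return np.argsort(responses, axis=1)            # rank order per neuron

codebook = np.array([codes(x) for x in items])      # (200, 30, 4)

def decode(x):
    """Belief voting: every neuron votes for each stored item whose
    ordinal code matches its code for x; the most-voted item wins."""
    c = codes(x)
    votes = (codebook == c).all(axis=2).sum(axis=1)
    return int(np.argmax(votes))

recovered = [decode(x) for x in items]
print(recovered == list(range(200)))  # True: error-free reconstruction
```

Although each neuron's 4-element permutation carries under 5 bits and collides often across items, agreement of all 30 votes on the true item makes the ensemble decision unambiguous, which is the linear-neurons-versus-exponential-repertoire point of the abstract in miniature.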
9. Day-Cooney J, Cone JJ, Maunsell JHR. Perceptual Weighting of V1 Spikes Revealed by Optogenetic White Noise Stimulation. J Neurosci 2022; 42:3122-3132. PMID: 35232760; PMCID: PMC8994541; DOI: 10.1523/jneurosci.1736-21.2022.
Abstract
During visually guided behaviors, mere hundreds of milliseconds can elapse between a sensory input and its associated behavioral response. How spikes occurring at different times are integrated to drive perception and action remains poorly understood. We delivered random trains of optogenetic stimulation (white noise) to excite inhibitory interneurons in V1 of mice of both sexes while they performed a visual detection task. We then performed a reverse correlation analysis on the optogenetic stimuli to generate a neuronal-behavioral kernel, an unbiased, temporally precise estimate of how suppression of V1 spiking at different moments around the onset of a visual stimulus affects detection of that stimulus. Electrophysiological recordings enabled us to capture the effects of optogenetic stimuli on V1 responsivity and revealed that the earliest stimulus-evoked spikes are preferentially weighted for guiding behavior. These data demonstrate that white noise optogenetic stimulation is a powerful tool for understanding how patterns of spiking in neuronal populations are decoded in generating perception and action.

Significance Statement
During visually guided actions, continuous chains of neurons connect our retinas to our motoneurons. To unravel circuit contributions to behavior, it is crucial to establish the relative functional position(s) that different neural structures occupy in processing and relaying the signals that support rapid, precise responses. To address this question, we randomly inhibited activity in mouse V1 throughout the stimulus-response cycle while the animals did many repetitions of a visual task. The period that led to impaired performance corresponded to the earliest stimulus-driven response in V1, with no effect of inhibition immediately before or during late stages of the stimulus-driven response. This approach offers experimenters a powerful method for uncovering the temporal weighting of spikes from stimulus to response.
Affiliations
- Julian Day-Cooney
- Department of Neurobiology and Neuroscience Institute, University of Chicago, Chicago, Illinois 60637
- Jackson J Cone
- Department of Neurobiology and Neuroscience Institute, University of Chicago, Chicago, Illinois 60637
- John H R Maunsell
- Department of Neurobiology and Neuroscience Institute, University of Chicago, Chicago, Illinois 60637
10. Susin E, Destexhe A. Integration, coincidence detection and resonance in networks of spiking neurons expressing Gamma oscillations and asynchronous states. PLoS Comput Biol 2021; 17:e1009416. PMID: 34529655; PMCID: PMC8478196; DOI: 10.1371/journal.pcbi.1009416.
Abstract
Gamma oscillations are widely seen in the awake and sleeping cerebral cortex, but the exact role of these oscillations is still debated. Here, we used biophysical models to examine how Gamma oscillations may participate in the processing of afferent stimuli. We constructed conductance-based network models of Gamma oscillations, based on different cell types found in cerebral cortex. The models were adjusted to extracellular unit recordings in humans, where Gamma oscillations always coexist with the asynchronous firing mode. We considered three different mechanisms to generate Gamma: first, a mechanism based on the interaction between pyramidal neurons and interneurons (PING); second, a mechanism in which Gamma is generated by interneuron networks (ING); and third, a mechanism that relies on Gamma oscillations generated by pacemaker chattering neurons (CHING). We find that all three mechanisms generate features consistent with human recordings, but that the ING mechanism is most consistent with the firing rate change inside Gamma bursts seen in the human data. We next evaluated the responsiveness and resonant properties of these networks, contrasting Gamma oscillations with the asynchronous mode. We find that for both slowly varying stimuli and precisely timed stimuli, the responsiveness is generally lower during Gamma compared to asynchronous states, while resonant properties are similar around the Gamma band. We could not find conditions where Gamma oscillations were more responsive. We therefore predict that asynchronous states provide the highest responsiveness to external stimuli, while Gamma oscillations tend to diminish responsiveness overall.

In the awake and attentive brain, the activity of neurons is typically asynchronous and irregular. It also occasionally displays oscillations in the Gamma frequency range (30–90 Hz), which are believed to be involved in information processing. Here, we use computational models to investigate how brain circuits generate oscillations in a manner consistent with microelectrode recordings in humans. We then study how these networks respond to external input, comparing asynchronous and oscillatory states. This is tested according to several paradigms: an integrative mode, where slowly varying inputs are progressively integrated; a coincidence detection mode, where brief inputs are processed according to the phase of the oscillations; and a resonance mode, where the network is probed with oscillatory inputs. Surprisingly, we find that in all cases the presence of Gamma oscillations tends to diminish the responsiveness to external inputs. We discuss possible implications of this responsiveness decrease on information processing and propose new directions for further exploration.
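A drastically simplified stand-in for such gamma-generating circuits is a noise-driven linear E-I loop (the time constants and couplings below are chosen for illustration; they are not the paper's conductance-based PING/ING/CHING models): excitation recruits inhibition, which feeds back negatively, producing a damped gamma-band resonance visible as a spectral peak.

```python
import numpy as np

rng = np.random.default_rng(5)

# Noise-driven linear E-I loop: excitation excites inhibition, which
# suppresses excitation, yielding a damped resonance in the gamma band.
tau_e, tau_i = 0.005, 0.010      # E and I time constants (s)
w_ei, w_ie = 3.0, 2.41           # I->E and E->I coupling (illustrative)
dt, t_max = 1e-4, 20.0
n = int(t_max / dt)
e = np.zeros(n)
i = np.zeros(n)
drive = rng.normal(0.0, 1.0, n) * np.sqrt(dt)   # white-noise input to E
for k in range(n - 1):
    e[k + 1] = e[k] + dt * (-e[k] - w_ei * i[k]) / tau_e + drive[k] / tau_e
    i[k + 1] = i[k] + dt * (-i[k] + w_ie * e[k]) / tau_i

# Averaged periodogram of the E trace; for these parameters the analytic
# resonance sits near 55 Hz, i.e. within the gamma band.
segments = e.reshape(80, -1) * np.hanning(n // 80)
psd = (np.abs(np.fft.rfft(segments, axis=1)) ** 2).mean(axis=0)
freqs = np.fft.rfftfreq(n // 80, dt)
peak = float(freqs[1:][np.argmax(psd[1:])])     # skip the DC bin
print(peak, "Hz")
```

Because the drive is broadband noise, the output shows gamma "bursts" riding on otherwise irregular activity, loosely echoing the coexistence of gamma and asynchronous firing described in the abstract.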
Affiliations
- Eduarda Susin
- Institute of Neuroscience (NeuroPSI), Paris-Saclay University, Centre National de la Recherche Scientifique (CNRS), Gif-sur-Yvette, France
- Alain Destexhe
- Institute of Neuroscience (NeuroPSI), Paris-Saclay University, Centre National de la Recherche Scientifique (CNRS), Gif-sur-Yvette, France
11. Lonnqvist B, Bornet A, Doerig A, Herzog MH. A comparative biology approach to DNN modeling of vision: A focus on differences, not similarities. J Vis 2021; 21:17. PMID: 34551062; PMCID: PMC8475290; DOI: 10.1167/jov.21.10.17.
Abstract
Deep neural networks (DNNs) have revolutionized computer science and are now widely used for neuroscientific research. A hot debate has ensued about the usefulness of DNNs as neuroscientific models of the human visual system; the debate centers on to what extent certain shortcomings of DNNs are real failures and to what extent they are redeemable. Here, we argue that the main problem is that we often do not understand which human functions need to be modeled and, thus, what counts as a falsification. Hence, not only is there a problem on the DNN side, but there is also one on the brain side (i.e., with the explanandum, the thing to be explained). For example, should DNNs reproduce illusions? We posit that we can make better use of DNNs by adopting an approach of comparative biology: by focusing on the differences, rather than the similarities, between DNNs and humans, we can improve our understanding of visual information processing in general.
Affiliations
- Ben Lonnqvist
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Alban Bornet
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Adrien Doerig
- Donders Institute for Brain, Cognition and Behaviour, Nijmegen, Netherlands
- Michael H Herzog
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
12. Santos-Mayo A, Moratti S, de Echegaray J, Susi G. A Model of the Early Visual System Based on Parallel Spike-Sequence Detection, Showing Orientation Selectivity. Biology 2021; 10:801. PMID: 34440033; PMCID: PMC8389551; DOI: 10.3390/biology10080801.
Abstract
Simple Summary A computational model of primates’ early visual processing, showing orientation selectivity, is presented. The system importantly integrates two key elements: (1) a neuromorphic spike-decoding structure that considerably resembles the circuitry between layers IV and II/III of the primary visual cortex, both in topology and operation; (2) the plasticity of intrinsic excitability, to embed recent findings about the operation of the same area. The model is proposed as a tool for the analysis and reproduction of the orientation selectivity phenomenon, whose underlying neuronal-level computational mechanisms are today the subject of intense scrutiny. In response to rotated Gabor patches the model is able to exhibit realistic orientation tuning curves and to reproduce responses similar to those found in neurophysiological recordings from the primary visual cortex obtained under the same task, considering different stages of the network. This demonstrates its aptness to capture the mechanisms underlying the evoked response in the primary visual cortex. Our tool is available online, and can be expanded to other experiments using a dedicated software library developed by the authors, to elucidate the computational mechanisms underlying orientation selectivity. Abstract Since the first half of the twentieth century, numerous studies have been conducted on how the visual cortex encodes basic image features. One of the hallmarks of basic feature extraction is the phenomenon of orientation selectivity, of which the underlying neuronal-level computational mechanisms remain partially unclear despite being intensively investigated. In this work we present a reduced visual system model (RVSM) of the first level of scene analysis, involving the retina, the lateral geniculate nucleus and the primary visual cortex (V1), showing orientation selectivity. 
The detection core of the RVSM is the neuromorphic spike-decoding structure MNSD, which is able to learn and recognize parallel spike sequences and considerably resembles the neuronal microcircuits of V1 in both topology and operation. This structure is equipped with plasticity of intrinsic excitability to embed recent findings about V1 operation. The RVSM, which embeds 81 groups of MNSD arranged in 4 oriented columns, is tested using sets of rotated Gabor patches as input. Finally, synthetic visual evoked activity generated by the RVSM is compared with real neurophysiological signal from V1 area: (1) postsynaptic activity of human subjects obtained by magnetoencephalography and (2) spiking activity of macaques obtained by multi-tetrode arrays. The system is implemented using the NEST simulator. The results attest to a good level of resemblance between the model response and real neurophysiological recordings. As the RVSM is available online, and the model parameters can be customized by the user, we propose it as a tool to elucidate the computational mechanisms underlying orientation selectivity.
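The orientation-tuning measurement this abstract describes, probing a fixed-orientation unit with rotated Gabor patches, can be sketched without the RVSM/MNSD machinery. A minimal numpy sketch follows; the Gabor parameters and the dot-product response are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def gabor(theta, size=32, sigma=5.0, wavelength=8.0):
    """2-D Gabor patch oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    xr = x * np.cos(theta) + y * np.sin(theta)          # rotate the carrier axis
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # isotropic Gaussian envelope
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

# A unit with a fixed preferred orientation is probed with rotated inputs.
preferred = np.deg2rad(90)
filt = gabor(preferred)

angles = np.deg2rad(np.arange(0, 180, 15))
tuning = [abs(np.sum(filt * gabor(a))) for a in angles]  # dot-product response

best = float(np.rad2deg(angles[int(np.argmax(tuning))]))
print(f"peak response at {best:.0f} deg")
```

The response curve peaks at the unit's preferred orientation and falls off for rotated probes, which is the qualitative signature of the tuning curves the RVSM reproduces.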
Affiliation(s)
- Alejandro Santos-Mayo
- Laboratory of Cognitive and Computational Neuroscience, Center for Biomedical Technology, Technical University of Madrid, 28040 Madrid, Spain
- Department of Experimental Psychology, Faculty of Psychology, Complutense University of Madrid, 28040 Madrid, Spain
- Stephan Moratti
- Laboratory of Cognitive and Computational Neuroscience, Center for Biomedical Technology, Technical University of Madrid, 28040 Madrid, Spain
- Department of Experimental Psychology, Faculty of Psychology, Complutense University of Madrid, 28040 Madrid, Spain
- Laboratory of Clinical Neuroscience, Center for Biomedical Technology, Technical University of Madrid, 28040 Madrid, Spain
- Javier de Echegaray
- Laboratory of Cognitive and Computational Neuroscience, Center for Biomedical Technology, Technical University of Madrid, 28040 Madrid, Spain
- Department of Experimental Psychology, Faculty of Psychology, Complutense University of Madrid, 28040 Madrid, Spain
- Gianluca Susi
- Laboratory of Cognitive and Computational Neuroscience, Center for Biomedical Technology, Technical University of Madrid, 28040 Madrid, Spain
- Department of Experimental Psychology, Faculty of Psychology, Complutense University of Madrid, 28040 Madrid, Spain
- Department of Civil Engineering and Computer Science, University of Rome “Tor Vergata”, 00133 Rome, Italy
- Correspondence: ; Tel.: +34-(61)-86893399-79317
13
Seijdel N, Loke J, van de Klundert R, van der Meer M, Quispel E, van Gaal S, de Haan EHF, Scholte HS. On the Necessity of Recurrent Processing during Object Recognition: It Depends on the Need for Scene Segmentation. J Neurosci 2021; 41:6281-6289. [PMID: 34088797 PMCID: PMC8287993 DOI: 10.1523/jneurosci.2851-20.2021] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2020] [Revised: 04/11/2021] [Accepted: 05/13/2021] [Indexed: 11/21/2022] Open
Abstract
Although feedforward activity may suffice for recognizing objects in isolation, additional visual operations that aid object recognition might be needed for real-world scenes. One such additional operation is figure-ground segmentation, extracting the relevant features and locations of the target object while ignoring irrelevant features. In this study of 60 human participants (female and male), we show objects on backgrounds of increasing complexity to investigate whether recurrent computations are increasingly important for segmenting objects from more complex backgrounds. Three lines of evidence show that recurrent processing is critical for recognition of objects embedded in complex scenes. First, behavioral results indicated a greater reduction in performance after masking objects presented on more complex backgrounds, with the degree of impairment increasing with increasing background complexity. Second, electroencephalography (EEG) measurements showed clear differences in the evoked response potentials between conditions around time points beyond feedforward activity, and exploratory object decoding analyses based on the EEG signal indicated later decoding onsets for objects embedded in more complex backgrounds. Third, deep convolutional neural network performance confirmed this interpretation. Feedforward and less deep networks showed a higher degree of impairment in recognition for objects in complex backgrounds compared with recurrent and deeper networks. Together, these results support the notion that recurrent computations drive figure-ground segmentation of objects in complex scenes.SIGNIFICANCE STATEMENT The incredible speed of object recognition suggests that it relies purely on a fast feedforward buildup of perceptual activity. However, this view is contradicted by studies showing that disruption of recurrent processing leads to decreased object recognition performance. 
Here, we resolve this issue by showing that whether recurrent processing is crucial for object recognition depends on the context in which the object is presented. For objects presented in isolation or in simple environments, feedforward activity may be sufficient for successful object recognition. When the environment is more complex, however, additional processing appears necessary to select the elements that belong to the object and thereby segregate them from the background.
Affiliation(s)
- Noor Seijdel
- Department of Psychology, University of Amsterdam, 1018 WS Amsterdam, The Netherlands
- Amsterdam Brain and Cognition Center, University of Amsterdam, 1018 WS Amsterdam, The Netherlands
- Jessica Loke
- Department of Psychology, University of Amsterdam, 1018 WS Amsterdam, The Netherlands
- Amsterdam Brain and Cognition Center, University of Amsterdam, 1018 WS Amsterdam, The Netherlands
- Ron van de Klundert
- Department of Psychology, University of Amsterdam, 1018 WS Amsterdam, The Netherlands
- Matthew van der Meer
- Department of Psychology, University of Amsterdam, 1018 WS Amsterdam, The Netherlands
- Eva Quispel
- Department of Psychology, University of Amsterdam, 1018 WS Amsterdam, The Netherlands
- Simon van Gaal
- Department of Psychology, University of Amsterdam, 1018 WS Amsterdam, The Netherlands
- Amsterdam Brain and Cognition Center, University of Amsterdam, 1018 WS Amsterdam, The Netherlands
- Edward H F de Haan
- Department of Psychology, University of Amsterdam, 1018 WS Amsterdam, The Netherlands
- Amsterdam Brain and Cognition Center, University of Amsterdam, 1018 WS Amsterdam, The Netherlands
- H Steven Scholte
- Department of Psychology, University of Amsterdam, 1018 WS Amsterdam, The Netherlands
- Amsterdam Brain and Cognition Center, University of Amsterdam, 1018 WS Amsterdam, The Netherlands
14
O'Reilly RC, Russin JL, Zolfaghar M, Rohrlich J. Deep Predictive Learning in Neocortex and Pulvinar. J Cogn Neurosci 2021; 33:1158-1196. [PMID: 34428793 PMCID: PMC10164227 DOI: 10.1162/jocn_a_01708] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
How do humans learn from raw sensory experience? Throughout life, but most obviously in infancy, we learn without explicit instruction. We propose a detailed biological mechanism for the widely embraced idea that learning is driven by the differences between predictions and actual outcomes (i.e., predictive error-driven learning). Specifically, numerous weak projections into the pulvinar nucleus of the thalamus generate top-down predictions, and sparse driver inputs from lower areas supply the actual outcome, originating in Layer 5 intrinsic bursting neurons. Thus, the outcome representation is only briefly activated, roughly every 100 msec (i.e., 10 Hz, alpha), resulting in a temporal difference error signal, which drives local synaptic changes throughout the neocortex. This results in a biologically plausible form of error backpropagation learning. We implemented these mechanisms in a large-scale model of the visual system and found that the simulated inferotemporal pathway learns to systematically categorize 3-D objects according to invariant shape properties, based solely on predictive learning from raw visual inputs. These categories match human judgments on the same stimuli and are consistent with neural representations in inferotemporal cortex in primates.
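The core update the abstract describes, learning driven by the difference between a top-down prediction and the actual outcome on each ~100 ms alpha cycle, can be reduced to a minimal sketch. The network sizes, learning rate, and the `alpha_cycle` helper below are hypothetical, and a plain delta rule stands in for the biological mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 8, 4
W = rng.normal(0, 0.1, (n_out, n_in))    # top-down prediction weights (hypothetical)

def alpha_cycle(W, x, outcome, lr=0.05):
    """One ~100 ms cycle: predict, observe the driver input, learn from the difference."""
    prediction = W @ x                   # minus phase: the prediction is expressed
    error = outcome - prediction         # plus phase: driver inputs supply the outcome
    W += lr * np.outer(error, x)         # delta-rule update, a stand-in for the biology
    return W, float(np.mean(error**2))

x = rng.normal(size=n_in)                # fixed sensory input
target = rng.normal(size=n_out)          # fixed outcome to be predicted

errs = []
for _ in range(200):
    W, e = alpha_cycle(W, x, target)
    errs.append(e)

print(f"prediction error: {errs[0]:.3f} -> {errs[-1]:.2e}")  # shrinks over cycles
```

The point of the sketch is only the temporal-difference structure: the outcome is briefly compared against the standing prediction, and the mismatch drives the weight change.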
15
Moleirinho S, Whalen AJ, Fried SI, Pezaris JS. The impact of synchronous versus asynchronous electrical stimulation in artificial vision. J Neural Eng 2021; 18. [PMID: 33900206 DOI: 10.1088/1741-2552/abecf1] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2020] [Accepted: 03/09/2021] [Indexed: 11/12/2022]
Abstract
Visual prosthesis devices designed to restore sight to the blind have been under development in the laboratory for several decades. Clinical translation continues to be challenging, due in part to gaps in our understanding of critical parameters such as how phosphenes, the electrically-generated pixels of artificial vision, can be combined to form images. In this review we explore the effects that synchronous and asynchronous electrical stimulation across multiple electrodes have in evoking phosphenes. Understanding how electrical patterns influence phosphene generation to control object binding and perception of visual form is fundamental to creation of a clinically successful prosthesis.
Affiliation(s)
- Susana Moleirinho
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, United States of America
- Department of Neurosurgery, Harvard Medical School, Boston, MA, United States of America
- Andrew J Whalen
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, United States of America
- Department of Neurosurgery, Harvard Medical School, Boston, MA, United States of America
- Shelley I Fried
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, United States of America
- Department of Neurosurgery, Harvard Medical School, Boston, MA, United States of America
- Boston VA Healthcare System, Boston, MA, United States of America
- John S Pezaris
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, United States of America
- Department of Neurosurgery, Harvard Medical School, Boston, MA, United States of America
16
Abstract
Some images spontaneously change in appearance. A new study has found that these changes are reflected in high-level visual cortical areas before they become apparent in early sensory cortex. This suggests that visual information not only flows towards interpretative areas of our brain, but also in the reverse direction.
Affiliation(s)
- Alexander Maier
- Department of Psychology, Department of Ophthalmology and Visual Sciences, Vanderbilt Vision Research Center, Center for Integrative and Cognitive Neuroscience, Vanderbilt Brain Institute, Vanderbilt University, Wilson Hall, 111 21st Avenue South, Nashville, TN 37325, USA.
17
Zhang R, Ballard DH. Parallel Neural Multiprocessing with Gamma Frequency Latencies. Neural Comput 2020; 32:1635-1663. [PMID: 32687771 DOI: 10.1162/neco_a_01301] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The Poisson variability in cortical neural responses has been typically modeled using spike averaging techniques, such as trial averaging and rate coding, since such methods can produce reliable correlates of behavior. However, mechanisms that rely on counting spikes could be slow and inefficient and thus might not be useful in the brain for computations at timescales in the 10 millisecond range. This issue has motivated a search for alternative spike codes that take advantage of spike timing and has resulted in many studies that use synchronized neural networks for communication. Here we focus on recent studies that suggest that the gamma frequency may provide a reference that allows local spike phase representations that could result in much faster information transmission. We have developed a unified model (gamma spike multiplexing) that takes advantage of a single cycle of a cell's somatic gamma frequency to modulate the generation of its action potentials. An important consequence of this coding mechanism is that it allows multiple independent neural processes to run in parallel, thereby greatly increasing the processing capability of the cortex. System-level simulations and preliminary analysis of mouse cortical cell data are presented as support for the proposed theoretical model.
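The phase-coding idea, using one gamma cycle as a temporal reference so that stronger inputs fire earlier within the cycle, can be illustrated with a toy encoder/decoder. The 40 Hz cycle and the linear value-to-phase mapping are assumptions for illustration, not the gamma spike multiplexing model itself:

```python
import numpy as np

GAMMA_HZ = 40.0
CYCLE_MS = 1000.0 / GAMMA_HZ             # one gamma cycle = 25 ms at 40 Hz

def encode(values):
    """Map inputs in [0, 1] to spike times within one gamma cycle: stronger -> earlier."""
    values = np.asarray(values, dtype=float)
    return (1.0 - values) * CYCLE_MS      # spike latency (ms) relative to cycle start

def decode(spike_times_ms):
    """Invert the phase code back to analog values."""
    return 1.0 - np.asarray(spike_times_ms) / CYCLE_MS

signal = np.array([0.9, 0.2, 0.55])
times = encode(signal)                    # e.g. 0.9 fires near 2.5 ms, 0.2 near 20 ms
recovered = decode(times)
print(times, recovered)
```

Because each value occupies a phase slot within a single cycle, several independent streams could in principle occupy successive cycles, which is the parallelism the multiplexing proposal exploits.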
Affiliation(s)
- Ruohan Zhang
- Department of Computer Science, University of Texas at Austin, Austin, TX 78712, U.S.A.
- Dana H Ballard
- Department of Computer Science, University of Texas at Austin, Austin, TX 78712, U.S.A.
18
Seijdel N, Tsakmakidis N, de Haan EHF, Bohte SM, Scholte HS. Depth in convolutional neural networks solves scene segmentation. PLoS Comput Biol 2020; 16:e1008022. [PMID: 32706770 PMCID: PMC7406083 DOI: 10.1371/journal.pcbi.1008022] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2019] [Revised: 08/05/2020] [Accepted: 06/06/2020] [Indexed: 01/25/2023] Open
Abstract
Feed-forward deep convolutional neural networks (DCNNs) are, under specific conditions, matching and even surpassing human performance in object recognition in natural scenes. This performance suggests that the analysis of a loose collection of image features could support the recognition of natural object categories, without dedicated systems to solve specific visual subtasks. Research in humans however suggests that while feedforward activity may suffice for sparse scenes with isolated objects, additional visual operations ('routines') that aid the recognition process (e.g. segmentation or grouping) are needed for more complex scenes. Linking human visual processing to performance of DCNNs with increasing depth, we here explored if, how, and when object information is differentiated from the backgrounds they appear on. To this end, we controlled the information in both objects and backgrounds, as well as the relationship between them by adding noise, manipulating background congruence and systematically occluding parts of the image. Results indicate that with an increase in network depth, there is an increase in the distinction between object- and background information. For more shallow networks, results indicated a benefit of training on segmented objects. Overall, these results indicate that, de facto, scene segmentation can be performed by a network of sufficient depth. We conclude that the human brain could perform scene segmentation in the context of object identification without an explicit mechanism, by selecting or "binding" features that belong to the object and ignoring other features, in a manner similar to a very deep convolutional neural network.
Affiliation(s)
- Noor Seijdel
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Amsterdam Brain & Cognition (ABC) Center, University of Amsterdam, Amsterdam, The Netherlands
- Nikos Tsakmakidis
- Machine Learning Group, Centrum Wiskunde & Informatica, Amsterdam, The Netherlands
- Edward H. F. de Haan
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Amsterdam Brain & Cognition (ABC) Center, University of Amsterdam, Amsterdam, The Netherlands
- Sander M. Bohte
- Machine Learning Group, Centrum Wiskunde & Informatica, Amsterdam, The Netherlands
- H. Steven Scholte
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Amsterdam Brain & Cognition (ABC) Center, University of Amsterdam, Amsterdam, The Netherlands
19
Martin JG, Davis CE, Riesenhuber M, Thorpe SJ. Microsaccades during high speed continuous visual search. J Eye Mov Res 2020; 13:10.16910/jemr.13.5.4. [PMID: 33828809 PMCID: PMC8009256 DOI: 10.16910/jemr.13.5.4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Here, we provide an analysis of the microsaccades that occurred during continuous visual search and targeting of small faces that we pasted either into cluttered background photos or into a simple gray background. Subjects continuously used their eyes to target singular 3-degree upright or inverted faces in changing scenes. As soon as the participant's gaze reached the target face, a new face was displayed in a different and random location. Regardless of the experimental context (e.g. background scene, no background scene), or target eccentricity (from 4 to 20 degrees of visual angle), we found that the microsaccade rate dropped to near zero levels within only 12 milliseconds after stimulus onset. There were almost never any microsaccades after stimulus onset and before the first saccade to the face. One subject completed 118 consecutive trials without a single microsaccade. However, in about 20% of the trials, there was a single microsaccade that occurred almost immediately after the preceding saccade's offset. These microsaccades were task oriented because their facial landmark targeting distributions matched those of saccades within both the upright and inverted face conditions. Our findings show that a single feedforward pass through the visual hierarchy for each stimulus is likely all that is needed to effectuate prolonged continuous visual search. In addition, we provide evidence that microsaccades can serve perceptual functions like correcting saccades or effectuating task-oriented goals during continuous visual search.
Affiliation(s)
- Jacob G Martin
- CNRS Center for Brain and Cognition Research (CerCo), Toulouse, France
- Charles E Davis
- CNRS Center for Brain and Cognition Research (CerCo), Toulouse, France
- Simon J Thorpe
- CNRS Center for Brain and Cognition Research (CerCo), Toulouse, France
20
Xie X, Liu G, Cai Q, Sun G, Zhang M, Qu H. An end-to-end functional spiking model for sequential feature learning. Knowl Based Syst 2020. [DOI: 10.1016/j.knosys.2020.105643] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
21
Smith ME, Loschky LC. The influence of sequential predictions on scene-gist recognition. J Vis 2020; 19:14. [PMID: 31622473 DOI: 10.1167/19.12.14] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Past research suggests that recognizing scene gist, a viewer's holistic semantic representation of a scene acquired within a single eye fixation, involves purely feed-forward mechanisms. We investigated whether expectations can influence scene categorization. To do this, we embedded target scenes in more ecologically valid, first-person-viewpoint image sequences, along spatiotemporally connected routes (e.g., an office to a parking lot). We manipulated the sequences' spatiotemporal coherence by presenting them either coherently or in random order. Participants identified the category of one target scene in a 10-scene-image rapid serial visual presentation. Categorization accuracy was greater for targets in coherent sequences. Accuracy was also greater for targets with more visually similar primes. In Experiment 2, we investigated whether targets in coherent sequences were more predictable and whether predictable images were identified more accurately in Experiment 1 after accounting for the effect of prime-to-target visual similarity. To do this, we removed targets and had participants predict the category of the missing scene. Images were more accurately predicted in coherent sequences, and both image predictability and prime-to-target visual similarity independently contributed to performance in Experiment 1. To test whether prediction-based facilitation effects were solely due to response bias, participants performed a two-alternative forced-choice task in which they indicated whether the target was an intact or a phase-randomized scene. Critically, predictability of the target category was irrelevant to this task. Nevertheless, results showed that sensitivity, but not response bias, was greater for targets in coherent sequences. Predictions made prior to viewing a scene facilitate scene-gist recognition.
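The sensitivity-versus-bias analysis described above rests on standard signal-detection quantities. A small sketch with hypothetical counts (not the paper's data) shows how d′ separates sensitivity from the criterion c:

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, fas, crs):
    """Sensitivity d' and criterion c from yes/no trial counts."""
    z = NormalDist().inv_cdf
    # Log-linear correction guards against rates of exactly 0 or 1
    # (an assumption; one common convention among several).
    hr = (hits + 0.5) / (hits + misses + 1.0)   # hit rate
    far = (fas + 0.5) / (fas + crs + 1.0)       # false-alarm rate
    d = z(hr) - z(far)
    c = -0.5 * (z(hr) + z(far))
    return d, c

# Hypothetical counts: coherent sequences yield more hits, equal false alarms,
# so the gain shows up in d' (sensitivity) rather than in c (bias).
d_coh, c_coh = dprime_criterion(hits=90, misses=10, fas=20, crs=80)
d_rand, c_rand = dprime_criterion(hits=75, misses=25, fas=20, crs=80)
print(f"coherent d'={d_coh:.2f}, random d'={d_rand:.2f}")
```

With matched false-alarm rates, a higher hit rate moves d′ while leaving the bias estimate largely unchanged, which is the dissociation the intact-versus-phase-randomized task was designed to test.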
Affiliation(s)
- Maverick E Smith
- Department of Psychological Sciences, Kansas State University, Manhattan, KS, USA
- Lester C Loschky
- Department of Psychological Sciences, Kansas State University, Manhattan, KS, USA
22
Paradiso MA, Akers-Campbell S, Ruiz O, Niemeyer JE, Geman S, Loper J. Transsacadic Information and Corollary Discharge in Local Field Potentials of Macaque V1. Front Integr Neurosci 2019; 12:63. [PMID: 30692920 PMCID: PMC6340263 DOI: 10.3389/fnint.2018.00063] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2018] [Accepted: 12/11/2018] [Indexed: 01/08/2023] Open
Abstract
Approximately three times per second, human visual perception is interrupted by a saccadic eye movement. Beyond taking the eyes to a new location, several lines of evidence suggest that saccades play multiple roles in visual perception. Indeed, it may be crucial that visual processing is informed about movements of the eyes in order to analyze visual input distinctly and efficiently on each fixation and to preserve a stable visual percept of the world across saccades. A variety of studies have demonstrated that activity in multiple brain areas is modulated by saccades. The hypothesis tested here is that these signals carry significant information that could be used in visual processing. To test this hypothesis, local field potentials (LFPs) were simultaneously recorded from multiple electrodes in macaque primary visual cortex (V1), and support vector machines (SVMs) were used to classify the peri-saccadic LFPs. We find that LFPs in area V1 carry information that can be used to distinguish neural activity associated with fixations from saccades, precisely estimate the onset time of fixations, and reliably infer the directions of saccades. This information may be used by the brain in processes including visual stability, saccadic suppression, receptive field (RF) remapping, fixation amplification, and trans-saccadic visual perception.
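A minimal stand-in for the classification step: a linear SVM trained by sub-gradient descent on the hinge loss, applied to synthetic two-class "LFP feature" vectors. The data, dimensions, and hyperparameters are all illustrative; the study's actual features and SVM setup are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic feature vectors: two classes with shifted means, a toy stand-in
# for fixation- versus saccade-aligned LFP snippets.
n, d = 200, 16
X = np.vstack([rng.normal(-0.5, 1.0, (n, d)), rng.normal(+0.5, 1.0, (n, d))])
y = np.hstack([-np.ones(n), np.ones(n)])

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Linear SVM via sub-gradient descent on the L2-regularized hinge loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                   # only margin violators contribute
        if mask.any():
            gw = lam * w - (y[mask, None] * X[mask]).mean(axis=0)
            gb = -y[mask].mean()
        else:
            gw, gb = lam * w, 0.0
        w -= lr * gw
        b -= lr * gb
    return w, b

w, b = train_linear_svm(X, y)
acc = float(np.mean(np.sign(X @ w + b) == y))
print(f"training accuracy: {acc:.2f}")
```

In the study itself the labels would come from the eye-movement record (fixation vs. saccade epochs) rather than from a synthetic generator.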
Affiliation(s)
- Michael A Paradiso
- Department of Neuroscience, Robert J. and Nancy D. Carney Institute for Brain Science, Brown University, Providence, RI, United States
- Seth Akers-Campbell
- Department of Neuroscience, Robert J. and Nancy D. Carney Institute for Brain Science, Brown University, Providence, RI, United States
- Octavio Ruiz
- Department of Neuroscience, Robert J. and Nancy D. Carney Institute for Brain Science, Brown University, Providence, RI, United States
- James E Niemeyer
- Department of Neuroscience, Robert J. and Nancy D. Carney Institute for Brain Science, Brown University, Providence, RI, United States
- Stuart Geman
- Department of Applied Mathematics, Robert J. and Nancy D. Carney Institute for Brain Science, Brown University, Providence, RI, United States
- Jackson Loper
- Department of Applied Mathematics, Robert J. and Nancy D. Carney Institute for Brain Science, Brown University, Providence, RI, United States
23
Groen IIA, Jahfari S, Seijdel N, Ghebreab S, Lamme VAF, Scholte HS. Scene complexity modulates degree of feedback activity during object detection in natural scenes. PLoS Comput Biol 2018; 14:e1006690. [PMID: 30596644 PMCID: PMC6329519 DOI: 10.1371/journal.pcbi.1006690] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2018] [Revised: 01/11/2019] [Accepted: 12/01/2018] [Indexed: 02/06/2023] Open
Abstract
Selective brain responses to objects arise within a few hundred milliseconds of neural processing, suggesting that visual object recognition is mediated by rapid feed-forward activations. Yet disruption of neural responses in early visual cortex beyond feed-forward processing stages affects object recognition performance. Here, we unite these discrepant findings by reporting that object recognition involves enhanced feedback activity (recurrent processing within early visual cortex) when target objects are embedded in natural scenes that are characterized by high complexity. Human participants performed an animal target detection task on natural scenes with low, medium or high complexity as determined by a computational model of low-level contrast statistics. Three converging lines of evidence indicate that feedback was selectively enhanced for high complexity scenes. First, functional magnetic resonance imaging (fMRI) activity in early visual cortex (V1) was enhanced for target objects in scenes with high, but not low or medium complexity. Second, event-related potentials (ERPs) evoked by target objects were selectively enhanced at feedback stages of visual processing (from ~220 ms onwards) for high complexity scenes only. Third, behavioral performance for high complexity scenes deteriorated when participants were pressed for time and thus less able to incorporate the feedback activity. Modeling of the reaction time distributions using drift diffusion revealed that object information accumulated more slowly for high complexity scenes, with evidence accumulation being coupled to trial-to-trial variation in the EEG feedback response. Together, these results suggest that while feed-forward activity may suffice to recognize isolated objects, the brain employs recurrent processing more adaptively in naturalistic settings, using minimal feedback for simple scenes and increasing feedback for complex scenes.
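The drift-diffusion account of the reaction-time results, slower evidence accumulation for complex scenes, can be sketched by simulating first-passage times to a single bound. The drift values, bound, and time step below are illustrative assumptions, not fitted parameters from the study:

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_rt(drift, n_trials=500, threshold=1.0, dt=0.001, noise=1.0, t_max=4.0):
    """Mean first-passage time (s) of a one-boundary drift-diffusion process."""
    steps = int(t_max / dt)
    incr = drift * dt + noise * np.sqrt(dt) * rng.standard_normal((n_trials, steps))
    paths = np.cumsum(incr, axis=1)               # evidence accumulating toward the bound
    crossed = paths >= threshold
    # First crossing index per trial; trials that never cross are censored at t_max.
    first = np.where(crossed.any(axis=1), crossed.argmax(axis=1), steps - 1)
    return float(np.mean((first + 1) * dt))

rt_simple = mean_rt(drift=2.0)    # fast evidence accumulation (low-complexity scene)
rt_complex = mean_rt(drift=0.8)   # slower accumulation (high-complexity scene)
print(f"mean RT: simple {rt_simple:.2f} s, complex {rt_complex:.2f} s")
```

Lowering the drift rate while holding the bound fixed lengthens decision times, which is the pattern the modeling attributed to high-complexity scenes.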
Affiliation(s)
- Iris I. A. Groen
- New York University, Department of Psychology, New York, New York, United States of America
- Sara Jahfari
- Spinoza Centre for Neuroimaging, Royal Netherlands Academy of Arts and Sciences (KNAW), Amsterdam, The Netherlands
- University of Amsterdam, Department of Psychology, Section Brain and Cognition, Amsterdam, The Netherlands
- Noor Seijdel
- University of Amsterdam, Department of Psychology, Section Brain and Cognition, Amsterdam, The Netherlands
- Sennay Ghebreab
- University of Amsterdam, Department of Psychology, Section Brain and Cognition, Amsterdam, The Netherlands
- University of Amsterdam, Department of Informatics, Intelligent Systems Lab, Amsterdam, The Netherlands
- Victor A. F. Lamme
- University of Amsterdam, Department of Psychology, Section Brain and Cognition, Amsterdam, The Netherlands
- H. Steven Scholte
- University of Amsterdam, Department of Psychology, Section Brain and Cognition, Amsterdam, The Netherlands
24
Regev TI, Winawer J, Gerber EM, Knight RT, Deouell LY. Human posterior parietal cortex responds to visual stimuli as early as peristriate occipital cortex. Eur J Neurosci 2018; 48:3567-3582. [PMID: 30240547 PMCID: PMC6482330 DOI: 10.1111/ejn.14164] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2017] [Revised: 08/24/2018] [Accepted: 09/07/2018] [Indexed: 11/30/2022]
Abstract
Much of what is known about the timing of visual processing in the brain is inferred from intracranial studies in monkeys, with human data limited to mainly noninvasive methods with lower spatial resolution. Here, we estimated visual onset latencies from electrocorticographic (ECoG) recordings in a patient who was implanted with 112 subdural electrodes, distributed across the posterior cortex of the right hemisphere, for presurgical evaluation of intractable epilepsy. Functional MRI prior to surgery was used to determine boundaries of visual areas. The patient was presented with images of objects from several categories. Event-related potentials (ERPs) were calculated across all categories excluding targets, and statistically reliable onset latencies were determined, using a bootstrapping procedure over the single trial baseline activity in individual electrodes. The distribution of onset latencies broadly reflected the known hierarchy of visual areas, with the earliest cortical responses in primary visual cortex, and higher areas showing later responses. A clear exception to this pattern was a robust, statistically reliable and spatially localized, very early response, on the bank of the posterior intraparietal sulcus (IPS). The response in the IPS started nearly simultaneously with responses detected in peristriate visual areas, around 60 ms poststimulus onset. Our results support the notion of early visual processing in the posterior parietal lobe, not respecting traditional hierarchies, and give direct evidence for onset times of visual responses across the human cortex.
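The bootstrap onset-latency procedure, thresholding each electrode's evoked response against resampled baseline activity, can be sketched on synthetic data. The sampling grid, ramp response, 95th-percentile threshold, and 5-sample run criterion are assumptions for illustration, not the paper's exact parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic single-electrode ERP: flat baseline, response ramping up at 60 ms.
t = np.arange(-100, 300)                      # time (ms) relative to stimulus onset
n_trials = 100
true_onset = 60
signal = np.where(t >= true_onset, 0.08 * (t - true_onset), 0.0)
trials = signal + rng.normal(0.0, 1.0, (n_trials, t.size))
erp = trials.mean(axis=0)

# Bootstrap the baseline to derive a null threshold for the evoked response.
baseline = erp[t < 0]
boot_max = np.array([
    np.abs(rng.choice(baseline, baseline.size, replace=True)).max()
    for _ in range(2000)
])
threshold = np.quantile(boot_max, 0.95)

# Onset = first post-stimulus time where the ERP clears the threshold for
# 5 consecutive samples (the run requirement guards against lone noise peaks).
above = (np.abs(erp) > threshold)[t >= 0].astype(int)
runs = np.convolve(above, np.ones(5, dtype=int), mode="valid")
onset_ms = int(t[t >= 0][int(np.argmax(runs == 5))])
print(f"estimated onset: {onset_ms} ms (simulated true onset: {true_onset} ms)")
```

Applied per electrode, estimates of this kind are what allow early responses (here, the ~60 ms IPS onset) to be compared directly against peristriate latencies.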
Affiliation(s)
- Tamar I. Regev
- Edmond and Lily Safra Center for Brain Science, Hebrew University of Jerusalem, Jerusalem, Israel
- Jonathan Winawer
- Department of Psychology, New York University, New York, New York, USA
- Edden M. Gerber
- Edmond and Lily Safra Center for Brain Science, Hebrew University of Jerusalem, Jerusalem, Israel
- Robert T. Knight
- Helen Wills Neuroscience Institute, University of California, Berkeley, California, USA
- Leon Y. Deouell
- Edmond and Lily Safra Center for Brain Science, Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel
25
Mozafari M, Kheradpisheh SR, Masquelier T, Nowzari-Dalini A, Ganjtabesh M. First-Spike-Based Visual Categorization Using Reward-Modulated STDP. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2018; 29:6178-6190. [PMID: 29993898 DOI: 10.1109/tnnls.2018.2826721] [Citation(s) in RCA: 47] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Reinforcement learning (RL) has recently regained popularity with major achievements such as beating the European champion at the game of Go. Here, for the first time, we show that RL can be used efficiently to train a spiking neural network (SNN) to perform object recognition in natural images without using an external classifier. We used a feedforward convolutional SNN and a temporal coding scheme where the most strongly activated neurons fire first, while less activated ones fire later, or not at all. In the highest layers, each neuron was assigned to an object category, and it was assumed that the stimulus category was the category of the first neuron to fire. If this assumption was correct, the neuron was rewarded, i.e., spike-timing-dependent plasticity (STDP) was applied, which reinforced the neuron's selectivity. Otherwise, anti-STDP was applied, which encouraged the neuron to learn something else. As demonstrated on various image data sets (Caltech, ETH-80, and NORB), this reward-modulated STDP (R-STDP) approach extracts particularly discriminative visual features, whereas classic unsupervised STDP extracts any feature that consistently repeats; as a result, R-STDP outperformed STDP on these data sets. Furthermore, R-STDP is suitable for online learning and can adapt to drastic changes such as label permutations. Finally, it is worth mentioning that both feature extraction and classification were done with spikes, using at most one spike per neuron. Thus, the network is hardware friendly and energy efficient.
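The decision and learning rule the abstract describes, where the first output neuron to fire names the category and STDP is applied on reward (anti-STDP otherwise), can be sketched as follows. The binary first-wave input code, the toy dataset, and the learning rates are simplifying assumptions, not the paper's convolutional SNN:

```python
import numpy as np

rng = np.random.default_rng(4)

n_inputs, n_classes = 20, 3
W = rng.uniform(0.2, 0.8, (n_classes, n_inputs))    # bounded synaptic weights

def r_stdp_trial(W, spikes, label, a_plus=0.02, a_minus=0.02):
    """One trial of the first-spike decision plus reward-modulated STDP.

    `spikes` marks which inputs fired in the earliest wave; the output neuron
    with the largest drive is taken as first-to-fire.
    """
    winner = int(np.argmax(W @ spikes))
    reward = winner == label
    sign = 1.0 if reward else -1.0                   # STDP on reward, anti-STDP otherwise
    # Potentiate synapses from inputs that fired and depress the silent ones;
    # both terms flip sign when the decision was wrong.
    W[winner] += sign * (a_plus * spikes - a_minus * (1 - spikes))
    np.clip(W, 0.0, 1.0, out=W)
    return W, reward

def make_trial(label):
    """Toy stimulus: a class-specific block of inputs plus background noise."""
    spikes = (rng.random(n_inputs) < 0.1).astype(float)
    spikes[label * 6:(label + 1) * 6] = 1.0
    return spikes

hits = []
for _ in range(3000):
    label = int(rng.integers(n_classes))
    W, reward = r_stdp_trial(W, make_trial(label), label)
    hits.append(reward)

print(f"accuracy, last 300 trials: {np.mean(hits[-300:]):.2f}")
```

Because the reward signal only flips the sign of the plasticity, the network never needs an external classifier: the first-spike competition itself is the readout.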
26.
Greene E. New encoding concepts for shape recognition are needed. AIMS Neurosci 2018; 5:162-178. [PMID: 32341959] [PMCID: PMC7179345] [DOI: 10.3934/neuroscience.2018.3.162]
Abstract
Models designed to explain how shapes are perceived and stored by the nervous system commonly emphasize encoding of contour features, especially orientation, curvature, and linear extent. A number of experiments from my laboratory provide evidence that contours deliver a multitude of location markers, and shapes can be identified when relatively few of the markers are displayed. The emphasis on filtering for orientation and other contour features has directed attention away from full and effective examination of how the location information is registered and used for summarizing shapes. Neural network (connectionist) models try to deal with location information by modifying linkage among neuronal populations through training trials. Connections that are initially diffuse and not useful in achieving recognition get eliminated or changed in strength, resulting in selective response to a given shape. But results from my laboratory, reviewed here, demonstrate that unknown shapes that are displayed only once can be identified using a matching task. These findings show that our visual system can immediately encode shape information with no requirement for training trials. This encoding might be accomplished by neuronal circuits in the retina.
Affiliation(s)
- Ernest Greene, Laboratory for Neurometric Research, Department of Psychology, University of Southern California, Los Angeles, California, USA
27.
Hegdé J. Neural Mechanisms of High-Level Vision. Compr Physiol 2018; 8:903-953. [PMID: 29978891] [DOI: 10.1002/cphy.c160035]
Abstract
The last three decades have seen major strides in our understanding of neural mechanisms of high-level vision, or visual cognition of the world around us. Vision has also served as a model system for the study of brain function. Several broad insights, as yet incomplete, have recently emerged. First, visual perception is best understood not as an end unto itself, but as a sensory process that subserves the animal's behavioral goal at hand. Visual perception is likely to be simply a side effect that reflects the readout of visual information processing that leads to behavior. Second, the brain is essentially a probabilistic computational system that produces behaviors by collectively evaluating, not necessarily consciously or always optimally, the available information about the outside world received from the senses, the behavioral goals, prior knowledge about the world, and possible risks and benefits of a given behavior. Vision plays a prominent role in the overall functioning of the brain, providing the lion's share of information about the outside world. Third, the visual system does not function in isolation, but rather interacts actively and reciprocally with other brain systems, including other sensory faculties. Finally, various regions of the visual system process information not in a strict hierarchical manner, but as parts of various dynamic brain-wide networks, collectively referred to as the "connectome." Thus, a full understanding of vision will ultimately entail understanding, in granular, quantitative detail, various aspects of dynamic brain networks that use visual sensory information to produce behavior under real-world conditions. © 2017 American Physiological Society. Compr Physiol 8:903-953, 2018.
Affiliation(s)
- Jay Hegdé, Brain and Behavior Discovery Institute; James and Jean Culver Vision Discovery Institute; Department of Ophthalmology, Medical College of Georgia; and The Graduate School, Augusta University, Augusta, Georgia, USA
28.
Nordberg H, Hautus MJ, Greene E. Visual encoding of partial unknown shape boundaries. AIMS Neurosci 2018; 5:132-147. [PMID: 32341957] [PMCID: PMC7181889] [DOI: 10.3934/neuroscience.2018.2.132]
Abstract
Prior research has found that known shapes and letters can be recognized from a sparse sampling of dots that mark locations on their boundaries. Further, unknown shapes that are displayed only once can be identified by a matching protocol, and here also, above-chance performance requires very few boundary markers. The present work examines whether partial boundaries can be identified under similar low-information conditions. Several experiments were conducted that used a match-recognition task, with initial display of a target shape followed quickly by a comparison shape. The comparison shape was either derived from the target shape or was based on a different shape, and the respondent was asked for a matching judgment, i.e., did it "match" the target shape. Stimulus treatments first established how dot density affected the probability of a correct decision, and then assessed how much the positioning of boundary dots affected this probability. Results indicate that correct judgments were possible when partial boundaries were displayed with a sparse sampling of dots. We argue for a process that quickly registers the locations of boundary markers and distills that information into a shape summary that can be used to identify the shape even when only a portion of the boundary is represented.
Affiliation(s)
- Hannah Nordberg, Department of Psychology, University of Southern California, Los Angeles, California, USA
- Michael J Hautus, The School of Psychology, University of Auckland, Auckland, New Zealand
- Ernest Greene, Department of Psychology, University of Southern California, Los Angeles, California, USA
29.
Maxfield ND. Semantic and Phonological Encoding Times in Adults Who Stutter: Brain Electrophysiological Evidence. J Speech Lang Hear Res 2017; 60:2906-2923. [PMID: 28973156] [PMCID: PMC5945065] [DOI: 10.1044/2017_jslhr-l-16-0309]
Abstract
PURPOSE: Some psycholinguistic theories of stuttering propose that language production operates along a different time course in adults who stutter (AWS) versus typically fluent adults (TFA). However, behavioral evidence for such a difference has been mixed. Here, the time course of semantic and phonological encoding in picture naming was compared in AWS (n = 16) versus TFA (n = 16) by measuring 2 event-related potential (ERP) components: NoGo N200, an ERP index of response inhibition, and lateralized readiness potential, an ERP index of response preparation.
METHOD: Each trial required a semantic judgment about a picture in addition to a phonemic judgment about the target label of the picture. Judgments were mapped onto a dual-choice (Go-NoGo/left-right) push-button response paradigm. On each trial, ERP activity time-locked to picture onset was recorded at 32 scalp electrodes.
RESULTS: NoGo N200 was detected earlier to semantic NoGo trials than to phonemic NoGo trials in both groups, replicating previous evidence that semantic encoding generally precedes phonological encoding in language production. Moreover, N200 onset was earlier to semantic NoGo trials in TFA than in AWS, indicating that semantic information triggering response inhibition became available earlier in TFA versus AWS. In contrast, the time course of N200 activity to phonemic NoGo trials did not differ between groups. Lateralized readiness potential activity was influenced by strategic response preparation and, thus, could not be used to index real-time semantic and phonological encoding.
CONCLUSION: NoGo N200 results point to slowed semantic encoding in AWS versus TFA. Discussion considers possible factors in slowed semantic encoding in AWS and how fluency might be impacted by slowed semantic encoding.
Affiliation(s)
- Nathan D. Maxfield, Department of Communication Sciences and Disorders, University of South Florida, Tampa
30.
Birznieks I, Vickery RM. Spike Timing Matters in Novel Neuronal Code Involved in Vibrotactile Frequency Perception. Curr Biol 2017; 27:1485-1490.e2. [PMID: 28479322] [DOI: 10.1016/j.cub.2017.04.011]
Abstract
Skin vibrations sensed by tactile receptors contribute significantly to the perception of object properties during tactile exploration [1-4] and to sensorimotor control during object manipulation [5]. Sustained low-frequency skin vibration (<60 Hz) evokes a distinct tactile sensation referred to as flutter whose frequency can be clearly perceived [6]. How afferent spiking activity translates into the perception of frequency is still unknown. Measures based on mean spike rates of neurons in the primary somatosensory cortex are sufficient to explain performance in some frequency discrimination tasks [7-11]; however, there is emerging evidence that stimuli can be distinguished based also on temporal features of neural activity [12, 13]. Our study's advance is to demonstrate that temporal features are fundamental for vibrotactile frequency perception. Pulsatile mechanical stimuli were used to elicit specified temporal spike train patterns in tactile afferents, and subsequently psychophysical methods were employed to characterize human frequency perception. Remarkably, the most salient temporal feature determining vibrotactile frequency was not the underlying periodicity but, rather, the duration of the silent gap between successive bursts of neural activity. This burst gap code for frequency represents a previously unknown form of neural coding in the tactile sensory system, which parallels auditory pitch perception mechanisms based on purely temporal information where longer inter-pulse intervals receive higher perceptual weights than short intervals [14]. Our study also demonstrates that human perception of stimuli can be determined exclusively by temporal features of spike trains independent of the mean spike rate and without contribution from population response factors.
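The dissociation described above, between mean spike rate and the silent gap separating bursts, can be illustrated with a toy spike-train analysis. The burst-grouping threshold and the train parameters below are illustrative assumptions, not values from the study.

```python
import numpy as np

def mean_burst_gap(spike_times, intra_burst_max=0.005):
    """Mean silent gap (s) between successive bursts; spikes closer
    than intra_burst_max are treated as part of the same burst."""
    isis = np.diff(spike_times)
    return isis[isis > intra_burst_max].mean()

# Two 1-s trains with identical mean spike rates (30 spikes/s):
# (a) evenly spaced single spikes at 30 Hz;
# (b) 3-spike bursts (2 ms intra-burst spacing) repeating at 10 Hz.
singles = np.arange(30) / 30
bursts = np.concatenate([t + np.array([0.0, 0.002, 0.004])
                         for t in np.arange(10) / 10])

rate_a, rate_b = len(singles), len(bursts)              # both 30 spikes/s
gap_a, gap_b = mean_burst_gap(singles), mean_burst_gap(bursts)
```

Although the rates match exactly, the inter-burst gap in (b) is roughly three times longer than in (a), so a burst-gap code would assign the two trains very different perceived frequencies, a distinction that a pure rate code cannot make.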
Affiliation(s)
- Ingvars Birznieks, School of Medical Sciences, Faculty of Medicine, UNSW Sydney, Sydney, NSW 2052, Australia; Neuroscience Research Australia, Barker Street, Randwick, NSW 2031, Australia
- Richard M Vickery, School of Medical Sciences, Faculty of Medicine, UNSW Sydney, Sydney, NSW 2052, Australia; Neuroscience Research Australia, Barker Street, Randwick, NSW 2031, Australia
31.
Pitti A, Gaussier P, Quoy M. Iterative free-energy optimization for recurrent neural networks (INFERNO). PLoS One 2017; 12:e0173684. [PMID: 28282439] [PMCID: PMC5345841] [DOI: 10.1371/journal.pone.0173684]
Abstract
The intra-parietal lobe coupled with the basal ganglia forms a working memory that demonstrates strong planning capabilities for generating robust yet flexible neuronal sequences. Neurocomputational models, however, often fail to control long-range neural synchrony in recurrent spiking networks because of spontaneous activity. In a novel framework based on the free-energy principle, we propose to treat the problem of spike synchrony as an optimization of the neurons' sub-threshold activity for the generation of long neuronal chains. Using stochastic gradient descent, a reinforcement signal (presumably dopaminergic) evaluates the quality of one input vector to move the recurrent neural network toward a desired activity; depending on the error made, this input vector is either strengthened to hill-climb the gradient or changed to search for another solution. This vector can then be learned by an associative memory, as a model of the basal ganglia, to control the recurrent neural network. Experiments on habit learning and on sequence retrieval demonstrate the capability of the dual system to generate very long and precise spatio-temporal sequences, exceeding two hundred iterations. Its features are then applied to the sequential planning of arm movements. In line with neurobiological theories, we discuss its relevance for modeling the cortico-basal working memory that initiates flexible, goal-directed neuronal chains of causation, and its relation to novel architectures such as Deep Networks, Neural Turing Machines, and the free-energy principle.
Affiliation(s)
- Alexandre Pitti, ETIS Laboratory, CNRS UMR 8051, University of Cergy-Pontoise, ENSEA, Paris-Seine, Cergy-Pontoise, France
- Philippe Gaussier, ETIS Laboratory, CNRS UMR 8051, University of Cergy-Pontoise, ENSEA, Paris-Seine, Cergy-Pontoise, France
- Mathias Quoy, ETIS Laboratory, CNRS UMR 8051, University of Cergy-Pontoise, ENSEA, Paris-Seine, Cergy-Pontoise, France
32.
Abstract
What is the degree to which knowledge influences visual perceptual processes? This question, which is central to the seeing-versus-thinking debate in cognitive science, is often discussed using examples claimed to be proof of one stance or another. It has, however, also been muddled by the usage of different and unclear definitions of perception. Here, for the well-defined process of perceptual organization, I argue that including speed (or efficiency) into the equation opens a new perspective on the limits of top-down influences of thinking on seeing. While the input of the perceptual organization process may be modifiable and its output enrichable, the process itself seems so fast (or efficient) that thinking hardly has time to intrude and is effective mostly after the fact.
33.
Abstract
Focusing on visual perceptual organization, this article contrasts the free-energy (FE) version of predictive coding (a recent Bayesian approach) to structural coding (a long-standing representational approach). Both use free-energy minimization as metaphor for processing in the brain, but their formal elaborations of this metaphor are fundamentally different. FE predictive coding formalizes it by minimization of prediction errors, whereas structural coding formalizes it by minimization of the descriptive complexity of predictions. Here, both sides are evaluated. A conclusion regarding competence is that FE predictive coding uses a powerful modeling technique, but that structural coding has more explanatory power. A conclusion regarding performance is that FE predictive coding-though more detailed in its account of neurophysiological data-provides a less compelling cognitive architecture than that of structural coding, which, for instance, supplies formal support for the computationally powerful role it attributes to neuronal synchronization.
34.
Pitti A, Pugach G, Gaussier P, Shimada S. Spatio-Temporal Tolerance of Visuo-Tactile Illusions in Artificial Skin by Recurrent Neural Network with Spike-Timing-Dependent Plasticity. Sci Rep 2017; 7:41056. [PMID: 28106139] [PMCID: PMC5247701] [DOI: 10.1038/srep41056]
Abstract
Perceptual illusions across multiple modalities, such as the rubber-hand illusion, show how dynamic the brain is at adapting its body image and at determining what is part of it (the self) and what is not (others). Several studies have shown that redundancy and contingency among sensory signals are essential for perception of the illusion, and that a lag of 200-300 ms is the critical limit for the brain to represent one's own body. In an experimental setup with an artificial skin, we replicate the visuo-tactile illusion within artificial neural networks. Our model is composed of an associative map and a recurrent map of spiking neurons that learn to predict the contingent activity across the visuo-tactile signals. Depending on the temporal delay incidentally added between the visuo-tactile signals or the spatial distance between two distinct stimuli, the two maps detect contingency differently. Spiking neurons organized into complex networks, together with synchrony detection at different temporal intervals, can thus explain multisensory integration regarding the self-body.
Affiliation(s)
- Alexandre Pitti, ETIS Laboratory, UMR CNRS 8051, University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France
- Ganna Pugach, ETIS Laboratory, UMR CNRS 8051, University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France; Energy and Metallurgy Department, Donetsk National Technical University, Krasnoarmeysk, Ukraine
- Philippe Gaussier, ETIS Laboratory, UMR CNRS 8051, University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France
- Sotaro Shimada, Dept. of Electronics and Bioinformatics, School of Science and Technology, Meiji University, Kawasaki, Japan
35.
Liu Q, Pineda-García G, Stromatias E, Serrano-Gotarredona T, Furber SB. Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation. Front Neurosci 2016; 10:496. [PMID: 27853419] [PMCID: PMC5090001] [DOI: 10.3389/fnins.2016.00496]
Abstract
Today, increasing attention is being paid to research into spike-based neural computation both to gain a better understanding of the brain and to explore biologically-inspired computation. Within this field, the primate visual pathway and its hierarchical organization have been extensively studied. Spiking Neural Networks (SNNs), inspired by the understanding of observed biological structure and function, have been successfully applied to visual recognition and classification tasks. In addition, implementations on neuromorphic hardware have enabled large-scale networks to run in (or even faster than) real time, making spike-based neural vision processing accessible on mobile robots. Neuromorphic sensors such as silicon retinas are able to feed such mobile systems with real-time visual stimuli. A new set of vision benchmarks for spike-based neural processing are now needed to measure progress quantitatively within this rapidly advancing field. We propose that a large dataset of spike-based visual stimuli is needed to provide meaningful comparisons between different systems, and a corresponding evaluation methodology is also required to measure the performance of SNN models and their hardware implementations. In this paper we first propose an initial NE (Neuromorphic Engineering) dataset based on standard computer vision benchmarks and that uses digits from the MNIST database. This dataset is compatible with the state of current research on spike-based image recognition. The corresponding spike trains are produced using a range of techniques: rate-based Poisson spike generation, rank order encoding, and recorded output from a silicon retina with both flashing and oscillating input stimuli. In addition, a complementary evaluation methodology is presented to assess both model-level and hardware-level performance.
Finally, we demonstrate the use of the dataset and the evaluation methodology using two SNN models to validate the performance of the models and their hardware implementations. With this dataset we hope to (1) promote meaningful comparison between algorithms in the field of neural computation, (2) allow comparison with conventional image recognition methods, (3) provide an assessment of the state of the art in spike-based visual recognition, and (4) help researchers identify future directions and advance the field.
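The rate-based Poisson spike generation mentioned in the abstract can be sketched with a simple Bernoulli approximation. The intensity scaling, window length, and names below are illustrative assumptions rather than the dataset's exact parameters.

```python
import numpy as np

def poisson_spike_train(intensity, duration=1.0, max_rate=100.0, dt=0.001, rng=None):
    """Approximate a Poisson spike train whose rate is proportional to a
    normalized pixel intensity in [0, 1]: in each dt-wide bin a spike
    occurs with probability rate * dt (valid while rate * dt << 1).
    Returns spike times in seconds."""
    rng = np.random.default_rng(0) if rng is None else rng
    p = intensity * max_rate * dt
    spikes = rng.random(round(duration / dt)) < p
    return np.flatnonzero(spikes) * dt

rng = np.random.default_rng(42)
bright = poisson_spike_train(0.9, rng=rng)   # expected ~90 spikes/s
dark = poisson_spike_train(0.1, rng=rng)     # expected ~10 spikes/s
```

Encoding every pixel of an image this way yields the kind of stochastic spike-based stimulus the dataset records; rank order encoding, by contrast, maps higher intensities to earlier single spikes.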
Affiliation(s)
- Qian Liu, Advanced Processor Technologies Research Group, School of Computer Science, University of Manchester, Manchester, UK
- Garibaldi Pineda-García, Advanced Processor Technologies Research Group, School of Computer Science, University of Manchester, Manchester, UK
- Steve B. Furber, Advanced Processor Technologies Research Group, School of Computer Science, University of Manchester, Manchester, UK
36.
Onken A, Liu JK, Karunasekara PPCR, Delis I, Gollisch T, Panzeri S. Using Matrix and Tensor Factorizations for the Single-Trial Analysis of Population Spike Trains. PLoS Comput Biol 2016; 12:e1005189. [PMID: 27814363] [PMCID: PMC5096699] [DOI: 10.1371/journal.pcbi.1005189]
Abstract
Advances in neuronal recording techniques are leading to ever larger numbers of simultaneously monitored neurons. This poses the important analytical challenge of how to capture compactly all sensory information that neural population codes carry in their spatial dimension (differences in stimulus tuning across neurons at different locations), in their temporal dimension (temporal neural response variations), or in their combination (temporally coordinated neural population firing). Here we investigate the utility of tensor factorizations of population spike trains along space and time. These factorizations decompose a dataset of single-trial population spike trains into spatial firing patterns (combinations of neurons firing together), temporal firing patterns (temporal activation of these groups of neurons) and trial-dependent activation coefficients (strength of recruitment of such neural patterns on each trial). We validated various factorization methods on simulated data and on populations of ganglion cells simultaneously recorded in the salamander retina. We found that single-trial tensor space-by-time decompositions provided low-dimensional, data-robust representations of spike trains that capture efficiently both their spatial and temporal information about sensory stimuli. Tensor decompositions with orthogonality constraints were the most efficient in extracting sensory information, whereas non-negative tensor decompositions worked well even on non-independent and overlapping spike patterns, and retrieved informative firing patterns expressed by the same population in response to novel stimuli. Our method showed that populations of retinal ganglion cells carried information in their spike timing, on the ten-millisecond scale, about spatial details of natural images. This information could not be recovered from the spike counts of these cells.
First-spike latencies carried the majority of information provided by the whole spike train about fine-scale image features, and supplied almost as much information about coarse natural image features as firing rates. Together, these results highlight the importance of spike timing, and particularly of first-spike latencies, in retinal coding.
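The space-by-time idea, decomposing single-trial responses into spatial modules, temporal modules, and trial-dependent coefficients, can be illustrated with a toy numpy sketch. This uses a plain SVD of the trial-averaged response rather than the paper's non-negative or tensor algorithms, and all sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_neurons, n_time = 40, 6, 20

# Ground truth: two modules, each a group of neurons with its own
# temporal profile, recruited with trial-dependent strengths.
spatial = np.zeros((n_neurons, 2))
spatial[:3, 0] = 1.0                    # module 1: neurons 0-2
spatial[3:, 1] = 1.0                    # module 2: neurons 3-5
temporal = np.zeros((2, n_time))
temporal[0, :10] = 1.0                  # module 1 active early
temporal[1, 10:] = 1.0                  # module 2 active late
coef = np.column_stack([rng.uniform(1.0, 2.0, n_trials),   # strong module
                        rng.uniform(0.2, 0.8, n_trials)])  # weak module

trials = np.einsum('kc,nc,ct->knt', coef, spatial, temporal)
trials += 0.05 * rng.standard_normal(trials.shape)          # noise

# Rank-2 space-by-time decomposition of the trial average via SVD
# (orthogonal modules, as in the orthogonality-constrained variants).
U, s, Vt = np.linalg.svd(trials.mean(axis=0))
B_space, B_time = U[:, :2], Vt[:2]      # spatial and temporal modules

# Trial-dependent activation coefficients: H_k = B_space^T R_k B_time^T
H = np.einsum('nc,knt,dt->kcd', B_space, trials, B_time)
recovered = H[:, 0, 0]                  # activation of the dominant module
r = np.corrcoef(recovered, coef[:, 0])[0, 1]
```

The entries of each `H_k` play the role of the trial-dependent activation coefficients: on this toy data the recovered first-module activations correlate almost perfectly with the ground-truth recruitment strengths, even though the decomposition never sees the true modules.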
Affiliation(s)
- Arno Onken, Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy
- Jian K. Liu, Department of Ophthalmology, University Medical Center Goettingen, Goettingen, Germany; Bernstein Center for Computational Neuroscience Goettingen, Goettingen, Germany
- P. P. Chamanthi R. Karunasekara, Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy; Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy
- Ioannis Delis, Department of Biomedical Engineering, Columbia University, New York, New York, United States of America
- Tim Gollisch, Department of Ophthalmology, University Medical Center Goettingen, Goettingen, Germany; Bernstein Center for Computational Neuroscience Goettingen, Goettingen, Germany
- Stefano Panzeri, Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy
37.
Engelmann J, Walther T, Grant K, Chicca E, Gómez-Sena L. Modeling latency code processing in the electric sense: from the biological template to its VLSI implementation. Bioinspir Biomim 2016; 11:055007. [PMID: 27623047] [DOI: 10.1088/1748-3190/11/5/055007]
Abstract
How sensory information is coded under the temporal constraints of natural behavior is not yet well resolved. There is a growing consensus that spike timing or latency coding can maximally exploit the timing of neural events to make fast computing elements and that such mechanisms are essential to information processing functions in the brain. The electric sense of mormyrid fish provides a convenient biological model where this coding scheme can be studied. The sensory input is a physically ordered spatial pattern of current densities, which is coded in the precise timing of primary afferent spikes. The neural circuits of the processing pathway are well known and the system exhibits the best known illustration of corollary discharge, which provides the reference to decoding the sensory afferent latency pattern. A theoretical model has been constructed from available electrophysiological and neuroanatomical data to integrate the principal traits of the neural processing structure and to study sensory interaction with motor-command-driven corollary discharge signals. This has been used to explore neural coding strategies at successive stages in the network and to examine the simulated network capacity to reproduce output neuron responses. The model shows that the network has the ability to resolve primary afferent spike timing differences in the sub-millisecond range, and that this depends on the coincidence of sensory and corollary discharge-driven gating signals. In the integrative and output stages of the network, corollary discharge sets up a proactive background filter, providing temporally structured excitation and inhibition within the network whose balance is then modulated locally by sensory input. This complements the initial gating mechanism and contributes to amplification of the input pattern of latencies, conferring network hyperacuity.
These mechanisms give the system a robust capacity to extract behaviorally meaningful features of the electric image with high sensitivity over a broad working range. Since the network largely depends on spike timing, we finally discuss its suitability for implementation in robotic applications based on neuromorphic hardware.
Affiliation(s)
- Jacob Engelmann, Bielefeld University, Faculty of Biology/CITEC, AG Active Sensing, Universitätsstraße 25, 33615 Bielefeld, Germany
38.
A cognitive architecture account of the visual local advantage phenomenon in autism spectrum disorders. Vision Res 2016; 126:278-290. [DOI: 10.1016/j.visres.2015.04.009]
39.
There Is a "U" in Clutter: Evidence for Robust Sparse Codes Underlying Clutter Tolerance in Human Vision. J Neurosci 2016; 35:14148-14159. [PMID: 26490856] [DOI: 10.1523/jneurosci.1211-15.2015]
Abstract
The ability to recognize objects in clutter is crucial for human vision, yet the underlying neural computations remain poorly understood. Previous single-unit electrophysiology recordings in inferotemporal cortex in monkeys and fMRI studies of object-selective cortex in humans have shown that the responses to pairs of objects can sometimes be well described as a weighted average of the responses to the constituent objects. Yet, from a computational standpoint, it is not clear how the challenge of object recognition in clutter can be solved if downstream areas must disentangle the identity of an unknown number of individual objects from the confounded average neuronal responses. An alternative idea is that recognition is based on a subpopulation of neurons that are robust to clutter, i.e., that do not show response averaging, but rather robust object-selective responses in the presence of clutter. Here we show that simulations using the HMAX model of object recognition in cortex can fit the aforementioned single-unit and fMRI data, showing that the averaging-like responses can be understood as the result of responses of object-selective neurons to suboptimal stimuli. Moreover, the model shows how object recognition can be achieved by a sparse readout of neurons whose selectivity is robust to clutter. Finally, the model provides a novel prediction about human object recognition performance, namely, that target recognition ability should show a U-shaped dependency on the similarity of simultaneously presented clutter objects. This prediction is confirmed experimentally, supporting a simple, unifying model of how the brain performs object recognition in clutter.
SIGNIFICANCE STATEMENT: The neural mechanisms underlying object recognition in cluttered scenes (i.e., containing more than one object) remain poorly understood. Studies have suggested that neural responses to multiple objects correspond to an average of the responses to the constituent objects. Yet, it is unclear how the identities of an unknown number of objects could be disentangled from a confounded average response. Here, we use a popular computational biological vision model to show that averaging-like responses can result from responses of clutter-tolerant neurons to suboptimal stimuli. The model also provides a novel prediction, that human detection ability should show a U-shaped dependency on target-clutter similarity, which is confirmed experimentally, supporting a simple, unifying account of how the brain performs object recognition in clutter.
40.
Miller KJ, Schalk G, Hermes D, Ojemann JG, Rao RPN. Spontaneous Decoding of the Timing and Content of Human Object Perception from Cortical Surface Recordings Reveals Complementary Information in the Event-Related Potential and Broadband Spectral Change. PLoS Comput Biol 2016; 12:e1004660. [PMID: 26820899] [PMCID: PMC4731148] [DOI: 10.1371/journal.pcbi.1004660]
Abstract
The link between object perception and neural activity in visual cortical areas is a problem of fundamental importance in neuroscience. Here we show that electrical potentials from the ventral temporal cortical surface in humans contain sufficient information for spontaneous and near-instantaneous identification of a subject's perceptual state. Electrocorticographic (ECoG) arrays were placed on the subtemporal cortical surface of seven epilepsy patients. Grayscale images of faces and houses were displayed rapidly in random sequence. We developed a template projection approach to decode the continuous ECoG data stream spontaneously, predicting the occurrence, timing and type of visual stimulus. In this setting, we evaluated the independent and joint use of two well-studied features of brain signals, broadband changes in the frequency power spectrum of the potential and deflections in the raw potential trace (event-related potential; ERP). Our ability to predict both the timing of stimulus onset and the type of image was best when we used a combination of both the broadband response and ERP, suggesting that they capture different and complementary aspects of the subject's perceptual state. Specifically, we were able to predict the timing and type of 96% of all stimuli, with less than 5% false positive rate and a ~20ms error in timing.
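The template-projection idea described in this abstract — correlating a per-class response template against the continuous data stream and flagging supra-threshold matches as stimulus events — can be illustrated with a minimal sketch. This is a generic illustration, not the authors' implementation: the normalised-correlation score, the local-maximum rule, and the threshold value are all illustrative assumptions.

```python
import math

def template_projection(signal, template, threshold):
    """Slide a response template along a continuous signal and report
    event onsets where the normalised correlation peaks above threshold."""
    w = len(template)
    # mean-remove and unit-normalise the template once
    t_mean = sum(template) / w
    t = [x - t_mean for x in template]
    t_norm = math.sqrt(sum(x * x for x in t))
    t = [x / t_norm for x in t]
    scores = []
    for i in range(len(signal) - w + 1):
        seg = signal[i:i + w]
        m = sum(seg) / w
        seg = [x - m for x in seg]
        norm = math.sqrt(sum(x * x for x in seg))
        # normalised correlation of this window with the template
        scores.append(sum(a * b for a, b in zip(seg, t)) / norm if norm > 0 else 0.0)
    # detected onsets: supra-threshold local maxima of the score trace
    onsets = [i for i in range(1, len(scores) - 1)
              if scores[i] > threshold
              and scores[i] >= scores[i - 1] and scores[i] >= scores[i + 1]]
    return onsets, scores

# synthetic check: embed a known template at sample 50 of a flat signal
template = [math.sin(math.pi * k / 19) for k in range(20)]
signal = [0.0] * 200
for k in range(20):
    signal[50 + k] += template[k]
onsets, scores = template_projection(signal, template, threshold=0.9)
```

In the paper the decoder combines two such feature streams (broadband power and the raw-potential ERP); the sketch shows only the single-template matching step.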
Affiliation(s)
- Kai J. Miller
- Departments of Neurosurgery, Stanford University, Stanford, California, United States of America
- NASA—Johnson Space Center, Houston, Texas, United States of America
- Program in Neurobiology and Behavior, University of Washington, Seattle, Washington, United States of America
- Gerwin Schalk
- National Center for Adaptive Neurotechnologies, Wadsworth Center, New York State Department of Health, Albany, New York, United States of America
- Dora Hermes
- Psychology, Stanford University, Stanford, California, United States of America
- Jeffrey G. Ojemann
- Program in Neurobiology and Behavior, University of Washington, Seattle, Washington, United States of America
- Department of Neurological Surgery, University of Washington, Seattle, Washington, United States of America
- Center for Sensorimotor Neural Engineering, University of Washington, Seattle, Washington, United States of America
- Rajesh P. N. Rao
- Program in Neurobiology and Behavior, University of Washington, Seattle, Washington, United States of America
- Center for Sensorimotor Neural Engineering, University of Washington, Seattle, Washington, United States of America
- Computer Science and Engineering, University of Washington, Seattle, Washington, United States of America
41
42
43
44
Clarke A. Dynamic information processing states revealed through neurocognitive models of object semantics. Language, Cognition and Neuroscience 2015; 30:409-419. [PMID: 25745632] [PMCID: PMC4337742] [DOI: 10.1080/23273798.2014.970652]
Abstract
Recognising objects relies on highly dynamic, interactive brain networks to process multiple aspects of object information. To fully understand how different forms of information about objects are represented and processed in the brain requires a neurocognitive account of visual object recognition that combines a detailed cognitive model of semantic knowledge with a neurobiological model of visual object processing. Here we ask how specific cognitive factors are instantiated in our mental processes and how they dynamically evolve over time. We suggest that coarse semantic information, based on generic shared semantic knowledge, is rapidly extracted from visual inputs and is sufficient to drive rapid category decisions. Subsequent recurrent neural activity between the anterior temporal lobe and posterior fusiform supports the formation of object-specific semantic representations - a conjunctive process primarily driven by the perirhinal cortex. These object-specific representations require the integration of shared and distinguishing object properties and support the unique recognition of objects. We conclude that a valuable way of understanding the cognitive activity of the brain is through testing the relationship between specific cognitive measures and dynamic neural activity. This kind of approach allows us to move towards uncovering the information processing states of the brain and how they evolve over time.
Affiliation(s)
- Alex Clarke
- Department of Psychology, University of Cambridge, Cambridge, UK
45
Evans BD, Stringer SM. STDP in lateral connections creates category-based perceptual cycles for invariance learning with multiple stimuli. Biological Cybernetics 2015; 109:215-239. [PMID: 25488769] [PMCID: PMC4366549] [DOI: 10.1007/s00422-014-0637-z]
Abstract
Learning to recognise objects and faces is an important and challenging problem tackled by the primate ventral visual system. One major difficulty lies in recognising an object despite profound differences in the retinal images it projects, due to changes in view, scale, position and other identity-preserving transformations. Several models of the ventral visual system have been successful in coping with these issues, but have typically been privileged by exposure to only one object at a time. In natural scenes, however, the challenges of object recognition are typically further compounded by the presence of several objects which should be perceived as distinct entities. In the present work, we explore one possible mechanism by which the visual system may overcome these two difficulties simultaneously, through segmenting unseen (artificial) stimuli using information about their category encoded in plastic lateral connections. We demonstrate that these experience-guided lateral interactions robustly organise input representations into perceptual cycles, allowing feed-forward connections trained with spike-timing-dependent plasticity to form independent, translation-invariant output representations. We present these simulations as a functional explanation for the role of plasticity in the lateral connectivity of visual cortex.
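The spike-timing-dependent plasticity invoked in this abstract is, in its canonical pair-based form, a simple exponential function of the pre/post spike-time difference. The sketch below shows that standard rule; the parameter values are illustrative assumptions, not those used in the paper's simulations.

```python
import math

def stdp_dw(delta_t_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based STDP weight change for one pre/post spike pair.

    delta_t_ms = t_post - t_pre. Pre-before-post (positive delta_t)
    potentiates the synapse; post-before-pre depresses it, each with an
    exponential dependence on the spike-time difference.
    """
    if delta_t_ms > 0:
        return a_plus * math.exp(-delta_t_ms / tau_ms)
    elif delta_t_ms < 0:
        return -a_minus * math.exp(delta_t_ms / tau_ms)
    return 0.0
```

In the model this rule is applied to feed-forward connections; the lateral connections it organises use the same timing-dependent principle to bind category members into the perceptual cycles described above.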
Affiliation(s)
- Benjamin D Evans
- Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, Department of Experimental Psychology, University of Oxford, Oxford, UK
46
Exploiting the gain-modulation mechanism in parieto-motor neurons: Application to visuomotor transformations and embodied simulation. Neural Netw 2015; 62:102-11. [DOI: 10.1016/j.neunet.2014.08.009]
47
Ray S, Maunsell JH. Do gamma oscillations play a role in cerebral cortex? Trends Cogn Sci 2015; 19:78-85. [PMID: 25555444] [PMCID: PMC5403517] [DOI: 10.1016/j.tics.2014.12.002]
Abstract
Gamma rhythm (which has a center frequency between 30 and 80 Hz) is modulated by cognitive mechanisms such as attention and memory, and has been hypothesized to play a role in mediating these processes by supporting communication channels between cortical areas or encoding information in its phase. We highlight several issues related to gamma rhythms, such as low and inconsistent power, its dependence on low-level stimulus features, problems due to conduction delays, and contamination due to spike-related activity that makes accurate estimation of gamma phase difficult. Gamma rhythm could be a potentially useful signature of excitation-inhibition interactions in the brain, but whether it also provides a mechanism for information processing or coding remains an open question.
Affiliation(s)
- Supratim Ray
- Centre for Neuroscience, Indian Institute of Science, Bangalore 560012, India
- John H. R. Maunsell
- Department of Neurobiology, University of Chicago, 5812 S Ellis Avenue, MC0912, Chicago, IL 60637, USA
48
de Froment AJ, Rubenstein DI, Levin SA. An extra dimension to decision-making in animals: the three-way trade-off between speed, effort per-unit-time and accuracy. PLoS Comput Biol 2014; 10:e1003937. [PMID: 25522281] [PMCID: PMC4270426] [DOI: 10.1371/journal.pcbi.1003937]
Abstract
The standard view in biology is that all animals, from bumblebees to human beings, face a trade-off between speed and accuracy as they search for resources and mates, and attempt to avoid predators. For example, the more time a forager spends out of cover gathering information about potential food sources the more likely it is to make accurate decisions about which sources are most rewarding. However, when the cost of time spent out of cover rises (e.g. in the presence of a predator) the optimal strategy is for the forager to spend less time gathering information and to accept a corresponding decline in the accuracy of its decisions. We suggest that this familiar picture is missing a crucial dimension: the amount of effort an animal expends on gathering information in each unit of time. This is important because an animal that can respond to changing time costs by modulating its level of effort per-unit-time does not have to accept the same decrease in accuracy that an animal limited to a simple speed-accuracy trade-off must bear in the same situation. Instead, it can direct additional effort towards (i) reducing the frequency of perceptual errors in the samples it gathers or (ii) increasing the number of samples it gathers per-unit-time. Both of these have the effect of allowing it to gather more accurate information within a given period of time. We use a modified version of a canonical model of decision-making (the sequential probability ratio test) to show that this ability to substitute effort for time confers a fitness advantage in the face of changing time costs. We predict that the ability to modulate effort levels will therefore be widespread in nature, and we lay out testable predictions that could be used to detect adaptive modulation of effort levels in laboratory and field studies. 
Our understanding of decision-making in all species, including our own, will be improved by this more ecologically-complete picture of the three-way trade-off between time, effort per-unit-time and accuracy.

Efficient decision-making is vital to the lives of all animals, but the underlying principles of how they achieve this are not yet fully understood. Researchers studying decision-making have generally assumed that animals balance a two-way trade-off between speed and accuracy: the more time they spend gathering information, the more accurate their decisions will be, but the greater the cost they have to pay. We suggest that this picture is missing a crucial component: the effort that animals spend on gathering information within each unit of time. This is important because an animal that can change the amount of effort it invests per-unit-time can use this ability to maintain the accuracy of its decisions even when it reduces the amount of time it spends on them, and can therefore gain a fitness advantage. We predict that this ability to change effort levels should therefore be widespread in nature. This updated view of a three-way trade-off between speed, effort per-unit-time and accuracy will help behavioral ecologists, neuroscientists, economists and psychologists to understand decision-making better, and may also lead to the development of more efficient control algorithms for robot decision-makers.
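The canonical model this paper modifies, Wald's sequential probability ratio test, can be sketched briefly: accumulate the log-likelihood ratio of the evidence sample by sample and stop as soon as it crosses either decision boundary. The Bernoulli observation model, the error rates, and the fixed per-sample effort here are illustrative assumptions; the paper's contribution is precisely to let effort per-unit-time vary, which this sketch does not model.

```python
import math
import random

def sprt(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for a Bernoulli rate.

    Accumulates the log-likelihood ratio over samples and stops as soon
    as it crosses a decision boundary; returns the decision and the
    number of samples consumed.
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log-likelihood ratio increment for one Bernoulli observation
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

random.seed(0)
# data generated under H1 (success rate 0.7 vs. a null of 0.3)
data = [random.random() < 0.7 for _ in range(200)]
decision, n_used = sprt(data, p0=0.3, p1=0.7)
```

The early-stopping behaviour is what produces the speed-accuracy trade-off: tightening alpha and beta raises the boundaries, so more samples (more time) are needed before a decision is reached.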
Affiliation(s)
- Adrian J. de Froment
- Department of Ecology and Evolutionary Biology, Princeton University, Princeton, New Jersey, United States of America
- Daniel I. Rubenstein
- Department of Ecology and Evolutionary Biology, Princeton University, Princeton, New Jersey, United States of America
- Simon A. Levin
- Department of Ecology and Evolutionary Biology, Princeton University, Princeton, New Jersey, United States of America
49
Brand J, Johnson AP. Attention to local and global levels of hierarchical Navon figures affects rapid scene categorization. Front Psychol 2014; 5:1274. [PMID: 25520675] [PMCID: PMC4251296] [DOI: 10.3389/fpsyg.2014.01274]
Abstract
In four experiments, we investigated how attention to local and global levels of hierarchical Navon figures affected the selection of diagnostic spatial scale information used in scene categorization. We explored this issue by asking observers to classify hybrid images (i.e., images that contain the low spatial frequency (LSF) content of one image and the high spatial frequency (HSF) content of a second image) immediately following global and local Navon tasks. Hybrid images can be classified according to either their LSF or HSF content, making them ideal for investigating diagnostic spatial scale preference. Although observers were sensitive to both spatial scales (Experiment 1), they overwhelmingly preferred to classify hybrids based on LSF content (Experiment 2). In Experiment 3, we demonstrated that LSF-based hybrid categorization was faster following global Navon tasks, suggesting that the LSF processing associated with global Navon tasks primed the selection of LSFs in hybrid images. Experiment 4 examined this hypothesis by replicating Experiment 3 while suppressing the LSF information in the Navon letters through contrast balancing of the stimuli. As in Experiment 3, observers preferred to classify hybrids based on LSF content; in contrast, however, LSF-based hybrid categorization was slower following global than local Navon tasks.
Affiliation(s)
- John Brand
- Department of Psychology, Concordia University, Montreal, QC, Canada
- Aaron P. Johnson
- Department of Psychology, Concordia University, Montreal, QC, Canada; Centre for Interdisciplinary Research in Rehabilitation of Greater Montreal, Montreal, QC, Canada
50
Leffel T, Lauter M, Westerlund M, Pylkkänen L. Restrictive vs. non-restrictive composition: a magnetoencephalography study. Language, Cognition and Neuroscience 2014; 29:1191-1204. [PMID: 25379512] [PMCID: PMC4205928] [DOI: 10.1080/23273798.2014.956765]
Abstract
Recent research on the brain mechanisms underlying language processing has implicated the left anterior temporal lobe (LATL) as a central region for the composition of simple phrases. Because these studies typically present their critical stimuli without contextual information, the sensitivity of LATL responses to contextual factors is unknown. In this magnetoencephalography (MEG) study, we employed a simple question-answer paradigm to manipulate whether a prenominal adjective or determiner is interpreted restrictively, i.e., as limiting the set of entities under discussion. Our results show that the LATL is sensitive to restriction, with restrictive composition eliciting higher responses than non-restrictive composition. However, this effect was only observed when the restricting element was a determiner, with adjectival stimuli showing the opposite pattern, which we hypothesise to be driven by the special pragmatic properties of non-restrictive adjectives. Overall, our results demonstrate a robust sensitivity of the LATL to high-level contextual and potentially also pragmatic factors.
Collapse
Affiliation(s)
- Timothy Leffel
- Department of Linguistics, New York University, 10 Washington Place, New York, NY 10003, USA
- Miriam Lauter
- Department of Linguistics, New York University, 10 Washington Place, New York, NY 10003, USA
- Department of Psychology, New York University, 6 Washington Place, New York, NY 10003, USA
- NYUAD Institute, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates
- Masha Westerlund
- Department of Psychology, New York University, 6 Washington Place, New York, NY 10003, USA
- NYUAD Institute, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates
- Liina Pylkkänen
- Department of Linguistics, New York University, 10 Washington Place, New York, NY 10003, USA
- Department of Psychology, New York University, 6 Washington Place, New York, NY 10003, USA
- NYUAD Institute, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates