1. White PA. The perceptual timescape: Perceptual history on the sub-second scale. Cogn Psychol 2024; 149:101643. [PMID: 38452720] [DOI: 10.1016/j.cogpsych.2024.101643]
Abstract
There is a high-capacity store of brief time span (∼1000 ms), often called iconic memory or sensory memory, into which information enters from perceptual processing. It is proposed that a main function of this store is to hold recent perceptual information in a temporally segregated representation, named the perceptual timescape. The perceptual timescape is a continually active representation of change and continuity over time that endows the perceived present with a perceived history. This is accomplished primarily by two kinds of time-marking information: time distance information, which marks all items of information in the perceptual timescape according to how far in the past they occurred, and ordinal temporal information, which organises items of information in terms of their temporal order. Added to that is information about the connectivity of perceptual objects over time. These kinds of information connect individual items over a brief span of time so as to represent change, persistence, and continuity over time. It is argued that there is a one-way street of information flow from perceptual processing either to the perceived present or directly into the perceptual timescape, and thence to working memory. Consistent with that, the information structure of the perceptual timescape supports postdictive reinterpretations of recent perceptual information. Temporal integration on a time scale of hundreds of milliseconds takes place in perceptual processing and does not draw on information in the perceptual timescape, which is concerned with temporal segregation, not integration.
Affiliation(s)
- Peter A White
- School of Psychology, Cardiff University, Tower Building, Park Place, Cardiff, Wales CF10 3YG, United Kingdom.
2. Burr DC, Morrone MC. The role of neural oscillations in visuo-motor communication at the time of saccades. Neuropsychologia 2023; 190:108682. [PMID: 37717722] [DOI: 10.1016/j.neuropsychologia.2023.108682]
Abstract
Saccadic eye movements are fundamental for active vision, allowing observers to purposefully scan the environment with the high-resolution fovea. In this brief perspective we outline a series of experiments from our laboratories investigating the role of eye movements and their consequences for active perception. We show that saccades lead to suppression of visual sensitivity at saccadic onset, and that this suppression is accompanied by endogenous neural oscillations in the delta range. Similar oscillations are initiated by purposeful hand movements, which lead to measurable changes in responsivity in area V1 and in its connectivity with motor area M1. Saccades also lead to clear distortions in apparent position, but only for verbal reports, not when participants respond with rapid pointing, consistent with the action of two separate visual systems in neurotypical adults. At the time of saccades, serial dependence, the positive influence of previous stimulus attributes (such as orientation) on perception, is particularly strong. Again, these processes are accompanied by neural oscillations, in the alpha and low beta range. In general, oscillations seem to be tightly linked to serial dependence in perception, both in auditory judgments (around 10 Hz) and in visual judgments of face gender (14 Hz for female, 17 Hz for male). Taken together, the studies show that neural oscillations play a fundamental role in dynamic, active vision.
Affiliation(s)
- David C Burr
- Department of Neuroscience, Psychology, Pharmacology, and Child Health, University of Florence, 50135, Florence, Italy; School of Psychology, University of Sydney, Australia.
- Maria Concetta Morrone
- Department of Neuroscience, Psychology, Pharmacology, and Child Health, University of Florence, 50135, Florence, Italy; Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, via San Zeno 31, 56123, Pisa, Italy
3. Heins F, Masselink J, Scherer JN, Lappe M. Adaptive changes to saccade amplitude and target localization do not require pre-saccadic target visibility. Sci Rep 2023; 13:8315. [PMID: 37221275] [DOI: 10.1038/s41598-023-35434-8]
Abstract
The accuracy of saccadic eye movements is maintained by saccadic adaptation, a learning mechanism that is proposed to rely on visual prediction error, i.e., a mismatch between the pre-saccadically predicted and post-saccadically experienced position of the saccade target. However, recent research indicates that saccadic adaptation might be driven by postdictive motor error, i.e., a retrospective estimation of the pre-saccadic target position based on the post-saccadic image. We investigated whether oculomotor behavior can be adapted based on post-saccadic target information alone. We measured eye movements and localization judgements as participants aimed saccades at an initially invisible target, which was always shown only after the saccade. Each such trial was followed by either a pre- or a post-saccadic localization trial. The target position was fixed for the first 100 trials of the experiment and, during the following 200 trials, successively shifted inward or outward. Saccade amplitude and the pre- and post-saccadic localization judgements adjusted to the changing target position. Our results suggest that post-saccadic information is sufficient to induce error-reducing adaptive changes in saccade amplitude and target localization, possibly reflecting continuous updating of the estimated pre-saccadic target location driven by postdictive motor error.
Affiliation(s)
- Frauke Heins
- Institute for Psychology, University of Münster, 48149, Münster, Germany.
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster, 48149, Münster, Germany.
- Jana Masselink
- Institute for Psychology, University of Münster, 48149, Münster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster, 48149, Münster, Germany
- Markus Lappe
- Institute for Psychology, University of Münster, 48149, Münster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster, 48149, Münster, Germany
4. Bansal S, Joiner WM. Transsaccadic visual perception of foveal compared to peripheral environmental changes. J Vis 2021; 21:12. [PMID: 34160578] [PMCID: PMC8237106] [DOI: 10.1167/jov.21.6.12]
Abstract
The maintenance of stable visual perception across eye movements is hypothesized to be aided by extraretinal information (e.g., corollary discharge [CD]). Previous studies have focused on the benefits of this information for perception at the fovea. However, there is little information on the extent to which CD benefits peripheral visual perception. Here we systematically examined the extent to which CD supports the ability to perceive transsaccadic changes at the fovea compared to the periphery. Human subjects made saccades to targets positioned at different amplitudes (4° or 8°) and directions (rightward or upward). On each trial there was a reference point located either at (fovea) or 4° away from (periphery) the target. During the saccade the target and reference disappeared and, after a blank period, the reference reappeared at a shifted location. Subjects reported the perceived shift direction, and we determined the perceptual threshold for detecting the shift and the estimate of the reference location. We also simulated detection and localization if subjects relied solely on the visual error of the shifted reference experienced after the saccade. The comparison of the reference location under these two conditions showed that overall the perceptual estimate was approximately 53% more accurate and 30% less variable than estimates based solely on visual information at the fovea. These values for peripheral shifts were consistently lower than those at the fovea: 34% more accurate and 9% less variable. Overall, the results suggest that CD information does support stable visual perception in the periphery, but is consistently less beneficial than at the fovea.
Affiliation(s)
- Sonia Bansal
- Department of Neuroscience, George Mason University, Fairfax, VA, USA; Maryland Psychiatric Research Center, Department of Psychiatry, University of Maryland School of Medicine, Baltimore, MD, USA
- Wilsaan M Joiner
- Department of Bioengineering, George Mason University, Fairfax, VA, USA; Department of Neurobiology, Physiology and Behavior, University of California Davis, Davis, CA, USA; Department of Neurology, University of California Davis, Davis, CA, USA
5. Masselink J, Lappe M. Visuomotor learning from postdictive motor error. eLife 2021; 10:64278. [PMID: 33687328] [PMCID: PMC8057815] [DOI: 10.7554/elife.64278]
Abstract
Sensorimotor learning adapts motor output to maintain movement accuracy. For saccadic eye movements, learning also alters space perception, suggesting a dissociation between the performed saccade and its internal representation derived from corollary discharge (CD). This is critical since learning is commonly believed to be driven by CD-based visual prediction error. We estimate the internal saccade representation through pre- and trans-saccadic target localization, showing that it decouples from the actual saccade during learning. We present a model that explains motor and perceptual changes by collective plasticity of spatial target percept, motor command, and a forward dynamics model that transforms CD from motor into visuospatial coordinates. We show that learning does not follow visual prediction error but instead a postdictive update of space after saccade landing. We conclude that trans-saccadic space perception guides motor learning via CD-based postdiction of motor error under the assumption of a stable world.
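The idea of learning from a postdictive motor error can be illustrated with a toy delta-rule simulation. This is a minimal sketch, not the authors' three-component model; the function name, learning rate, and trial numbers below are hypothetical.

```python
def adapt_saccade_gain(target_ecc, shift, n_trials=200, gain=1.0, lr=0.05):
    """Toy delta-rule sketch of saccadic gain adaptation.

    On each trial the saccade lands at gain * target_ecc; the target,
    seen only after landing, sits at target_ecc + shift.  The resulting
    postdictive motor error (post-saccadic target position minus landing
    position) drives a small gain update.
    """
    for _ in range(n_trials):
        landing = gain * target_ecc          # executed saccade amplitude (deg)
        post_target = target_ecc + shift     # target position seen after landing
        error = post_target - landing        # postdictive motor error (deg)
        gain += lr * error / target_ecc      # error-proportional gain update
    return gain

# Illustrative run: a 10 deg target shifted 2 deg inward after each saccade
# drives the gain toward 0.8, so the saccade lands on the shifted target.
print(round(adapt_saccade_gain(target_ecc=10.0, shift=-2.0), 3))  # 0.8
```

The fixed point is reached when gain * target_ecc equals target_ecc + shift, i.e. when the error term vanishes.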
Affiliation(s)
- Jana Masselink
- Institute for Psychology and Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Münster, Germany
- Markus Lappe
- Institute for Psychology and Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Münster, Germany
6
Abstract
Humans are able to integrate pre- and postsaccadic percepts of an object across saccades to maintain perceptual stability. Previous studies have used Maximum Likelihood Estimation (MLE) to determine that integration occurs in a near-optimal manner. Here, we compared three different models to investigate the mechanism of integration in more detail: an early noise model, where noise is added to the pre- and postsaccadic signals before integration occurs; a late-noise model, where noise is added to the integrated signal after integration occurs; and a temporal summation model, where integration benefits arise from the longer transsaccadic presentation duration compared to pre- and postsaccadic presentation only. We also measured spatiotemporal aspects of integration to determine whether integration can occur for very brief stimulus durations, across two hemifields, and in spatiotopic and retinotopic coordinates. Pre-, post-, and transsaccadic performance was measured at different stimulus presentation durations, both at the saccade target and a location where the pre- and postsaccadic stimuli were presented in different hemifields across the saccade. Results showed that for both within- and between-hemifields conditions, integration could occur when pre- and postsaccadic stimuli were presented only briefly, and that the pattern of integration followed an early noise model. Whereas integration occurred when the pre- and post-saccadic stimuli were presented in the same spatiotopic coordinates, there was no integration when they were presented in the same retinotopic coordinates. This contrast suggests that transsaccadic integration is limited by early, independent, sensory noise acting separately on pre- and postsaccadic signals.
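The Maximum Likelihood Estimation scheme referred to above weights each signal by its reliability (inverse variance), so the fused estimate is less variable than either signal alone. A minimal sketch with illustrative values (the numbers are not from the study):

```python
import math

def mle_integrate(mu_pre, sigma_pre, mu_post, sigma_post):
    """Reliability-weighted (MLE) fusion of pre- and postsaccadic estimates.

    Each signal contributes in proportion to its inverse variance; the
    fused standard deviation is smaller than either input's, which is the
    integration benefit the noise models above try to localize.
    """
    w_pre = 1.0 / sigma_pre ** 2
    w_post = 1.0 / sigma_post ** 2
    mu = (w_pre * mu_pre + w_post * mu_post) / (w_pre + w_post)
    sigma = math.sqrt(1.0 / (w_pre + w_post))
    return mu, sigma

# Illustrative: a noisy peripheral presaccadic estimate combined with a more
# precise foveal postsaccadic one.
mu, sigma = mle_integrate(mu_pre=10.0, sigma_pre=2.0, mu_post=12.0, sigma_post=1.0)
print(round(mu, 2), round(sigma, 2))  # 11.6 0.89
```

In these terms, an early-noise model adds noise to sigma_pre and sigma_post before this fusion step, whereas a late-noise model would instead inflate the fused sigma afterwards.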
Affiliation(s)
- Emma E M Stewart
- Experimental and Biological Psychology, University of Marburg, Marburg, Germany
- Alexander C Schütz
- Experimental and Biological Psychology, University of Marburg, Marburg, Germany
7. Yoshimoto S, Takeuchi T. Effect of spatial attention on spatiotopic visual motion perception. J Vis 2019; 19:4. [PMID: 30943532] [DOI: 10.1167/19.4.4]
Abstract
We almost never experience visual instability, despite the retinal image instability induced by eye movements. How the stability of visual perception is maintained through spatiotopic representation remains a matter of debate. The discrepancies in the findings of existing neuroscience studies of spatiotopic representation partly originate from differences in how attention is deployed to stimuli. In this study, we psychophysically examined whether spatial attention is needed to perceive spatiotopic visual motion. For this purpose, we used visual motion priming, a phenomenon in which a preceding priming stimulus modulates the perceived motion direction of an ambiguous test stimulus, such as a drifting grating that phase shifts by 180°. To examine the priming effect in different coordinates, participants performed a saccade soon after the offset of a primer. The participants were tasked with judging the direction of a subsequently presented test stimulus. To control the deployment of spatial attention, the participants performed a concurrent dot contrast-change detection task after the saccade. Positive priming was prominent in spatiotopic conditions, whereas negative priming was dominant in retinotopic conditions. At least a 600-ms interval between the priming and test stimuli was needed to observe positive priming in spatiotopic coordinates. When spatial attention was directed away from the location of the test stimulus, spatiotopic positive motion priming completely disappeared; meanwhile, spatiotopic positive motion priming at shorter interstimulus intervals was enhanced when spatial attention was directed to the location of the test stimulus. These results provide evidence that attentional resources are required to develop spatiotopic representations more quickly.
Affiliation(s)
- Sanae Yoshimoto
- Graduate School of Integrated Arts and Sciences, Hiroshima University, Hiroshima, Japan
- Tatsuto Takeuchi
- Department of Psychology, Japan Women's University, Kanagawa, Japan
8
Abstract
The perceptual consequences of eye movements are manifold: each large saccade is accompanied by a drop in sensitivity to low-frequency, luminance-contrast stimuli, impacting both conscious vision and involuntary responses, including pupillary constrictions. Saccades also produce transient distortions of space, time, and number, which cannot be attributed to the mere motion on the retinae. All these are signs that the visual system evokes active processes to predict and counteract the consequences of saccades. We propose that a key mechanism is the reorganization of spatiotemporal visual fields, which transiently increases the temporal and spatial uncertainty of visual representations just before and during saccades. On the one hand, this accounts for the spatiotemporal distortions of visual perception; on the other, it implements a mechanism for fusing pre- and postsaccadic stimuli. This, together with the active suppression of motion signals, ensures the stability and continuity of our visual experience.
Affiliation(s)
- Paola Binda
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, 56123 Pisa, Italy
- CNR Institute of Neuroscience, 56123 Pisa, Italy
- Maria Concetta Morrone
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, 56123 Pisa, Italy
- IRCCS Fondazione Stella-Maris, Calambrone, 56128 Pisa, Italy
9. Lappi O. The Racer's Mind-How Core Perceptual-Cognitive Expertise Is Reflected in Deliberate Practice Procedures in Professional Motorsport. Front Psychol 2018; 9:1294. [PMID: 30150949] [PMCID: PMC6099114] [DOI: 10.3389/fpsyg.2018.01294]
Abstract
The exceptional performance of elite practitioners in domains like sports or chess is not a reflection of just exceptional general cognitive ability or innate sensorimotor superiority. Decades of research on expert performance have consistently shown that experts in all fields go to extraordinary lengths to acquire their perceptual-cognitive and motor abilities. Deliberate Practice (DP) refers to special (sub)tasks that are designed to give immediate and accurate feedback and are performed repetitively with the explicit goal of improving performance. DP is generally agreed to be one of the key ingredients in the acquisition of expertise (though not necessarily the only one). Analyzing in detail the specific aspects of performance targeted by DP procedures may shed light on the underlying cognitive processes that support expert performance. Document analysis of professional coaching literature is one knowledge elicitation method that can be used in the early phases of inquiry to glean domain information about the skills experts in a field are required to develop. In this study this approach is applied to the domain of motor racing, specifically the perceptual-cognitive expertise enabling high-speed curve negotiation. A systematic review procedure is used to establish a corpus of texts covering the entire 60 years of professional motorsport textbooks. Descriptions of specific training procedures (that can be unambiguously interpreted as DP procedures) are extracted and then analyzed within the hierarchical task analysis framework of driver modeling. Hypotheses about the underlying cognitive processes are developed on the basis of this material. In the traditional psychological literature, steering and longitudinal control are typically considered "simple" reactive tracking tasks (model-free feedback control). The present findings suggest that, as in other forms of expertise, expert-level driving skill is in fact dependent on a vast body of knowledge and driven by top-down information. The knowledge elicitation in this study represents a first step toward a deeper psychological understanding of the complex cognitive underpinnings of expert performance in this domain.
Affiliation(s)
- Otto Lappi
- Cognitive Science, Department of Digital Humanities and Helsinki Centre for Digital Humanities (Heldig), University of Helsinki, Helsinki, Finland; TRUlab, University of Helsinki, Helsinki, Finland
10. Shioiri S, Kobayashi M, Matsumiya K, Kuriki I. Spatial representations of the viewer's surroundings. Sci Rep 2018; 8:7171. [PMID: 29740127] [PMCID: PMC5940847] [DOI: 10.1038/s41598-018-25433-5]
Abstract
Spatial representation of a viewer's surroundings, including regions outside the visual field, is crucial for moving around the three-dimensional world. To obtain such spatial representations, we predict that there is a learning process that integrates visual inputs from different viewpoints covering the full 360° of visual angle. We report here a learning effect for spatial layouts spread across six displays arranged to surround the viewer: visual search time shortens for surrounding layouts that are repeatedly used (a contextual cueing effect). The learning effect is found both in the time to reach the display containing the target and in the time to reach the target within that display, which indicates that there is an implicit learning effect on spatial configurations of stimulus elements across displays. Furthermore, since the learning effect is found even when layout and target are presented on displays located 120° apart, it must be based on a representation that covers visual information far outside the visual field.
Affiliation(s)
- Satoshi Shioiri
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan; Graduate School of Information Sciences, Tohoku University, Sendai, Japan
- Masayuki Kobayashi
- Graduate School of Information Sciences, Tohoku University, Sendai, Japan
- Kazumichi Matsumiya
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan; Graduate School of Information Sciences, Tohoku University, Sendai, Japan
- Ichiro Kuriki
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan; Graduate School of Information Sciences, Tohoku University, Sendai, Japan
11. Bansal S, Ford JM, Spering M. The function and failure of sensory predictions. Ann N Y Acad Sci 2018; 1426:199-220. [PMID: 29683518] [DOI: 10.1111/nyas.13686]
Abstract
Humans and other primates are equipped with neural mechanisms that allow them to automatically make predictions about future events, facilitating processing of expected sensations and actions. Prediction-driven control and monitoring of perceptual and motor acts are vital to normal cognitive functioning. This review provides an overview of corollary discharge mechanisms involved in predictions across sensory modalities and discusses consequences of predictive coding for cognition and behavior. Converging evidence now links impairments in corollary discharge mechanisms to neuropsychiatric symptoms such as hallucinations and delusions. We review studies supporting a prediction-failure hypothesis of perceptual and cognitive disturbances. We also outline neural correlates underlying prediction function and failure, highlighting similarities across the visual, auditory, and somatosensory systems. In linking basic psychophysical and psychophysiological evidence of visual, auditory, and somatosensory prediction failures to neuropsychiatric symptoms, our review furthers our understanding of disease mechanisms.
Affiliation(s)
- Sonia Bansal
- Maryland Psychiatric Research Center, University of Maryland, Catonsville, Maryland
- Judith M Ford
- University of California and Veterans Affairs Medical Center, San Francisco, California
- Miriam Spering
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
12. The reference frame for encoding and retention of motion depends on stimulus set size. Atten Percept Psychophys 2017; 79:888-910. [PMID: 28092077] [DOI: 10.3758/s13414-016-1258-5]
Abstract
The goal of this study was to investigate the reference frames used in perceptual encoding and storage of visual motion information. In our experiments, observers viewed multiple moving objects and reported the direction of motion of a randomly selected item. Using a vector-decomposition technique, we computed performance during smooth pursuit with respect to a spatiotopic (nonretinotopic) and to a retinotopic component and compared them with performance during fixation, which served as the baseline. For the stimulus encoding stage, which precedes memory, we found that the reference frame depends on the stimulus set size. For a single moving target, the spatiotopic reference frame had the most significant contribution with some additional contribution from the retinotopic reference frame. When the number of items increased (Set Sizes 3 to 7), the spatiotopic reference frame was able to account for the performance. Finally, when the number of items became larger than 7, the distinction between reference frames vanished. We interpret this finding as a switch to a more abstract nonmetric encoding of motion direction. We found that the retinotopic reference frame was not used in memory. Taken together with other studies, our results suggest that, whereas a retinotopic reference frame may be employed for controlling eye movements, perception and memory use primarily nonretinotopic reference frames. Furthermore, the use of nonretinotopic reference frames appears to be capacity limited. In the case of complex stimuli, the visual system may use perceptual grouping in order to simplify the complexity of stimuli or resort to a nonmetric abstract coding of motion information.
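The geometry behind the vector-decomposition logic is simple: during smooth pursuit, the retinotopic motion of an object is its spatiotopic (world) motion minus the eye's pursuit velocity. A minimal sketch with hypothetical velocity values (not the study's stimuli):

```python
def retinal_motion(world_velocity, eye_velocity):
    """Retinotopic motion vector (deg/s) of an object during pursuit.

    A purely spatiotopic observer would report world_velocity; a purely
    retinotopic observer would report this difference.  Comparing reported
    directions against the two predictions separates the reference-frame
    components of performance.
    """
    return tuple(w - e for w, e in zip(world_velocity, eye_velocity))

# Illustrative: an object moving 5 deg/s rightward while the eye pursues at
# 5 deg/s rightward is stationary on the retina but moving in the world.
print(retinal_motion((5.0, 0.0), (5.0, 0.0)))  # (0.0, 0.0)
```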
13. Aagten-Murphy D, Burr D. Adaptation to numerosity requires only brief exposures, and is determined by number of events, not exposure duration. J Vis 2017; 16:22. [PMID: 27580042] [PMCID: PMC5053365] [DOI: 10.1167/16.10.22]
Abstract
Exposure to a patch of dots produces a repulsive shift in the perceived numerosity of subsequently viewed dot patches. Although this is a remarkably strong effect, in which perceived numerosity can be shifted by up to 50% of the actual numerosity, very little is known about its temporal dynamics. Here we demonstrate a novel adaptation paradigm that allows numerosity adaptation to be rapidly induced at several distinct locations simultaneously. We show not only that this adaptation to numerosity is spatially specific, with different locations of the visual field able to be adapted to high, low, or neutral stimuli, but also that it can occur with only very brief periods of adaptation. Further investigation revealed that the adaptation effect was primarily driven by the number of unique adapting events and not by the duration of each event or the total duration of exposure to adapting stimuli. This event-based numerosity adaptation appears to fit well with statistical models of adaptation in which the dynamic adjustment of perceptual experiences, based on both previous experience of the stimuli and the current percept, acts to optimize the limited working range of perception. These results implicate a highly plastic mechanism for numerosity perception, dependent on the number of discrete adaptation events, and also demonstrate a quick and efficient paradigm suitable for examining the temporal properties of adaptation.
14. Lappi O, Rinkkala P, Pekkanen J. Systematic Observation of an Expert Driver's Gaze Strategy-An On-Road Case Study. Front Psychol 2017; 8:620. [PMID: 28496422] [PMCID: PMC5406466] [DOI: 10.3389/fpsyg.2017.00620]
Abstract
In this paper we present and qualitatively analyze an expert driver's gaze behavior in natural driving on a real road, with no specific experimental task or instruction. Previous eye tracking research on naturalistic tasks has revealed recurring patterns of gaze behavior that are surprisingly regular and repeatable. Lappi (2016) identified in the literature seven "qualitative laws of gaze behavior in the wild": recurring patterns that tend to go together, the more so the more naturalistic the setting, all of them expected in extended sequences of fully naturalistic behavior. However, no study to date has observed all of them in a single experiment. Here, we wanted to do just that: present observations supporting all the "laws" in a single behavioral sequence by a single subject. We discuss the laws in terms of unresolved issues in driver modeling and open challenges for experimental and theoretical development.
Affiliation(s)
- Otto Lappi
- Cognitive Science, University of Helsinki, Helsinki, Finland
- Paavo Rinkkala
- Traffic Research Unit, University of Helsinki, Helsinki, Finland
- Jami Pekkanen
- Cognitive Science, University of Helsinki, Helsinki, Finland; Traffic Research Unit, University of Helsinki, Helsinki, Finland
15. Mikellidou K, Turi M, Burr DC. Spatiotopic coding during dynamic head tilt. J Neurophysiol 2016; 117:808-817. [PMID: 27903636] [DOI: 10.1152/jn.00508.2016]
Abstract
Humans maintain a stable representation of the visual world effortlessly, despite constant movements of the eyes, head, and body, across multiple planes. Whereas visual stability in the face of saccadic eye movements has been intensely researched, fewer studies have investigated retinal image transformations induced by head movements, especially in the frontal plane. Unlike head rotations in the horizontal and sagittal planes, tilting the head in the frontal plane is only partially counteracted by torsional eye movements and consequently induces a distortion of the retinal image to which we seem to be completely oblivious. One possible mechanism aiding perceptual stability is an active reconstruction of a spatiotopic map of the visual world, anchored in allocentric coordinates. To explore this possibility, we measured the positional motion aftereffect (PMAE; the apparent change in position after adaptation to motion) with head tilts of ∼42° between adaptation and test (to dissociate retinal from allocentric coordinates). The aftereffect was shown to have both a retinotopic and spatiotopic component. When tested with unpatterned Gaussian blobs rather than sinusoidal grating stimuli, the retinotopic component was greatly reduced, whereas the spatiotopic component remained. The results suggest that perceptual stability may be maintained at least partially through mechanisms involving spatiotopic coding.

NEW & NOTEWORTHY Given that spatiotopic coding could play a key role in maintaining visual stability, we look for evidence of spatiotopic coding after retinal image transformations caused by head tilt. To this end, we measure the strength of the positional motion aftereffect (PMAE; previously shown to be largely spatiotopic after saccades) after large head tilts. We find that, as with eye movements, the spatial selectivity of the PMAE has a large spatiotopic component after head rotation.
Affiliation(s)
- Kyriaki Mikellidou
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy;
- Marco Turi
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy; Fondazione Stella Maris Mediterraneo, Chiaromonte, Potenza, Italy
- David C Burr
- Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, Florence, Italy; Neuroscience Institute, National Research Council (CNR), Pisa, Italy
16
Abstract
Saccadic remapping, a presaccadic increase in neural activity when a saccade is about to bring an object into a neuron's receptive field, may be crucial for our perception of a stable world. Studies of perception and saccadic remapping, like ours, focus on the presaccadic acquisition of information from the saccade target, with no direct reference to underlying physiology. While information is known to be acquired prior to a saccade, it is unclear whether object-selective or feature-specific information is remapped. To test this, we performed a series of psychophysical experiments in which we presented a peripheral, nonfoveated face as a presaccadic target. The target face disappeared at saccade onset. After making a saccade to the location of the peripheral target face (which was no longer visible), subjects misperceived the expression of a subsequent, foveally presented neutral face as being repelled away from the peripheral presaccadic face target. This effect was similar to a sequential shape contrast or negative aftereffect but required a saccade, because covert attention was not sufficient to generate the illusion. Additional experiments further revealed that inverting the faces disrupted the illusion, suggesting that presaccadic remapping is object-selective and not based on low-level features. Our results demonstrate that saccadic remapping can be an object-selective process, spatially tuned to the target of the saccade and distinct from covert attention in the absence of a saccade.
17
Marino AC, Mazer JA. Perisaccadic Updating of Visual Representations and Attentional States: Linking Behavior and Neurophysiology. Front Syst Neurosci 2016; 10:3. [PMID: 26903820 PMCID: PMC4743436 DOI: 10.3389/fnsys.2016.00003] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2015] [Accepted: 01/15/2016] [Indexed: 11/13/2022] Open
Abstract
During natural vision, saccadic eye movements lead to frequent retinal image changes that result in different neuronal subpopulations representing the same visual feature across fixations. Despite these potentially disruptive changes to the neural representation, our visual percept is remarkably stable. Visual receptive field remapping, characterized as an anticipatory shift in the position of a neuron's spatial receptive field immediately before saccades, has been proposed as one possible neural substrate for visual stability. Many of the specific properties of remapping, e.g., the exact direction of remapping relative to the saccade vector and the precise mechanisms by which remapping could instantiate stability, remain a matter of debate. Recent studies have also shown that visual attention, like perception itself, can be sustained across saccades, suggesting that the attentional control system can also compensate for eye movements. Classical remapping could have an attentional component, or there could be a distinct attentional analog of visual remapping. At this time we do not yet fully understand how the stability of attentional representations relates to perisaccadic receptive field shifts. In this review, we develop a vocabulary for discussing perisaccadic shifts in receptive field location and perisaccadic shifts of attentional focus, review and synthesize behavioral and neurophysiological studies of perisaccadic perception and perisaccadic attention, and identify open questions that remain to be experimentally addressed.
Affiliation(s)
- Alexandria C Marino
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT, USA; Medical Scientist Training Program, Yale University School of Medicine, New Haven, CT, USA
- James A Mazer
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT, USA; Department of Neurobiology, Yale University School of Medicine, New Haven, CT, USA; Department of Psychology, Yale University, New Haven, CT, USA
18
Mikellidou K, Cicchini GM, Thompson PG, Burr DC. The oblique effect is both allocentric and egocentric. J Vis 2015; 15:24. [PMID: 26129862 DOI: 10.1167/15.8.24] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Despite continuous movements of the head, humans maintain a stable representation of the visual world, which seems to remain always upright. The mechanisms behind this stability are largely unknown. To gain some insight on how head tilt affects visual perception, we investigate whether a well-known orientation-dependent visual phenomenon, the oblique effect-superior performance for stimuli at cardinal orientations (0° and 90°) compared with oblique orientations (45°)-is anchored in egocentric or allocentric coordinates. To this aim, we measured orientation discrimination thresholds at various orientations for different head positions both in body upright and in supine positions. We report that, in the body upright position, the oblique effect remains anchored in allocentric coordinates irrespective of head position. When lying supine, gravitational effects in the plane orthogonal to gravity are discounted. Under these conditions, the oblique effect was less marked than when upright, and anchored in egocentric coordinates. The results are well explained by a simple "compulsory fusion" model in which the head-based and the gravity-based signals are combined with different weightings (30% and 70%, respectively), even when this leads to reduced sensitivity in orientation discrimination.
19
Abstract
Alfred L. Yarbus was among the first to demonstrate that eye movements actively serve our perceptual and cognitive goals, a crucial recognition that is at the heart of today's research on active vision. He realized that it is not the changes in fixation themselves that stick in memory but the shifts of attention that accompany them. Indeed, oculomotor control is tightly coupled to functions as fundamental as attention and memory. This tight relationship offers an intriguing perspective on transsaccadic perceptual continuity, which we experience despite the fact that saccades cause rapid shifts of the image across the retina. Here, I elaborate this perspective based on a series of psychophysical findings. First, saccade preparation shapes the visual system's priorities; it enhances visual performance and perceived stimulus intensity at the targets of the eye movement. Second, before saccades, the deployment of visual attention is updated, predictively facilitating perception at those retinal locations that will be relevant once the eyes land. Third, saccadic eye movements strongly affect the contents of visual memory, highlighting their crucial role in determining which parts of a scene we remember or forget. Together, these results provide insights on how attentional processes enable the visual system to cope with the retinal consequences of saccades.
Affiliation(s)
- Martin Rolfs
- Department of Psychology, Humboldt Universität zu Berlin, Germany; Bernstein Center for Computational Neuroscience, Humboldt Universität zu Berlin, Germany
20
Transsaccadic processing: stability, integration, and the potential role of remapping. Atten Percept Psychophys 2015; 77:3-27. [PMID: 25380979 DOI: 10.3758/s13414-014-0751-y] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
While our frequent saccades allow us to sample the complex visual environment in a highly efficient manner, they also raise certain challenges for interpreting and acting upon visual input. In the present, selective review, we discuss key findings from the domains of cognitive psychology, visual perception, and neuroscience concerning two such challenges: (1) maintaining the phenomenal experience of visual stability despite our rapidly shifting gaze, and (2) integrating visual information across discrete fixations. In the first two sections of the article, we focus primarily on behavioral findings. Next, we examine the possibility that a neural phenomenon known as predictive remapping may provide an explanation for aspects of transsaccadic processing. In this section of the article, we delineate and critically evaluate multiple proposals about the potential role of predictive remapping in light of both theoretical principles and empirical findings.
21
Abstract
In strabismus, potentially either eye can inform the brain about the location of a target so that an accurate saccade can be made. Sixteen human subjects with alternating exotropia were tested dichoptically while viewing stimuli on a tangent screen. Each trial began with a fixation cross visible to only one eye. After the subject fixated the cross, a peripheral target visible to only one eye flashed briefly. The subject's task was to look at it. As a rule, the eye to which the target was presented was the eye that acquired the target. However, when stimuli were presented in the far nasal visual field, subjects occasionally performed a "crossover" saccade by placing the other eye on the target. This strategy avoided the need to make a large adducting saccade. In such cases, information about target location was obtained by one eye and used to program a saccade for the other eye, with a corresponding latency increase. In 10/16 subjects, targets were presented on some trials to both eyes. Binocular sensory maps were also compiled to delineate the portions of the visual scene perceived with each eye. These maps were compared with subjects' pattern of eye choice for target acquisition. There was a correspondence between suppression scotoma maps and the eye used to acquire peripheral targets. In other words, targets were fixated by the eye used to perceive them. These studies reveal how patients with alternating strabismus, despite eye misalignment, manage to localize and capture visual targets in their environment.
22
Abstract
Visual objects presented around the time of saccadic eye movements are strongly mislocalized towards the saccadic target, a phenomenon known as "saccadic compression." Here we show that perisaccadic compression is modulated by the presence of a visual saccadic target. When subjects saccaded to the center of the screen with no visible target, perisaccadic localization was more veridical than when tested with a target. Presenting a saccadic target sometime before saccade initiation was sufficient to induce mislocalization. When we systematically varied the onset of the saccade target, we found that it had to be presented around 100 ms before saccade execution to cause strong mislocalization: saccadic targets presented after this time caused progressively less mislocalization. When subjects made a saccade to screen center with a reference object placed at various positions, mislocalization was focused towards the position of the reference object. The results suggest that saccadic compression is a signature of a mechanism attempting to match objects seen before the saccade with those seen after.
Affiliation(s)
- Eckart Zimmermann
- Cognitive Neuroscience (INM3), Institute of Neuroscience and Medicine, Research Centre Juelich, Juelich, Germany
- M Concetta Morrone
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy; Scientific Institute Stella Maris (IRCCS), Pisa, Italy
- David C Burr
- Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, Florence, Italy; Institute of Neuroscience CNR, Pisa, Italy
23
Jiang YV, Swallow KM. Changing viewer perspectives reveals constraints to implicit visual statistical learning. J Vis 2014; 14:14.12.3. [PMID: 25294640 DOI: 10.1167/14.12.3] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Statistical learning-learning environmental regularities to guide behavior-likely plays an important role in natural human behavior. One potential use is in search for valuable items. Because visual statistical learning can be acquired quickly and without intention or awareness, it could optimize search and thereby conserve energy. For this to be true, however, visual statistical learning needs to be viewpoint invariant, facilitating search even when people walk around. To test whether implicit visual statistical learning of spatial information is viewpoint independent, we asked participants to perform a visual search task from variable locations around a monitor placed flat on a stand. Unbeknownst to participants, the target was more often in some locations than others. In contrast to previous research on stationary observers, visual statistical learning failed to produce a search advantage for targets in high-probable regions that were stable within the environment but variable relative to the viewer. This failure was observed even when conditions for spatial updating were optimized. However, learning was successful when the rich locations were referenced relative to the viewer. We conclude that changing viewer perspective disrupts implicit learning of the target's location probability. This form of learning shows limited integration with spatial updating or spatiotopic representations.
Affiliation(s)
- Yuhong V Jiang
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA
- Khena M Swallow
- Department of Psychology, Cornell University, Ithaca, NY, USA
24
Zimmermann E, Morrone MC, Burr DC. Buildup of spatial information over time and across eye-movements. Behav Brain Res 2014; 275:281-7. [PMID: 25224817 DOI: 10.1016/j.bbr.2014.09.013] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2014] [Revised: 09/04/2014] [Accepted: 09/07/2014] [Indexed: 11/27/2022]
Abstract
To interact rapidly and effectively with our environment, our brain needs access to a neural representation of the spatial layout of the external world. However, the construction of such a map poses major challenges, as the images on our retinae depend on where the eyes are looking, and shift each time we move our eyes, head and body to explore the world. Research from many laboratories including our own suggests that the visual system does compute spatial maps that are anchored to real-world coordinates. However, the construction of these maps takes time (up to 500 ms) and also attentional resources. We discuss research investigating how retinotopic reference frames are transformed into spatiotopic reference frames, and how this transformation takes time to complete. These results have implications for theories about visual space coordinates and particularly for the current debate about the existence of spatiotopic representations.
Affiliation(s)
- Eckart Zimmermann
- Psychology Department, University of Florence, Italy; Neuroscience Institute, National Research Council, Pisa, Italy
- M Concetta Morrone
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, via San Zeno 31, 56123 Pisa, Italy; Scientific Institute Stella Maris (IRCCS), viale del Tirreno 331, 56018 Calambrone, Pisa, Italy
- David C Burr
- Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, via San Salvi 12, 50135 Florence, Italy; Institute of Neuroscience CNR, via Moruzzi 1, 56124 Pisa, Italy
25
MacInnes WJ, Hunt AR. Attentional load interferes with target localization across saccades. Exp Brain Res 2014; 232:3737-48. [PMID: 25138910 DOI: 10.1007/s00221-014-4062-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2013] [Accepted: 08/01/2014] [Indexed: 11/30/2022]
Abstract
The retinal positions of objects in the world change with each eye movement, but we seem to have little trouble keeping track of spatial information from one fixation to the next. We examined the role of attention in trans-saccadic localization by asking participants to localize targets while performing an attentionally demanding secondary task. In the first experiment, attentional load decreased localization precision for a remembered target, but only when a saccade intervened between target presentation and report. We then repeated the experiment and included a salient landmark that shifted on half the trials. The shifting landmark had a larger effect on localization under high load, indicating that observers rely more on landmarks to make localization judgments under high than under low attentional load. The results suggest that attention facilitates trans-saccadic localization judgments based on spatial updating of gaze-centered coordinates when visual landmarks are not available. The availability of reliable landmarks (present in most natural circumstances) can compensate for the effects of scarce attentional resources on trans-saccadic localization.
Affiliation(s)
- W Joseph MacInnes
- School of Psychology, University of Aberdeen, Aberdeen, AB24 3FX, UK
26
Jiang YV, Won BY, Swallow KM, Mussack DM. Spatial reference frame of attention in a large outdoor environment. J Exp Psychol Hum Percept Perform 2014; 40:1346-57. [PMID: 24842066 DOI: 10.1037/a0036779] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
A central question about spatial attention is whether it is referenced relative to the external environment or to the viewer. This question has received great interest in recent psychological and neuroscience research, with many, but not all, studies finding evidence for a viewer-centered representation. However, these previous findings were confined to computer-based tasks that involved stationary viewers. Because natural search behaviors differ from computer-based tasks in viewer mobility and spatial scale, it is important to understand how spatial attention is coded in the natural environment. To this end, we created an outdoor visual search task in which participants searched a large (690 square ft), concrete, outdoor space to report which side of a coin on the ground faced up. They began search in the middle of the space and were free to move around. Attentional cuing by statistical learning was examined by placing the coin in 1 quadrant of the search space on 50% of the trials. As in computer-based tasks, participants learned and used these regularities to guide search. However, cuing could be referenced to either the environment or the viewer. The spatial reference frame of attention shows greater flexibility in the natural environment than previously found in the lab.
27
Abstract
One of the more enduring mysteries of neuroscience is how the visual system constructs robust maps of the world that remain stable in the face of frequent eye movements. Here we show that encoding the position of objects in external space is a relatively slow process, building up over hundreds of milliseconds. We display targets to which human subjects saccade after a variable preview duration. As they saccade, the target is displaced leftwards or rightwards, and subjects report the displacement direction. When subjects saccade to targets without delay, sensitivity is poor; but if the target is viewed for 300-500 ms before saccading, sensitivity is similar to that during fixation with a strong visual mask to dampen transients. These results suggest that the poor displacement thresholds usually observed in the "saccadic suppression of displacement" paradigm are a result of the fact that the target has had insufficient time to be encoded in memory, and not a result of the action of special mechanisms conferring saccadic stability. Under more natural conditions, trans-saccadic displacement detection is as good as in fixation, when the displacement transients are masked.
28
Wang J, Mathalon DH, Roach BJ, Reilly J, Keedy SK, Sweeney JA, Ford JM. Action planning and predictive coding when speaking. Neuroimage 2014; 91:91-8. [PMID: 24423729 DOI: 10.1016/j.neuroimage.2014.01.003] [Citation(s) in RCA: 58] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2013] [Revised: 11/23/2013] [Accepted: 01/03/2014] [Indexed: 12/20/2022] Open
Abstract
Across the animal kingdom, sensations resulting from an animal's own actions are processed differently from sensations resulting from external sources, with self-generated sensations being suppressed. A forward model has been proposed to explain this process across sensorimotor domains. During vocalization, reduced processing of one's own speech is believed to result from a comparison of speech sounds to corollary discharges of intended speech production generated from efference copies of commands to speak. Until now, anatomical and functional evidence validating this model in humans has been indirect. Using EEG with anatomical MRI to facilitate source localization, we demonstrate that inferior frontal gyrus activity during the 300ms before speaking was associated with suppressed processing of speech sounds in auditory cortex around 100ms after speech onset (N1). These findings indicate that an efference copy from speech areas in prefrontal cortex is transmitted to auditory cortex, where it is used to suppress processing of anticipated speech sounds. About 100ms after N1, a subsequent auditory cortical component (P2) was not suppressed during talking. The combined N1 and P2 effects suggest that although sensory processing is suppressed as reflected in N1, perceptual gaps may be filled as reflected in the lack of P2 suppression, explaining the discrepancy between sensory suppression and preserved sensory experiences. These findings, coupled with the coherence between relevant brain regions before and during speech, provide new mechanistic understanding of the complex interactions between action planning and sensory processing that provide for differentiated tagging and monitoring of one's own speech, processes disrupted in neuropsychiatric disorders.
Affiliation(s)
- Jun Wang
- Department of Psychiatry, University of Texas Southwestern, Dallas, TX 75390, USA
- Daniel H Mathalon
- San Francisco VA Medical Center, San Francisco, CA 94121, USA; Department of Psychiatry, University of California, San Francisco, CA 94121, USA
- Brian J Roach
- San Francisco VA Medical Center, San Francisco, CA 94121, USA
- James Reilly
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL 60611, USA
- Sarah K Keedy
- Department of Psychiatry and Behavioral Neuroscience, University of Chicago, Chicago, IL 60637, USA
- John A Sweeney
- Department of Psychiatry, University of Texas Southwestern, Dallas, TX 75390, USA
- Judith M Ford
- San Francisco VA Medical Center, San Francisco, CA 94121, USA; Department of Psychiatry, University of California, San Francisco, CA 94121, USA
29
Compression and suppression of shifting receptive field activity in frontal eye field neurons. J Neurosci 2013; 33:18259-69. [PMID: 24227735 DOI: 10.1523/jneurosci.2964-13.2013] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Before each saccade, neurons in frontal eye field anticipate the impending eye movement by showing sensitivity to stimuli appearing where the neuron's receptive field will be at the end of the saccade, referred to as the future field (FF) of the neuron. We explored the time course of this anticipatory activity in monkeys by briefly flashing stimuli in the FF at different times before saccades. Different neurons showed substantial variation in FF time course, but two salient observations emerged. First, when we compared the time span of stimulus probes before the saccade to the time span of FF activity, we found a striking temporal compression of FF activity, similar to compression seen for perisaccadic stimuli in human psychophysics. Second, neurons with distinct FF activity also showed suppression at the time of the saccade. The increase in FF activity and the decrease with suppression were temporally independent, making the patterns of activity difficult to separate. We resolved this by constructing a simple model with values for the start, peak, and duration of FF activity and suppression for each neuron. The model revealed the different time courses of FF sensitivity and suppression, suggesting that information about the impending saccade triggering suppression reaches the frontal eye field through a different pathway, or a different mechanism, than that triggering FF activity. Recognition of the variations in the time course of anticipatory FF activity provides critical information on its function and its relation to human visual perception at the time of the saccade.
30
Ninaus M, Kober SE, Witte M, Koschutnig K, Stangl M, Neuper C, Wood G. Neural substrates of cognitive control under the belief of getting neurofeedback training. Front Hum Neurosci 2013; 7:914. [PMID: 24421765 PMCID: PMC3872730 DOI: 10.3389/fnhum.2013.00914] [Citation(s) in RCA: 72] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2013] [Accepted: 12/13/2013] [Indexed: 11/22/2022] Open
Abstract
Learning to modulate one's own brain activity is the foundation of neurofeedback (NF) applications. Besides the neural networks directly involved in the generation and modulation of the neurophysiological parameter being specifically trained, more general determinants of NF efficacy such as self-referential processes and cognitive control have been frequently disregarded. Nonetheless, deeper insight into these cognitive mechanisms and their neuronal underpinnings sheds light on various open NF-related questions concerning individual differences, brain-computer interface (BCI) illiteracy, as well as a more general model of NF learning. In this context, we investigated the neuronal substrate of these more general regulatory mechanisms that are engaged when participants believe that they are receiving NF. Twenty healthy participants (40-63 years, 10 female) performed a sham NF paradigm during fMRI scanning. All participants were novices to NF experiments and were instructed to voluntarily modulate their own brain activity based on a visual display of moving color bars. However, the bar depicted a recording and not the actual brain activity of participants. Reports collected at the end of the experiment indicate that participants were unaware of the sham feedback. In comparison to a passive watching condition, bilateral insula, anterior cingulate cortex, supplementary motor, and dorsomedial and lateral prefrontal areas were activated when participants actively tried to control the bar. In contrast, when merely watching moving bars, increased activation in the left angular gyrus was observed. These results show that the intention to control a moving bar is sufficient to engage a broad frontoparietal and cingulo-opercular network involved in cognitive control. The results of the present study indicate that tasks such as those generally employed in NF training recruit the neuronal correlates of cognitive control even when only sham NF is presented.
Affiliation(s)
- Manuel Ninaus
- Department of Psychology, University of Graz, Graz, Austria
- Matthias Witte
- Department of Psychology, University of Graz, Graz, Austria
- Matthias Stangl
- Aging and Cognition Research Group, German Center for Neurodegenerative Diseases (DZNE), Magdeburg, Germany
- Christa Neuper
- Department of Psychology, University of Graz, Graz, Austria; Laboratory of Brain-Computer Interfaces, Institute for Knowledge Discovery, University of Technology Graz, Graz, Austria
- Guilherme Wood
- Department of Psychology, University of Graz, Graz, Austria