51
Motion Extrapolation for Eye Movements Predicts Perceived Motion-Induced Position Shifts. J Neurosci 2018; 38:8243-8250. PMID: 30104339. DOI: 10.1523/jneurosci.0736-18.2018.
Abstract
Transmission delays in the nervous system pose challenges for the accurate localization of moving objects as the brain must rely on outdated information to determine their position in space. Acting effectively in the present requires that the brain compensates not only for the time lost in the transmission and processing of sensory information, but also for the expected time that will be spent preparing and executing motor programs. Failure to account for these delays will result in the mislocalization and mistargeting of moving objects. In the visuomotor system, where sensory and motor processes are tightly coupled, this predicts that the perceived position of an object should be related to the latency of saccadic eye movements aimed at it. Here we use the flash-grab effect, a mislocalization of briefly flashed stimuli in the direction of a reversing moving background, to induce shifts of perceived visual position in human observers (male and female). We find a linear relationship between saccade latency and perceived position shift, challenging the classic dissociation between "vision for action" and "vision for perception" for tasks of this kind and showing that oculomotor position representations are either shared with or tightly coupled to perceptual position representations. Altogether, we show that the visual system uses both the spatial and temporal characteristics of an upcoming saccade to localize visual objects for both action and perception.
SIGNIFICANCE STATEMENT Accurately localizing moving objects is a computational challenge for the brain due to the inevitable delays that result from neural transmission. To solve this, the brain might implement motion extrapolation, predicting where an object ought to be at the present moment. Here, we use the flash-grab effect to induce perceptual position shifts and show that the latency of imminent saccades predicts the perceived position of the objects they target. This counterintuitive finding is important because it not only shows that motion extrapolation mechanisms indeed work to reduce the behavioral impact of neural transmission delays in the human brain, but also that these mechanisms are closely matched in the perceptual and oculomotor systems.
52
Ueda H, Abekawa N, Gomi H. The faster you decide, the more accurate localization is possible: Position representation of "curveball illusion" in perception and eye movements. PLoS One 2018; 13:e0201610. PMID: 30080898. PMCID: PMC6078290. DOI: 10.1371/journal.pone.0201610.
Abstract
When the inside texture of a moving object moves, the perceived position of the object is often distorted toward the direction of the texture's motion (motion-induced position shift), and such perceptual distortion accumulates while the object is watched, causing what is known as the curveball illusion. In a recent study, however, the accumulation of the position error was not observed in saccadic eye movements. Here, we examined whether the position of the illusory object is represented independently in the perceptual and saccadic systems. In the experiments, the stimulus of the curveball illusion was adopted to examine the temporal change in the position representation for saccadic eye movements and for perception by varying the elapsed time from the input of visual information to saccade onset and perceptual judgment, respectively. The results showed that the temporal accumulation of the motion-induced position shift is observed not only in perception but also in saccadic eye movements. In the saccade tasks, the landing positions of saccades gradually shifted to the illusory perceived position as the elapsed time from the target offset to the saccade "go" signal increased. Furthermore, in the perception task, shortening the time between the target offset and the perceptual judgment reduced the size of the illusion effect. Therefore, these results argue against the idea of dissociation between saccadic and perceptual localization of a moving object suggested in the previous study, in which saccades were measured under time pressure while perceptual responses were measured without time constraint. Instead, the similar temporal trends of these effects imply a common or similar target representation for perception and eye movements which dynamically changes over the course of evidence accumulation.
Affiliation(s)
- Hiroshi Ueda
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Co., Kanagawa, Japan
- Naotoshi Abekawa
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Co., Kanagawa, Japan
- Hiroaki Gomi
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Co., Kanagawa, Japan
53
Liu S, Tse PU, Cavanagh P. Meridian interference reveals neural locus of motion-induced position shifts. J Neurophysiol 2018. PMID: 29513148. DOI: 10.1152/jn.00876.2017.
Abstract
When a Gabor patch moves along a path in one direction while its internal texture drifts orthogonally to this path, it can appear to deviate from its physical path by 45° or more. This double-drift illusion is different from other motion-induced position shift effects in several ways: it has an integration period of over a second; the illusory displacement that accumulates over a second or more is orthogonal to rather than along the motion path; the perceptual deviations are much larger; and they have little or no effect on eye movements to the target. In this study we investigated the underlying neural mechanisms of the motion integration and position processing for this double-drift stimulus by testing possible anatomical constraints on its magnitude. We found that the illusion was reduced at the vertical and horizontal meridians when the perceptual path would cross or be driven toward the meridian, but not at other locations or other motion directions. The disruption of the accumulation of the position error at both the horizontal and vertical meridians suggests a central role of quadrantic areas in the generation of this type of motion-induced position shift. NEW & NOTEWORTHY The remarkably strong double-drift illusion is disrupted at both the vertical and horizontal meridians. We propose that this finding is the behavioral consequence of the anatomical gaps at both meridians, suggesting that neural areas with quadrantic representations (e.g., V2, V3) are the initial locus of this motion-induced position shift. This result rules out V1 as the source of the illusion because it has an anatomical break only at the vertical meridian.
Affiliation(s)
- Sirui Liu
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, New Hampshire
- Peter U Tse
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, New Hampshire
- Patrick Cavanagh
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, New Hampshire; Department of Psychology, Glendon College, Toronto, Ontario, Canada
54
Motion and position shifts induced by the double-drift stimulus are unaffected by attentional load. Atten Percept Psychophys 2018; 80:884-893. DOI: 10.3758/s13414-018-1492-0.
55
Ma Z, Watamaniuk SNJ, Heinen SJ. Illusory motion reveals velocity matching, not foveation, drives smooth pursuit of large objects. J Vis 2017; 17:20. PMID: 29090315. PMCID: PMC5665499. DOI: 10.1167/17.12.20.
Abstract
When small objects move in a scene, we keep them foveated with smooth pursuit eye movements. Although large objects such as people and animals are common, it is nonetheless unknown how we pursue them since they cannot be foveated. It might be that the brain calculates an object's centroid, and then centers the eyes on it during pursuit as a foveation mechanism might. Alternatively, the brain merely matches the velocity by motion integration. We test these alternatives with an illusory motion stimulus that translates at a speed different from its retinal motion. The stimulus was a Gabor array that translated at a fixed velocity, with component Gabors that drifted with motion consistent or inconsistent with the translation. Velocity matching predicts different pursuit behaviors across drift conditions, while centroid matching predicts no difference. We also tested whether pursuit can segregate and ignore irrelevant local drifts when motion and centroid information are consistent by surrounding the Gabors with solid frames. Finally, observers judged the global translational speed of the Gabors to determine whether smooth pursuit and motion perception share mechanisms. We found that consistent Gabor motion enhanced pursuit gain while inconsistent, opposite motion diminished it, drawing the eyes away from the center of the stimulus and supporting a motion-based pursuit drive. Catch-up saccades tended to counter the position offset, directing the eyes opposite to the deviation caused by the pursuit gain change. Surrounding the Gabors with visible frames canceled both the gain increase and the compensatory saccades. Perceived speed was modulated analogous to pursuit gain. The results suggest that smooth pursuit of large stimuli depends on the magnitude of integrated retinal motion information, not its retinal location, and that the position system might be unnecessary for generating smooth velocity to large pursuit targets.
Affiliation(s)
- Zheng Ma
- Smith-Kettlewell Eye Research Institute, San Francisco, CA, USA
- Stephen J Heinen
- The Smith-Kettlewell Eye Research Institute, San Francisco, CA, USA
56
Comparing eye movements during position tracking and identity tracking: No evidence for separate systems. Atten Percept Psychophys 2017; 80:453-460. PMID: 29159571. DOI: 10.3758/s13414-017-1447-x.
Abstract
There is an ongoing debate as to whether people track multiple moving objects in a serial fashion or with a parallel mechanism. One recent study compared eye movements when observers tracked identical objects (Multiple Object Tracking-MOT task) versus when they tracked the identities of different objects (Multiple Identity Tracking-MIT task). Distinct eye-movement patterns were found and attributed to two separate tracking systems. However, the same results could be caused by differences in the stimuli viewed during tracking. In the present study, object identities in the MIT task were invisible during tracking, so observers performed MOT and MIT tasks with identical stimuli. Observers were able to track either position or identity depending on the task. There was no difference in eye movements between position tracking and identity tracking. This result suggests that, while observers can use different eye-movement strategies in MOT and MIT, it is not necessary.
57
Massendari D, Lisi M, Collins T, Cavanagh P. Memory-guided saccades show effect of a perceptual illusion whereas visually guided saccades do not. J Neurophysiol 2017; 119:62-72. PMID: 28954892. DOI: 10.1152/jn.00229.2017.
Abstract
The double-drift stimulus (a drifting Gabor with orthogonal internal motion) generates a large discrepancy between its physical and perceived path. Surprisingly, saccades directed to the double-drift stimulus land along the physical, and not perceived, path (Lisi M, Cavanagh P. Curr Biol 25: 2535-2540, 2015). We asked whether memory-guided saccades exhibited the same dissociation from perception. Participants were asked to keep their gaze centered on a fixation dot while the double-drift stimulus moved back and forth on a linear path in the periphery. The offset of the fixation was the go signal to make a saccade to the target. In the visually guided saccade condition, the Gabor kept moving on its trajectory after the go signal but was removed once the saccade began. In the memory conditions, the Gabor disappeared before or at the same time as the go-signal (0- to 1,000-ms delay) and participants made a saccade to its remembered location. The results showed that visually guided saccades again targeted the physical rather than the perceived location. However, memory saccades, even with 0-ms delay, had landing positions shifted toward the perceived location. Our result shows that memory- and visually guided saccades are based on different spatial information. NEW & NOTEWORTHY We compared the effect of a perceptual illusion on two types of saccades, visually guided vs. memory-guided saccades, and found that whereas visually guided saccades were almost unaffected by the perceptual illusion, memory-guided saccades exhibited a strong effect of the illusion. Our result is the first evidence in the literature to show that visually and memory-guided saccades use different spatial representations.
Affiliation(s)
- Delphine Massendari
- Laboratoire Psychologie de la Perception, CNRS UMR 8248, Université Paris Descartes, Paris, France
- Matteo Lisi
- Centre for Applied Vision Research, City University of London, London, United Kingdom
- Thérèse Collins
- Laboratoire Psychologie de la Perception, CNRS UMR 8248, Université Paris Descartes, Paris, France
- Patrick Cavanagh
- Laboratoire Psychologie de la Perception, CNRS UMR 8248, Université Paris Descartes, Paris, France; Department of Psychological and Brain Sciences, Dartmouth College, Hanover, New Hampshire
58
Goffart L, Bourrelly C, Quinet J. Synchronizing the tracking eye movements with the motion of a visual target: Basic neural processes. Prog Brain Res 2017; 236:243-268. PMID: 29157414. DOI: 10.1016/bs.pbr.2017.07.009.
Abstract
In primates, the appearance of an object moving in the peripheral visual field elicits an interceptive saccade that brings the target image onto the foveae. This foveation is then maintained more or less efficiently by slow pursuit eye movements and subsequent catch-up saccades. Sometimes, the tracking is such that the gaze direction looks spatiotemporally locked onto the moving object. Such a spatial synchronism is quite spectacular when one considers that the target-related signals are transmitted to the motor neurons through multiple parallel channels connecting separate neural populations with different conduction speeds and delays. Because of the delays between the changes of retinal activity and the changes of extraocular muscle tension, the maintenance of the target image onto the fovea cannot be driven by the current retinal signals as they correspond to past positions of the target. Yet, the spatiotemporal coincidence observed during pursuit suggests that the oculomotor system is driven by a command estimating continuously the current location of the target, i.e., where it is here and now. This inference is also supported by experimental perturbation studies: when the trajectory of an interceptive saccade is experimentally perturbed, a correction saccade is produced in flight or after a short delay, and brings the gaze next to the location where unperturbed saccades would have landed at about the same time, in the absence of visual feedback. In this chapter, we explain how such correction can be supported by previous visual signals without assuming "predictive" signals encoding future target locations. We also describe the basic neural processes which gradually yield the synchronization of eye movements with the target motion. When the process fails, the gaze is driven by signals related to past locations of the target, not by estimates of its upcoming locations, and a catch-up saccade is made to reinitiate the synchronization.
Affiliation(s)
- Laurent Goffart
- Institut de Neurosciences de la Timone, UMR 7289, Centre National de la Recherche Scientifique, Aix-Marseille Université, Marseille, France
- Clara Bourrelly
- Institut de Neurosciences de la Timone, UMR 7289, Centre National de la Recherche Scientifique, Aix-Marseille Université, Marseille, France; Laboratoire Psychologie de la Perception, UMR 8242, Centre National de la Recherche Scientifique, Université Paris Descartes, Paris, France
- Julie Quinet
- Institut de Neurosciences de la Timone, UMR 7289, Centre National de la Recherche Scientifique, Aix-Marseille Université, Marseille, France
59
Abstract
A spinning, moving object, such as a football with a surface texture, combines motion signals from rotation and translation. The interaction between these two kinds of signal was studied psychophysically with moving, circular clouds of dots, in which the dots themselves could also move within the cloud. If the cloud moved near-vertically downwards but the dots within it moved obliquely, the apparent path of the cloud was attracted to that of the dots, as previously demonstrated with moving Gabor patches (Tse & Hsieh, Vision Research, 46, 3881-3885, 2006; Lisi & Cavanagh, Current Biology, 25, 2535-2540, 2015). This attractive effect was enhanced in parafoveal viewing and by not presenting a frame around the dots. A larger effect in the opposite direction (repulsion) was found for the perceived direction of the dots when they moved near-vertically and the cloud containing them moved obliquely. These results are discussed in relation to Gestalt principles of perceived relative motion and, more recently, Bayes-inspired accounts of the interaction between local and global motion.
60
Variations in crowding, saccadic precision, and spatial localization reveal the shared topology of spatial vision. Proc Natl Acad Sci U S A 2017; 114:E3573-E3582. PMID: 28396415. DOI: 10.1073/pnas.1615504114.
Abstract
Visual sensitivity varies across the visual field in several characteristic ways. For example, sensitivity declines sharply in peripheral (vs. foveal) vision and is typically worse in the upper (vs. lower) visual field. These variations can affect processes ranging from acuity and crowding (the deleterious effect of clutter on object recognition) to the precision of saccadic eye movements. Here we examine whether these variations can be attributed to a common source within the visual system. We first compared the size of crowding zones with the precision of saccades using an oriented clock target and two adjacent flanker elements. We report that both saccade precision and crowded-target reports vary idiosyncratically across the visual field with a strong correlation across tasks for all participants. Nevertheless, both group-level and trial-by-trial analyses reveal dissociations that exclude a common representation for the two processes. We therefore compared crowding with two measures of spatial localization: Landolt-C gap resolution and three-dot bisection. Here we observe similar idiosyncratic variations with strong interparticipant correlations across tasks despite considerably finer precision. Hierarchical regression analyses further show that variations in spatial precision account for much of the variation in crowding, including the correlation between crowding and saccades. Altogether, we demonstrate that crowding, spatial localization, and saccadic precision show clear dissociations, indicative of independent spatial representations, whilst nonetheless sharing idiosyncratic variations in spatial topology. We propose that these topological idiosyncrasies are established early in the visual system and inherited throughout later stages to affect a range of higher-level representations.
61
Errors in interception can be predicted from errors in perception. Cortex 2017; 98:49-59. PMID: 28454717. DOI: 10.1016/j.cortex.2017.03.006.
Abstract
It has been hypothesised that our actions are less susceptible to visual illusions than our perceptual judgements because similar information is processed for perception and action in separate pathways. We test this hypothesis for subjects intercepting a moving object that appears to move at a different speed than its true speed due to an illusion. The object was a moving Gabor patch: a sinusoidal grating of which the luminance contrast is modulated by a two-dimensional Gaussian. We manipulated the patch's apparent speed by moving the grating relative to the Gaussian. We used separate two-interval forced choice discrimination tasks to determine how moving the grating influenced ten people's judgements of the object's position and velocity while they were fixating. Based on their perceptual judgements, and knowing that our ability to correct for errors that arise from relying on incorrect judgements is limited by a sensorimotor delay of about 100 msec, we predicted the extent to which subjects would tap ahead of or behind similar targets when trying to intercept them at the fixation location. The predicted errors closely matched the actual errors that subjects made when trying to intercept the targets. This finding does not support the two visual streams hypothesis. The results are consistent with the idea that the extent to which an illusion influences an action tells us something about the extent to which the action relies on the percept in question.
62
Hughes AE, Jones C, Joshi K, Tolhurst DJ. Diverted by dazzle: perceived movement direction is biased by target pattern orientation. Proc Biol Sci 2017; 284:20170015. PMID: 28275144. PMCID: PMC5360933. DOI: 10.1098/rspb.2017.0015.
Abstract
'Motion dazzle' is the hypothesis that predators may misjudge the speed or direction of moving prey which have high-contrast patterning, such as stripes. However, there is currently little experimental evidence that such patterns cause visual illusions. Here, observers binocularly tracked a Gabor target, moving with a linear trajectory randomly chosen within 18° of the horizontal. This target then became occluded, and observers were asked to judge where they thought it would later cross a vertical line to the side. We found that internal motion of the stripes within the Gabor biased judgements as expected: Gabors with upwards internal stripe motion relative to the overall direction of motion were perceived to be crossing above Gabors with downwards internal stripe movement. However, surprisingly, we found a much stronger effect of the rigid pattern orientation. Patches with oblique stripes pointing upwards relative to the direction of motion were perceived to cross above patches with downward-pointing stripes. This effect occurred only at high speeds, suggesting that it may reflect an orientation-dependent effect in which spatial signals are used in direction judgements. These findings have implications for our understanding of motion dazzle mechanisms and how human motion and form processing interact.
Affiliation(s)
- Anna E Hughes
- Department of Psychology and Language Sciences, University College London, 26 Bedford Way, London WC1H 0AP, UK
- Department of Physiology, Development and Neuroscience, University of Cambridge, Downing Street, Cambridge CB2 3EG, UK
- Christian Jones
- Department of Physiology, Development and Neuroscience, University of Cambridge, Downing Street, Cambridge CB2 3EG, UK
- Kaustuv Joshi
- Department of Physiology, Development and Neuroscience, University of Cambridge, Downing Street, Cambridge CB2 3EG, UK
- David J Tolhurst
- Department of Physiology, Development and Neuroscience, University of Cambridge, Downing Street, Cambridge CB2 3EG, UK
63
Zhu JE, Ma WJ. Orientation-dependent biases in length judgments of isolated stimuli. J Vis 2017; 17:20. PMID: 28245499. DOI: 10.1167/17.2.20.
Abstract
Vertical line segments tend to be perceived as longer than horizontal ones of the same length, but this may in part be due to configuration effects. To minimize such effects, we used isolated line segments in a two-interval, forced choice paradigm, not limiting ourselves to horizontal and vertical. We fitted psychometric curves using a Bayesian method that assumes that, for a given subject, the lapse rate is the same across all conditions. The closer a line segment's orientation was to vertical, the longer it was perceived to be. Moreover, subjects tended to report the standard line (in the second interval) as longer. The data were well described by a model that contains both an orientation-dependent and an interval-dependent multiplicative bias. Using this model, we estimated that a vertical line was on average perceived as 9.2% ± 2.1% longer than a horizontal line, and a second-interval line was on average perceived as 2.4% ± 0.9% longer than a first-interval line. Moving from a descriptive to an explanatory model, we hypothesized that anisotropy in the polar angle of lines in three dimensions underlies the horizontal-vertical illusion, specifically, that line segments more often have a polar angle of 90° (corresponding to the ground plane) than any other polar angle. This model qualitatively accounts not only for the empirical relationship between projected length and projected orientation that predicts the horizontal-vertical illusion, but also for the empirical distribution of projected orientation in photographs of natural scenes and for paradoxical results reported earlier for slanted surfaces.
Affiliation(s)
- Jielei Emma Zhu
- Center for Neural Science and Department of Psychology, New York University, New York, NY
- Wei Ji Ma
- Center for Neural Science and Department of Psychology, New York University, New York, NY
64
Montemayor C, Haladjian HH. Perception and Cognition Are Largely Independent, but Still Affect Each Other in Systematic Ways: Arguments from Evolution and the Consciousness-Attention Dissociation. Front Psychol 2017; 8:40. PMID: 28174551. PMCID: PMC5258763. DOI: 10.3389/fpsyg.2017.00040.
Abstract
The main thesis of this paper is that two prevailing theories about cognitive penetration are too extreme, namely, the view that cognitive penetration is pervasive and the view that there is a sharp and fundamental distinction between cognition and perception, which precludes any type of cognitive penetration. These opposite views have clear merits and empirical support. To resolve this puzzling situation, we present an alternative theoretical approach that incorporates the merits of these views into a broader and more nuanced explanatory framework. A key argument we present in favor of this framework concerns the evolution of intentionality and perceptual capacities. An implication of this argument is that cases of cognitive penetration must have evolved more recently and that this is compatible with the cognitive impenetrability of early perceptual stages of processing information. A theoretical approach that explains why this should be the case is the consciousness and attention dissociation framework. The paper discusses why concepts, particularly issues concerning concept acquisition, play an important role in the interaction between perception and cognition.
Affiliation(s)
- Carlos Montemayor
- Department of Philosophy, San Francisco State University, San Francisco, CA, USA
- Harry H Haladjian
- Laboratoire Psychologie de la Perception, CNRS, Université Paris Descartes, Paris, France
65
Haladjian HH, Montemayor C. Artificial consciousness and the consciousness-attention dissociation. Conscious Cogn 2016; 45:210-225. PMID: 27656787. DOI: 10.1016/j.concog.2016.08.011.
Abstract
Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems-these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans. Ultimately, we are far from achieving artificial consciousness.
Affiliation(s)
- Harry Haroutioun Haladjian
- Laboratoire Psychologie de la Perception, CNRS (UMR 8242), Université Paris Descartes, Centre Biomédical des Saints-Pères, 45 rue des Saints-Pères, 75006 Paris, France
- Carlos Montemayor
- San Francisco State University, Philosophy Department, 1600 Holloway Avenue, San Francisco, CA 94132, USA
66
Abstract
An unresolved question in vision research is whether perceptual decision making and action are based on the same or on different neural representations. Here, we address this question for a straightforward task, the judgment of location. In our experiment, observers decided on the closer of two peripheral objects—situated on the horizontal meridian in opposite hemifields—and made a saccade to indicate their choice. Correct saccades landed close to the actual (physical) location of the target. However, in case of errors, saccades went in the direction of the more distant object, yet landed on a position approximating that of the closer one. Our finding supports the notion that perception and action-related decisions on object location rely on the same neural representation.
Affiliation(s)
- Funda Yildirim
- Laboratory of Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Netherlands
- Frans W Cornelissen
- Laboratory of Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Netherlands
67
Kuhn G, Rensink RA. The Vanishing Ball Illusion: A new perspective on the perception of dynamic events. Cognition 2016; 148:64-70. [DOI: 10.1016/j.cognition.2015.12.003] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Received: 06/04/2015] [Revised: 12/08/2015] [Accepted: 12/11/2015] [Indexed: 01/05/2023]
68
Abstract
Faces spontaneously capture attention. However, which special attributes of a face underlie this effect is unclear. To address this question, we investigate how gist information, specific visual properties, and differing amounts of experience with faces affect the time required to detect a face. Three visual search experiments were conducted investigating how rapidly human observers detect Mooney face images. Mooney images are two-toned, ambiguous images; they were used in order to have stimuli that maintain gist information but limit low-level image properties. Results from the experiments show: (1) Although upright Mooney faces were searched inefficiently, they were detected more rapidly than inverted Mooney face targets, demonstrating the important role of gist information in guiding attention toward a face. (2) Several specific Mooney face identities were searched efficiently while others were not, suggesting the involvement of specific visual properties in face detection. (3) By providing participants with unambiguous gray-scale versions of the Mooney face targets prior to the visual search task, the targets were detected significantly more efficiently, suggesting that prior experience with Mooney faces improves the ability to extract gist information for rapid face detection. However, a week of training with Mooney face categorization did not lead to even more efficient visual search of Mooney face targets. In summary, these results reveal that specific local image properties cannot account for how faces capture attention; nor can gist information alone. Prior experience facilitates the effect of gist on visual search of faces, making faces a special object category for guiding attention.
Affiliation(s)
- Jessica E Goold
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Ming Meng
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
69
Morgan M. Visual Neuroscience: Dissociating Perceptual and Oculomotor Localization of Moving Objects. Curr Biol 2015; 25:R831-3. [DOI: 10.1016/j.cub.2015.08.037] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Indexed: 11/29/2022]