1
Giesel M, De Filippi F, Hesse C. Grasping tiny objects. Psychological Research 2024. [PMID: 38554146] [DOI: 10.1007/s00426-024-01947-8]
Abstract
In grasping studies, maximum grip aperture (MGA) is commonly used as an indicator of the object size representation within the visuomotor system. However, a number of additional factors, such as movement safety, comfort, and efficiency, might affect the scaling of MGA with object size and potentially mask perceptual effects on actions. While unimanual grasping has been investigated for a wide range of object sizes, so far very small objects (<5 mm) have not been included. Investigating grasping of these tiny objects is particularly interesting because it allows us to evaluate the three most prominent explanatory accounts of grasping (the perception-action model, the digits-in-space hypothesis, and the biomechanical account) by comparing the predictions that they make for these small objects. In the first experiment, participants (N = 26) grasped and manually estimated the height of square cuboids with heights from 0.5 to 5 mm. In the second experiment, a different sample of participants (N = 24) performed the same tasks with square cuboids with heights from 5 to 20 mm. We determined MGAs, manual estimation apertures (MEAs), and the corresponding just-noticeable differences (JNDs). In both experiments, MEAs scaled with object height and adhered to Weber's law. MGAs for grasping scaled with object height in the second experiment but not consistently in the first experiment. JNDs for grasping never scaled with object height. We argue that the digits-in-space hypothesis provides the most plausible account of the data. Furthermore, the findings highlight that the reliability of MGA as an indicator of object size is strongly task-dependent.
Affiliation(s)
- Martin Giesel
- School of Psychology, University of Aberdeen, William Guild Building, Aberdeen, AB24 3FX, UK.
- Federico De Filippi
- School of Psychology, University of Aberdeen, William Guild Building, Aberdeen, AB24 3FX, UK
- School of Psychology and Neuroscience, University of St Andrews, St Mary's Quad, South Street, St Andrews, KY16 9JP, UK
- Constanze Hesse
- School of Psychology, University of Aberdeen, William Guild Building, Aberdeen, AB24 3FX, UK
2
Dual-task interference in action programming and action planning - Evidence from the end-state comfort effect. Acta Psychol (Amst) 2022;228:103637. [PMID: 35690027] [DOI: 10.1016/j.actpsy.2022.103637]
Abstract
In the present study, we examined the extent of interference between a cognitive task (auditory n-back task) and different aspects of motor performance. Specifically, we wanted to find out whether such interference is more pronounced for aspects of planning as compared to programming. Here, motor planning is represented by a phenomenon called the "end-state comfort effect", the fact that we tolerate uncomfortable initial postures in favour of a more comfortable final posture. We asked participants to grasp differently sized cylindrical objects and to place them on target platforms of varying height (grasp-and-place task). Thus, participants were required to (1) adjust their hand opening to the object width (action programming) and (2) plan whether to grasp the object higher or lower in order to be able to place it comfortably onto the low or high target platform. We found that participants demonstrated the end-state comfort effect by anticipating the final posture and planning the movement accordingly, grasping the object higher for the low end-target position and lower for the high end-target position. The auditory task was negatively affected by having to perform a visuomotor task in parallel, suggesting that the two tasks share cognitive and attentional resources. No significant impact of the auditory task on the motor tasks was found. Accordingly, it was not possible to determine which of the two motor aspects (programming or planning) contributed more towards the interference observed in the auditory task. To address this question, we carried out a second experiment, which focussed on the interference effects found in the auditory task and contrasted two versions of the grasp-and-place task. In the first version of the task, the height of the target shelf varied from trial to trial but the width of the target object remained the same. We assumed that this version had high planning demands and low programming demands. In the second version, the width of the target object varied and the target-shelf height remained constant. Presumably, this increased programming demands but reduced planning demands. Significant interference with the auditory task was only found for the first version, supporting the hypothesis that motor planning requires more cognitive resources and thus creates higher multitasking costs.
3
Whitwell RL, Katz NJ, Goodale MA, Enns JT. The Role of Haptic Expectations in Reaching to Grasp: From Pantomime to Natural Grasps and Back Again. Front Psychol 2020;11:588428. [PMID: 33391110] [PMCID: PMC7773727] [DOI: 10.3389/fpsyg.2020.588428]
Abstract
When we reach to pick up an object, our actions are effortlessly informed by the object's spatial information, the position of our limbs, stored knowledge of the object's material properties, and what we want to do with the object. A substantial body of evidence suggests that grasps are under the control of "automatic, unconscious" sensorimotor modules housed in the "dorsal stream" of the posterior parietal cortex. Visual online feedback has a strong effect on the hand's in-flight grasp aperture. Previous work of ours exploited this effect to show that grasps are refractory to cued expectations for visual feedback. Nonetheless, when we reach out to pretend to grasp an object (pantomime grasp), our actions are performed with greater cognitive effort and they engage structures outside of the dorsal stream, including the ventral stream. Here we ask whether our previous finding would extend to cued expectations for haptic feedback. Our method involved a mirror apparatus that allowed participants to see a "virtual" target cylinder as a reflection in the mirror at the start of all trials. On "haptic feedback" trials, participants reached behind the mirror to grasp a size-matched cylinder, spatially coincident with the virtual one. On "no-haptic feedback" trials, participants reached behind the mirror and grasped into "thin air" because no cylinder was present. To manipulate haptic expectation, we organized the haptic conditions into blocked, alternating, and randomized schedules with and without verbal cues about the availability of haptic feedback. Replicating earlier work, we found the strongest haptic effects with the blocked schedules and the weakest effects in the randomized uncued schedule. Crucially, the haptic effects in the cued randomized schedule were intermediate. An analysis of the influence of the upcoming and immediately preceding haptic feedback condition in the cued and uncued random schedules showed that cuing the upcoming haptic condition shifted the haptic influence on grip aperture from the immediately preceding trial to the upcoming trial. These findings indicate that, unlike cues to the availability of visual feedback, participants take advantage of cues to the availability of haptic feedback, flexibly engaging pantomime and natural modes of grasping to optimize the movement.
Affiliation(s)
- Robert L Whitwell
- Department of Psychology, The University of British Columbia, Vancouver, BC, Canada
- Nathan J Katz
- Department of Psychology, Brain and Mind Institute, The University of Western Ontario, London, ON, Canada
- Melvyn A Goodale
- Department of Psychology, Brain and Mind Institute, The University of Western Ontario, London, ON, Canada
- James T Enns
- Department of Psychology, The University of British Columbia, Vancouver, BC, Canada
4
Ganel T, Ozana A, Goodale MA. When perception intrudes on 2D grasping: evidence from Garner interference. Psychological Research 2019;84:2138-2143. [PMID: 31201534] [DOI: 10.1007/s00426-019-01216-z]
Abstract
When participants reach out to pick up a real 3D object, their grip aperture reflects the size of the object well before contact is made. At the same time, the classical psychophysical laws and principles of relative size and shape that govern visual perception do not appear to intrude into the control of such movements, which are instead tuned only to the dimension relevant for grasping. In contrast, accumulating evidence suggests that grasps directed at flat 2D objects are not immune to perceptual effects. Thus, in 2D but not 3D grasping, the aperture of the fingers has been shown to be affected by relative and contextual information about the size and shape of the target object. A notable example of this dissociation comes from studies of Garner interference, which signals holistic processing of shape. Previous research has shown that 3D grasping shows no evidence for Garner interference but 2D grasping does (Freud & Ganel, 2015). In a recent study published in this journal (Löhr-Limpens et al., 2019), participants were presented with 2D objects in a Garner paradigm. The pattern of results closely replicated the previously published results with 2D grasping. Unfortunately, the authors, who appear to be unaware of the potential differences between 2D and 3D grasping, used their findings to draw an overgeneralized and unwarranted conclusion about the relation between 3D grasping and perception. In this short methodological commentary, we discuss the current literature on aperture shaping during 2D grasping and suggest that researchers should pay close attention to the nature of the target stimuli they use before drawing conclusions about visual processing for perception and action.
Affiliation(s)
- Tzvi Ganel
- Psychology Department, Ben-Gurion University of the Negev, 8410501, Beer-Sheva, Israel.
- Aviad Ozana
- Psychology Department, Ben-Gurion University of the Negev, 8410501, Beer-Sheva, Israel
- Melvyn A Goodale
- The Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 5B7, Canada
5
Löhr-Limpens M, Göhringer F, Schenk T, Hesse C. Grasping and perception are both affected by irrelevant information and secondary tasks: new evidence from the Garner paradigm. Psychological Research 2019;84:1269-1283. [PMID: 30778763] [DOI: 10.1007/s00426-019-01151-z]
Abstract
In their Perception-Action Model (PAM), Goodale and Milner (1992) proposed functionally independent and encapsulated processing of visual information for action and perception. In this context, they postulated that visual input for action is processed in an automatized and analytic manner, which renders visuomotor behaviour immune to perceptual interferences or multitasking costs due to sharing of cognitive resources. Here, we investigate the well-known Garner interference effect under dual- and single-task conditions, in its classic perceptual form as well as in grasping. Garner interference arises when stimuli are classified along a relevant dimension (e.g., their length) while another, irrelevant dimension (e.g., their width) has to be ignored. In the present study, participants were presented with differently sized rectangular objects and either grasped them or classified them as long or short via button presses. We found classical Garner interference effects in perception, as expressed in prolonged reaction times when the irrelevant object dimension also varied. While reaction times during grasping were not susceptible to Garner interference, effects were observed in a number of measures that reflect grasping accuracy (i.e., poorer adjustment of grip aperture to object size, prolonged adjustment times, and increased variability of the maximum hand opening when irrelevant object dimensions were varied). In addition, multitasking costs occurred in both perception and action tasks. Thus, our findings challenge the assumption of automaticity in visuomotor behaviour as proposed by the PAM.
Affiliation(s)
- Miriam Löhr-Limpens
- Lehrstuhl für Klinische Neuropsychologie, Ludwig-Maximilians-Universität München, Leopoldstr. 13, 80802, Munich, Germany.
- Frederic Göhringer
- Lehrstuhl für Klinische Neuropsychologie, Ludwig-Maximilians-Universität München, Leopoldstr. 13, 80802, Munich, Germany
- Thomas Schenk
- Lehrstuhl für Klinische Neuropsychologie, Ludwig-Maximilians-Universität München, Leopoldstr. 13, 80802, Munich, Germany
- Constanze Hesse
- School of Psychology, University of Aberdeen King's College, William Guild Building, Aberdeen, AB24 3FX, UK
6
Leib R, Rubin I, Nisky I. Force feedback delay affects perception of stiffness but not action, and the effect depends on the hand used but not on the handedness. J Neurophysiol 2018;120:781-794. [PMID: 29766763] [DOI: 10.1152/jn.00822.2017]
Abstract
Interaction with an object often requires the estimation of its mechanical properties. We examined whether the hand that is used to interact with the object and the person's handedness affect the estimation of these properties, using stiffness estimation as a test case. We recorded participants' responses in a stiffness discrimination task with a virtual elastic force field, together with the grip force applied on the robotic device during the interaction. In half of the trials, the robotic device delayed the participants' force feedback. Consistent with previous studies, delayed force feedback biased the perceived stiffness of the force field. Interestingly, in both left-handed and right-handed participants, the perceived stiffness of the delayed force field was even lower when participants used their left hand than when they used their right hand. This result supports the idea that haptic processing is affected by laterality in the brain, not by handedness. Consistent with previous studies, participants adjusted their applied grip force according to the correct size and timing of the load force regardless of the hand that was used, the handedness, or the delay. This suggests that in all of these conditions, participants were able to form an accurate internal representation of the anticipated trajectory of the load force (size and timing) and that this representation was used for accurate control of grip force independently of the perceptual bias. Thus these results provide additional evidence for the dissociation between action and perception in the processing of delayed information. NEW & NOTEWORTHY Introducing delay to force feedback during interaction with an elastic force field biases the perceived stiffness of the force field. We show that this bias depends on the hand that was used for probing but not on handedness. At the same time, both left-handed and right-handed participants, using either their left or right hands, adjusted their applied grip force in anticipation of the correct magnitude and timing of the load force despite the delay.
Affiliation(s)
- Raz Leib
- Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beersheba, Israel; Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beersheba, Israel
- Inbar Rubin
- Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beersheba, Israel
- Ilana Nisky
- Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beersheba, Israel; Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beersheba, Israel
7
The Size Congruity Effect Vanishes in Grasping: Implications for the Processing of Numerical Information. Sci Rep 2018;8:2723. [PMID: 29426827] [PMCID: PMC5807327] [DOI: 10.1038/s41598-018-21003-x]
Abstract
Judgments of the physical size in which a numeral is presented are often affected by the task-irrelevant attribute of its numerical magnitude, the Size Congruity Effect (SCE). The SCE is typically interpreted as a marker of the automatic activation of numerical magnitude. However, a growing literature shows that the SCE is not robust, a possible indication that numerical information is not always activated in an automatic fashion. In the present study, we tested the SCE via grasping as a way of resolving the automaticity debate. We found results that challenge the robustness of the SCE and, consequently, the validity of the automaticity assumption. The SCE was absent when participants grasped the physically larger object of a pair of 3D wooden numerals. An SCE was still recorded when the participants perceptually indicated the general location of the larger object, but not when they grasped that object. These results highlight the importance of the sensory domain when considering the generality of a perceptual effect.
8
9
Visual control of action directed toward two-dimensional objects relies on holistic processing of object shape. Psychon Bull Rev 2016;22:1377-82. [PMID: 25665797] [DOI: 10.3758/s13423-015-0803-x]
Abstract
Visual perception relies on holistic processing of object shape. In contrast to perception, previous studies demonstrated that vision-for-action operates in a fundamentally different manner based on an analytical representation of objects. This notion was mainly supported by the absence of Garner interference for visually guided actions, compared to robust interference effects for perceptual estimations of the same objects. This study examines the nature of the representations that subserve visually guided actions toward two-dimensional (2D) stimuli. Based on recent results suggesting that actions directed toward 2D objects are mediated by different underlying processes compared to normal actions, we predicted that visually guided actions toward 2D stimuli would rely on perceptually driven holistic representations of object shape. To test this idea, we asked participants to grasp 2D rectangular objects presented on a computer monitor along their width while the values of the irrelevant dimension of length were either kept constant (baseline condition) or varied between trials (filtering condition). Worse performance in the filtering blocks is labeled Garner interference, which indicates holistic processing of object shape. Unlike in previous studies that used real objects, the results showed that grasping toward 2D objects produced a significant Garner interference effect, with more variable within-subject performance in the filtering compared to the baseline blocks. This finding suggests that visually guided actions directed toward 2D targets are mediated by different computations compared to visually guided actions directed toward real objects.
10
Functional dissociation between action and perception of object shape in developmental visual object agnosia. Cortex 2016;76:17-27. [DOI: 10.1016/j.cortex.2015.12.006]
11
Whitwell RL, Ganel T, Byrne CM, Goodale MA. Real-time vision, tactile cues, and visual form agnosia: removing haptic feedback from a "natural" grasping task induces pantomime-like grasps. Front Hum Neurosci 2015;9:216. [PMID: 25999834] [PMCID: PMC4422037] [DOI: 10.3389/fnhum.2015.00216]
Abstract
Investigators study the kinematics of grasping movements (prehension) under a variety of conditions to probe visuomotor function in normal and brain-damaged individuals. “Natural” prehensile acts are directed at the goal object and are executed using real-time vision. Typically, they also entail the use of tactile, proprioceptive, and kinesthetic sources of haptic feedback about the object (“haptics-based object information”) once contact with the object has been made. Natural and simulated (pantomimed) forms of prehension are thought to recruit different cortical structures: patient DF, who has visual form agnosia following bilateral damage to her temporal-occipital cortex, loses her ability to scale her grasp aperture to the size of targets (“grip scaling”) when her prehensile movements are based on a memory of a target previewed 2 s before the cue to respond or when her grasps are directed towards a visible virtual target but she is denied haptics-based information about the target. In the first of two experiments, we show that when DF performs real-time pantomimed grasps towards a 7.5 cm displaced imagined copy of a visible object such that her fingers make contact with the surface of the table, her grip scaling is in fact quite normal. This finding suggests that real-time vision and terminal tactile feedback are sufficient to preserve DF’s grip scaling slopes. In the second experiment, we examined an “unnatural” grasping task variant in which a tangible target (along with any proxy such as the surface of the table) is denied (i.e., no terminal tactile feedback). To do this, we used a mirror-apparatus to present virtual targets with and without a spatially coincident copy for the participants to grasp. 
We compared the grasp kinematics from trials with and without terminal tactile feedback to a real-time pantomimed grasping task (one without tactile feedback) in which participants visualized a copy of the visible target, as instructed in our laboratory in the past. Compared to natural grasps, removing tactile feedback increased RT, slowed the velocity of the reach, reduced in-flight grip aperture, increased the slopes relating grip aperture to target width, and reduced the final grip aperture (FGA). All of these effects were also observed in the real-time pantomimed grasping task. These effects seem to be independent of those that arise from using the mirror in general, as we also compared grasps directed towards virtual targets to those directed at real ones viewed directly through a pane of glass. These comparisons showed that grasps directed at virtual targets increased grip aperture, slowed the velocity of the reach, and reduced the slopes relating grip aperture to the widths of the target. Thus, using the mirror has real consequences for grasp kinematics, reflecting the importance of task-relevant sources of online visual information for the programming and updating of natural prehensile movements. Taken together, these results provide compelling support for the view that removing terminal tactile feedback, even when the grasps are target-directed, induces a switch from real-time visual control towards one that depends more on visual perception and cognitive supervision. Providing terminal tactile feedback and real-time visual information can evidently keep the dorsal visuomotor system operating normally for prehensile acts.
Affiliation(s)
- Robert L Whitwell
- Graduate Program in Neuroscience, The University of Western Ontario, London, ON, Canada; Department of Psychology, The University of Western Ontario, London, ON, Canada; The Brain and Mind Institute, The University of Western Ontario, London, ON, Canada
- Tzvi Ganel
- Department of Psychology, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Caitlin M Byrne
- Department of Psychology, The University of Western Ontario, London, ON, Canada
- Melvyn A Goodale
- Department of Psychology, The University of Western Ontario, London, ON, Canada; The Brain and Mind Institute, The University of Western Ontario, London, ON, Canada; Department of Physiology and Pharmacology, The University of Western Ontario, London, ON, Canada
12
Leib R, Karniel A, Nisky I. The effect of force feedback delay on stiffness perception and grip force modulation during tool-mediated interaction with elastic force fields. J Neurophysiol 2015;113:3076-89. [PMID: 25717155] [PMCID: PMC4455557] [DOI: 10.1152/jn.00229.2014]
Abstract
During interaction with objects, we form an internal representation of their mechanical properties. This representation is used for perception and for guiding actions, such as in precision grip, where grip force is modulated with the predicted load forces. In this study, we explored the relationship between grip force adjustment and perception of stiffness during interaction with linear elastic force fields. In a forced-choice paradigm, participants probed pairs of virtual force fields while grasping a force sensor that was attached to a haptic device. For each pair, they were asked which field had the higher level of stiffness. In half of the pairs, the force feedback of one of the fields was delayed. Participants underestimated the stiffness of the delayed field relative to the nondelayed one, but their grip force characteristics were similar in both conditions. We analyzed the magnitude of the grip force and the lag between the grip force and the load force in the exploratory probing movements within each trial. Right before answering which force field had the higher level of stiffness, both magnitude and lag were similar between delayed and nondelayed force fields. These results suggest that an accurate internal representation of environment stiffness and time delay was used for adjusting the grip force. However, this representation did not help in eliminating the bias in stiffness perception. We argue that during performance of a perceptual task that is based on proprioceptive feedback, separate neural mechanisms are responsible for perception- and action-related computations in the brain.
Affiliation(s)
- Raz Leib
- Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Amir Karniel
- Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Ilana Nisky
- Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
13
Namdar G, Avidan G, Ganel T. Effects of configural processing on the perceptual spatial resolution for face features. Cortex 2015;72:115-123. [PMID: 25998751] [DOI: 10.1016/j.cortex.2015.04.007]
Abstract
Configural processing governs human perception across various domains, including face perception. An established marker of configural face perception is the face inversion effect, in which performance is typically better for upright compared to inverted faces. In two experiments, we tested whether configural processing could influence basic visual abilities such as perceptual spatial resolution (i.e., the ability to detect spatial visual changes). Face-related perceptual spatial resolution was assessed by measuring the just noticeable difference (JND) for subtle positional changes between specific features in upright and inverted faces. The results revealed a robust inversion effect for spatial sensitivity to configural-based changes, such as the distance between the mouth and the nose, or the distance between the eyes and the nose. Critically, spatial resolution for face features within the region of the eyes (e.g., the interocular distance) was not affected by inversion, suggesting that the eye region operates as a separate 'gestalt' unit which is relatively immune to manipulations that would normally hamper configural processing. Together, these findings suggest that face orientation modulates fundamental psychophysical abilities, including spatial resolution. Furthermore, they indicate that classic psychophysical methods can be used as a valid measure of configural face processing.
Affiliation(s)
- Gal Namdar
- Department of Psychology, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Galia Avidan
- Department of Psychology, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Tzvi Ganel
- Department of Psychology, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
14
The highs and lows of object impossibility: effects of spatial frequency on holistic processing of impossible objects. Psychon Bull Rev 2014;22:297-306. [DOI: 10.3758/s13423-014-0678-2]
15
Goodale MA. How (and why) the visual control of action differs from visual perception. Proc Biol Sci 2014;281:20140337. [PMID: 24789899] [DOI: 10.1098/rspb.2014.0337]
Abstract
Vision not only provides us with detailed knowledge of the world beyond our bodies, but it also guides our actions with respect to objects and events in that world. The computations required for vision-for-perception are quite different from those required for vision-for-action. The former uses relational metrics and scene-based frames of reference while the latter uses absolute metrics and effector-based frames of reference. These competing demands on vision have shaped the organization of the visual pathways in the primate brain, particularly within the visual areas of the cerebral cortex. The ventral 'perceptual' stream, projecting from early visual areas to inferior temporal cortex, helps to construct the rich and detailed visual representations of the world that allow us to identify objects and events, attach meaning and significance to them and establish their causal relations. By contrast, the dorsal 'action' stream, projecting from early visual areas to the posterior parietal cortex, plays a critical role in the real-time control of action, transforming information about the location and disposition of goal objects into the coordinate frames of the effectors being used to perform the action. The idea of two visual systems in a single brain might seem initially counterintuitive. Our visual experience of the world is so compelling that it is hard to believe that some other quite independent visual signal-one that we are unaware of-is guiding our movements. But evidence from a broad range of studies from neuropsychology to neuroimaging has shown that the visual signals that give us our experience of objects and events in the world are not the same ones that control our actions.
Affiliation(s)
- Melvyn A Goodale
- The Brain and Mind Institute, The University of Western Ontario, London, Ontario, Canada, N6A 5B7