1
Tao W, Xu Z, Zhao D, Wang C, Wang Q, Britt N, Tao X, Sun HJ. Inversion Effect of Hand Postures: Effect of Visual Experience Over Long and Short Term. i-Perception 2022; 13:20416695221105911. [PMID: 35782827] [PMCID: PMC9243484] [DOI: 10.1177/20416695221105911]
Abstract
Some researchers argue that holistic processing is unique to face recognition, as supported by the face inversion effect. However, findings such as the body inversion effect challenge the face-processing specificity hypothesis and instead support the expertise hypothesis. Few studies have explored a possible hand inversion effect, which could involve special processing similar to that for faces and bodies. We conducted four experiments to investigate the time course and flexibility of the hand posture inversion effect, using a same/different discrimination task (Experiments 1 and 2), an identification task (Experiment 3), and a training paradigm involving exposure to different hand orientations (Experiment 4). The results show that the hand posture inversion effect (with fingers pointing up as the upright orientation) was not observed during the early phase of testing but emerged in later phases, suggesting that both lifetime experience and recent exposure shape the effect. We also found that the hand posture inversion effect, once established, was stable across days and remained consistent across tasks. In addition, an inversion effect for specific orientations could be obtained with short-term training on a given orientation, indicating that the underlying cognitive process is flexible.
Affiliation(s)
- Weidong Tao
- Department of Psychology, School of Teacher Education, Huzhou University, China; The Key Laboratory of Brain Science and Children's Learning of Huzhou, Huzhou University, Huzhou, China
- Zhen Xu
- Department of Psychology, School of Teacher Education, Huzhou University, China
- Dongchi Zhao
- Department of Psychology, School of Teacher Education, Huzhou University, China
- Chao Wang
- Department of Psychology, School of Teacher Education, Huzhou University, China
- QiangQiang Wang
- Department of Psychology, School of Teacher Education, Huzhou University, China
- Noah Britt
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Canada
- Xiaoli Tao
- Department of Psychology, School of Teacher Education, Huzhou University, China
- Hong-jin Sun
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Canada
2
Lederman SJ, Klatzky RL, Abramowicz A, Salsman K, Kitada R, Hamilton C. Haptic Recognition of Static and Dynamic Expressions of Emotion in the Live Face. Psychol Sci 2007; 18:158-64. [PMID: 17425537] [DOI: 10.1111/j.1467-9280.2007.01866.x]
Abstract
If humans can detect the wealth of tactile and haptic information potentially available in live facial expressions of emotion (FEEs), they should be capable of haptically recognizing the six universal expressions of emotion (anger, disgust, fear, happiness, sadness, and surprise) at levels well above chance. We tested this hypothesis in the experiments reported here. With minimal training, subjects' overall mean accuracy was 51% for static FEEs (Experiment 1) and 74% for dynamic FEEs (Experiment 2). All FEEs except static fear were successfully recognized above the chance level of 16.7%. Complementing these findings, overall confidence and information transmission were higher for dynamic than for corresponding static faces. Our performance measures (accuracy and confidence ratings, plus response latency in Experiment 2 only) confirmed that happiness, sadness, and surprise were all highly recognizable, and anger, disgust, and fear less so.
3
Abstract
The idea that faces are represented within a structured face space (Valentine, 1991, Quarterly Journal of Experimental Psychology, 43, 161-204) has gained considerable experimental support from both physiological and perceptual studies. Recent work has also shown that faces can even be recognized haptically, that is, from touch alone. Although some evidence favors congruent processing strategies in the visual and haptic processing of faces, the question of how similar the two modalities are in terms of face processing remains open. Here, this question was addressed by asking whether there is evidence for a haptic face space and, if so, how it compares to visual face space. For this, a physical face space was created, consisting of six laser-scanned individual faces, their morphed average, 50% morphs between pairs of individual faces, and 50% morphs of the individual faces with the average, resulting in a set of 19 faces. Participants then rated either the visual or the haptic pairwise similarity of the tangible 3-D face shapes. Multidimensional scaling analyses showed that both modalities extracted perceptual spaces that conformed to critical predictions of the face space framework, providing support for similar processing of complex face shapes in haptics and vision. Despite the overall similarities, however, systematic differences emerged between the visual and haptic data. These differences are discussed in the context of face processing and complex-shape processing in vision and haptics.
4
Lacey S, Sathian K. Visuo-haptic multisensory object recognition, categorization, and representation. Front Psychol 2014; 5:730. [PMID: 25101014] [PMCID: PMC4102085] [DOI: 10.3389/fpsyg.2014.00730]
Abstract
Visual and haptic unisensory object processing show many similarities in terms of categorization, recognition, and representation. In this review, we discuss how these similarities contribute to multisensory object processing. In particular, we show that similar unisensory visual and haptic representations lead to a shared multisensory representation underlying both cross-modal object recognition and view-independence. This shared representation suggests a common neural substrate and we review several candidate brain regions, previously thought to be specialized for aspects of visual processing, that are now known also to be involved in analogous haptic tasks. Finally, we lay out the evidence for a model of multisensory object recognition in which top-down and bottom-up pathways to the object-selective lateral occipital complex are modulated by object familiarity and individual differences in object and spatial imagery.
Affiliation(s)
- Simon Lacey
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
- K Sathian
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA; Department of Rehabilitation Medicine, Emory University School of Medicine, Atlanta, GA, USA; Department of Psychology, Emory University School of Medicine, Atlanta, GA, USA; Rehabilitation Research and Development Center of Excellence, Atlanta Veterans Affairs Medical Center, Decatur, GA, USA
5
Matsumiya K. Seeing a haptically explored face: visual facial-expression aftereffect from haptic adaptation to a face. Psychol Sci 2013; 24:2088-98. [PMID: 24002886] [DOI: 10.1177/0956797613486981]
Abstract
Current views on face perception assume that the visual system receives only visual facial signals. However, I show that the visual perception of faces is systematically biased by adaptation to a haptically explored face. Recently, face aftereffects (FAEs; the altered perception of faces after adaptation to a face) have been demonstrated not only in visual perception but also in haptic perception; therefore, I combined the two FAEs to examine whether the visual system receives face-related signals from the haptic modality. I found that adaptation to a haptically explored facial expression on a face mask produced a visual FAE for facial expression. This cross-modal FAE was not due to explicitly imaging a face, response bias, or adaptation to local features. Furthermore, FAEs transferred from vision to haptics. These results indicate that visual face processing depends on substrates adapted by haptic faces, which suggests that face processing relies on shared representation underlying cross-modal interactions.
6
Haptic perception and body representation in lateral and medial occipito-temporal cortices. Neuropsychologia 2011; 49:821-9. [DOI: 10.1016/j.neuropsychologia.2011.01.034]
7
Picard D, Jouffrais C, Lebaz S. Haptic Recognition of Emotions in Raised-Line Drawings by Congenitally Blind and Sighted Adults. IEEE Trans Haptics 2011; 4:67-71. [PMID: 26962956] [DOI: 10.1109/toh.2010.58]
Abstract
Fifteen sighted and 15 congenitally blind adults were asked to classify raised-line pictures of emotional faces through haptics. Although accuracy did not vary significantly between the two groups, the blind adults were faster at the task. These results suggest that raised-line pictures of emotional faces are intelligible to blind adults.
8
Irrelevant visual faces influence haptic identification of facial expressions of emotion. Atten Percept Psychophys 2011; 73:521-30. [DOI: 10.3758/s13414-010-0038-x]
9
Kitada R, Dijkerman HC, Soo G, Lederman SJ. Representing human hands haptically or visually from first-person versus third-person perspectives. Perception 2010; 39:236-54. [PMID: 20402245] [DOI: 10.1068/p6535]
Abstract
Humans can recognise human body parts haptically as well as visually. We employed a mental-rotation task to determine whether participants could adopt a third-person perspective when judging the laterality of life-like human hands. Female participants adopted either a first-person or a third-person perspective using vision (experiment 1) or haptics (experiment 2), with hands presented at various orientations within a horizontal plane. In the first-person perspective task, most participants responded more slowly as hand orientation increasingly deviated from the participant's upright orientation, regardless of modality. In the visual third-person perspective task, most participants responded more slowly as hand orientation increasingly deviated from the experimenter's upright orientation; in contrast, less than half of the participants produced this same inverted U-shaped response-time function haptically. In experiment 3, participants were explicitly instructed to adopt a third-person perspective haptically by mentally rotating the rubber hand to the experimenter's upright orientation. Most participants produced an inverted U-shaped function. Collectively, these results suggest that humans can accurately assume a third-person perspective when hands are explored haptically or visually. With less explicit instructions, however, the canonical orientation for hand representation may be more strongly influenced haptically than visually by body-based heuristics, and less easily modified by perspective instructions.
Affiliation(s)
- Ryo Kitada
- Division of Cerebral Integration, National Institute for Physiological Sciences, Okazaki, 444-8585, Japan.
10
McGregor TA, Klatzky RL, Hamilton C, Lederman SJ. Haptic Classification of Facial Identity in 2D Displays: Configural versus Feature-Based Processing. IEEE Trans Haptics 2010; 3:48-55. [PMID: 27788089] [DOI: 10.1109/toh.2009.49]
Abstract
Participants learned through feedback to haptically classify the identity of upright versus inverted versus scrambled faces depicted in simple 2D raised-line displays. We investigated whether identity classification would make use of a configural face representation, as is evidenced for vision and 3D haptic facial displays. Upright and scrambled faces produced equivalent accuracy, and both were identified more accurately than inverted faces. The mean magnitude of the haptic inversion effect for 2D facial identity was a sizable 26 percent, indicating that the upright orientation was "privileged" in the haptic representations of facial identity in these 2D displays, as with other facial modalities. However, given the effect of scrambling, we conclude that configural processing was not employed; rather, only local information about the features was used, the features being treated as oriented objects within a body-centered frame of reference. The results indicate a fundamental difference between haptic identification of 2D facial depictions and 3D faces, paralleling a corresponding difference in recognition of nonface objects.
11
Abstract
Previous investigations of visual object recognition have found that some views of both familiar and unfamiliar objects promote more efficient recognition performance than other views. These views are considered canonical and are often the views that present the most information about an object's 3-D structure and features in the image. Although objects can also be efficiently recognised with touch alone, little is known about whether some views promote more efficient recognition than others. This may seem unlikely, given that an object's structure and features are readily available to the hand during exploration. We conducted two experiments to investigate whether canonical views exist in haptic object recognition. In the first, participants were required to position each object in the way that would present the best view for learning the object with touch alone. We found a large degree of consistency in viewpoint position across participants for both familiar and unfamiliar objects. In a second experiment, we found that these consistent, or canonical, views promoted better haptic recognition performance than other, random views of the objects. Interestingly, these haptic canonical views were not necessarily the same as the canonical views normally found in visual perception. Nevertheless, our findings support the idea that the visual and tactile systems are functionally equivalent in terms of how objects are represented in memory and subsequently recognised.
Affiliation(s)
- Andrew T Woods
- School of Psychology and Institute of Neuroscience, Lloyd Building, Trinity College Dublin, Dublin 2, Ireland
12
Dopjans L, Wallraven C, Bulthoff HH. Cross-Modal Transfer in Visual and Haptic Face Recognition. IEEE Trans Haptics 2009; 2:236-40. [PMID: 27788108] [DOI: 10.1109/toh.2009.18]
Abstract
We report four psychophysical experiments investigating cross-modal transfer in visual and haptic face recognition. We found surprisingly good haptic performance and cross-modal transfer for both modalities. Interestingly, transfer was asymmetric depending on which modality was learned first. These findings are discussed in relation to haptic object processing and face processing.
13
Lederman SJ, Klatzky RL, Rennert-May E, Lee JH, Ng K, Hamilton C. Haptic Processing of Facial Expressions of Emotion in 2D Raised-Line Drawings. IEEE Trans Haptics 2008; 1:27-38. [PMID: 27788083] [DOI: 10.1109/toh.2008.3]
Abstract
Participants haptically (vs. visually) classified universal facial expressions of emotion (FEEs) depicted in simple 2D raised-line displays. Experiments 1 and 2 established that haptic classification was well above chance; face-inversion effects further indicated that the upright orientation was privileged. Experiment 2 added a third condition in which the normal configuration of the upright features was spatially scrambled. Results confirmed that configural processing played a critical role, since upright FEEs were classified more accurately and confidently than either scrambled or inverted FEEs, which did not differ. Because accuracy in both scrambled and inverted conditions was above chance, feature processing also played a role, as confirmed by commonalities across confusions for upright, inverted, and scrambled faces. Experiment 3 required participants to visually and haptically assign emotional valence (positive/negative) and magnitude to upright and inverted 2D FEE displays. While emotional magnitude could be assigned using either modality, haptic presentation led to more variable valence judgments. We also documented a new face-inversion effect for emotional valence visually, but not haptically. These results suggest emotions can be interpreted from 2D displays presented haptically as well as visually; however, emotional impact is judged more reliably by vision than by touch. Potential applications of this work are also considered.
14
Husk JS, Bennett PJ, Sekuler AB. Inverting houses and textures: investigating the characteristics of learned inversion effects. Vision Res 2007; 47:3350-9. [PMID: 17988706] [DOI: 10.1016/j.visres.2007.09.017]
Abstract
Faces, more than other objects, are identified more accurately when upright than inverted. This inversion effect may be linked to differences in expertise. Here, we explore how stimulus characteristics and expertise interact to determine the magnitude of inversion effects. Observers were trained to identify houses or textures. Inversion effects were not found with either stimulus before training, but were found following 5 days of practice. Additionally, the learning-induced inversion effects showed partial transfer to novel exemplars. Although similar amounts of learning were observed with both types of stimuli, inversion effects were significantly larger for textures. Our results suggest that the size of the inversion effect is not a reliable index of face-specific processing.
Affiliation(s)
- Jesse S Husk
- McMaster University, Department of Psychology, Neuroscience, and Behaviour, 1280 Main St. West, Hamilton, Ont., Canada L8S 4K1.