1. Saccone EJ, Tian M, Bedny M. Developing cortex is functionally pluripotent: Evidence from blindness. Dev Cogn Neurosci 2024; 66:101360. PMID: 38394708; PMCID: PMC10899073; DOI: 10.1016/j.dcn.2024.101360.
Abstract
How rigidly does innate architecture constrain the function of developing cortex? What is the contribution of early experience? We review insights into these questions from visual cortex function in people born blind. In blindness, occipital cortices are active during auditory and tactile tasks. What this 'cross-modal' plasticity tells us about cortical flexibility is debated. On the one hand, visual networks of blind people respond to higher-cognitive information, such as sentence grammar, suggesting drastic repurposing. On the other, in line with 'metamodal' accounts, sighted and blind populations show shared domain preferences in ventral occipito-temporal cortex (vOTC), suggesting visual areas switch input modality but perform the same or similar perceptual functions (e.g., face recognition) in blindness. Here we bring these disparate literatures together, reviewing and synthesizing evidence that speaks to whether visual cortices have similar or different functions in blind and sighted people. Together, the evidence suggests that in blindness, visual cortices are incorporated into higher-cognitive (e.g., fronto-parietal) networks, which are a major source of long-range input to the visual system. We propose a connectivity-constrained, experience-dependent account: functional development is constrained by innate anatomical connectivity, experience, and behavioral needs. Infant cortex is pluripotent, and the same anatomical constraints can develop into different functional outcomes.
Affiliation(s)
- Elizabeth J Saccone: Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA.
- Mengyu Tian: Center for Educational Science and Technology, Beijing Normal University at Zhuhai, China.
- Marina Bedny: Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA.
2. Lederman SJ, Klatzky RL, Abramowicz A, Salsman K, Kitada R, Hamilton C. Haptic Recognition of Static and Dynamic Expressions of Emotion in the Live Face. Psychol Sci 2007; 18:158-64. PMID: 17425537; DOI: 10.1111/j.1467-9280.2007.01866.x.
Abstract
If humans can detect the wealth of tactile and haptic information potentially available in live facial expressions of emotion (FEEs), they should be capable of haptically recognizing the six universal expressions of emotion (anger, disgust, fear, happiness, sadness, and surprise) at levels well above chance. We tested this hypothesis in the experiments reported here. With minimal training, subjects' overall mean accuracy was 51% for static FEEs (Experiment 1) and 74% for dynamic FEEs (Experiment 2). All FEEs except static fear were successfully recognized above the chance level of 16.7%. Complementing these findings, overall confidence and information transmission were higher for dynamic than for corresponding static faces. Our performance measures (accuracy and confidence ratings, plus response latency in Experiment 2 only) confirmed that happiness, sadness, and surprise were all highly recognizable, and anger, disgust, and fear less so.
3. Lacey S, Sathian K. Visuo-haptic multisensory object recognition, categorization, and representation. Front Psychol 2014; 5:730. PMID: 25101014; PMCID: PMC4102085; DOI: 10.3389/fpsyg.2014.00730.
Abstract
Visual and haptic unisensory object processing show many similarities in terms of categorization, recognition, and representation. In this review, we discuss how these similarities contribute to multisensory object processing. In particular, we show that similar unisensory visual and haptic representations lead to a shared multisensory representation underlying both cross-modal object recognition and view-independence. This shared representation suggests a common neural substrate, and we review several candidate brain regions, previously thought to be specialized for aspects of visual processing, that are now known also to be involved in analogous haptic tasks. Finally, we lay out the evidence for a model of multisensory object recognition in which top-down and bottom-up pathways to the object-selective lateral occipital complex are modulated by object familiarity and individual differences in object and spatial imagery.
Affiliation(s)
- Simon Lacey: Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA.
- K Sathian: Department of Neurology; Department of Rehabilitation Medicine; and Department of Psychology, Emory University School of Medicine, Atlanta, GA, USA; Rehabilitation Research and Development Center of Excellence, Atlanta Veterans Affairs Medical Center, Decatur, GA, USA.
4. Does my face FIT? A face image task reveals structure and distortions of facial feature representation. PLoS One 2013; 8:e76805. PMID: 24130790; PMCID: PMC3793930; DOI: 10.1371/journal.pone.0076805.
Abstract
Despite extensive research on face perception, few studies have investigated individuals' knowledge about the physical features of their own face. In this study, 50 participants indicated the location of key features of their own face, relative to an anchor point corresponding to the tip of the nose, and the results were compared to the true location of the same individual's features from a standardised photograph. Horizontal and vertical errors were analysed separately. An overall bias to underestimate vertical distances revealed a distorted face representation, with reduced face height. Factor analyses were used to identify separable subconfigurations of facial features with correlated localisation errors. Independent representations of upper and lower facial features emerged from the data pattern. The major source of variation across individuals was in representation of face shape, with a spectrum from tall/thin to short/wide representation. Visual identification of one's own face is excellent, and facial features are routinely used for establishing personal identity. However, our results show that spatial knowledge of one's own face is remarkably poor, suggesting that face representation may not contribute strongly to self-awareness.
5. Fernandes AM, Albuquerque PB. Tactual perception: a review of experimental variables and procedures. Cogn Process 2012; 13:285-301. PMID: 22669262; DOI: 10.1007/s10339-012-0443-2.
Abstract
This paper reviews the literature on tactual perception. Throughout the review, we highlight some of the most relevant aspects of the touch literature: type of stimuli, type of participants, type of tactile exploration, and the interaction between touch and other senses. Regarding type of stimuli, we analyse studies with abstract stimuli such as vibrations, with two- and three-dimensional stimuli, and with concrete stimuli, considering the relation between familiar and unfamiliar stimuli and the haptic perception of faces. Under type of participants, we separate studies with blind participants from studies with children and adults, and provide an overview of sex differences in performance. The type of tactile exploration is examined in terms of active versus passive touch, the relevance of movement in touch, and the relation between haptic exploration and time. Finally, interactions between touch and vision, touch and smell, and touch and taste are explored in the last topic. The review ends with an overall conclusion on the state of the art in the tactual perception literature. With this work, we intend to present an organised overview of the main variables in touch experiments, compiling aspects reported in the tactual literature and attempting to provide both a summary of previous findings and a guide to the design of future work on tactual perception and memory, through a presentation of implications from previous studies.
6. Picard D, Jouffrais C, Lebaz S. Haptic Recognition of Emotions in Raised-Line Drawings by Congenitally Blind and Sighted Adults. IEEE Trans Haptics 2011; 4:67-71. PMID: 26962956; DOI: 10.1109/toh.2010.58.
Abstract
Fifteen sighted and 15 congenitally blind adults classified raised-line pictures of emotional faces through haptics. Whereas accuracy did not vary significantly between the two groups, the blind adults were faster at the task. These results suggest that raised-line pictures of emotional faces are intelligible to blind adults.
7. Kitada R, Dijkerman HC, Soo G, Lederman SJ. Representing human hands haptically or visually from first-person versus third-person perspectives. Perception 2010; 39:236-54. PMID: 20402245; DOI: 10.1068/p6535.
Abstract
Humans can recognise human body parts haptically as well as visually. We employed a mental-rotation task to determine whether participants could adopt a third-person perspective when judging the laterality of life-like human hands. Female participants adopted either a first-person or a third-person perspective using vision (experiment 1) or haptics (experiment 2), with hands presented at various orientations within a horizontal plane. In the first-person perspective task, most participants responded more slowly as hand orientation increasingly deviated from the participant's upright orientation, regardless of modality. In the visual third-person perspective task, most participants responded more slowly as hand orientation increasingly deviated from the experimenter's upright orientation; in contrast, less than half of the participants produced this same inverted U-shaped response-time function haptically. In experiment 3, participants were explicitly instructed to adopt a third-person perspective haptically by mentally rotating the rubber hand to the experimenter's upright orientation. Most participants produced an inverted U-shaped function. Collectively, these results suggest that humans can accurately assume a third-person perspective when hands are explored haptically or visually. With less explicit instructions, however, the canonical orientation for hand representation may be more strongly influenced haptically than visually by body-based heuristics, and less easily modified by perspective instructions.
Affiliation(s)
- Ryo Kitada: Division of Cerebral Integration, National Institute for Physiological Sciences, Okazaki, 444-8585, Japan.
8. McGregor TA, Klatzky RL, Hamilton C, Lederman SJ. Haptic Classification of Facial Identity in 2D Displays: Configural versus Feature-Based Processing. IEEE Trans Haptics 2010; 3:48-55. PMID: 27788089; DOI: 10.1109/toh.2009.49.
Abstract
Participants learned through feedback to haptically classify the identity of upright versus inverted versus scrambled faces depicted in simple 2D raised-line displays. We investigated whether identity classification would make use of a configural face representation, as is evidenced for vision and 3D haptic facial displays. Upright and scrambled faces produced equivalent accuracy, and both were identified more accurately than inverted faces. The mean magnitude of the haptic inversion effect for 2D facial identity was a sizable 26 percent, indicating that the upright orientation was 'privileged' in the haptic representations of facial identity in these 2D displays, as with other facial modalities. However, given the effect of scrambling, we conclude that configural processing was not employed; rather, only local information about the features was used, the features being treated as oriented objects within a body-centered frame of reference. The results indicate a fundamental difference between haptic identification of 2D facial depictions and 3D faces, paralleling a corresponding difference in recognition of nonface objects.
9.
Abstract
Previous investigations of visual object recognition have found that some views of both familiar and unfamiliar objects promote more efficient recognition performance than other views. These views are considered canonical and are often the views that present the most information about an object's 3-D structure and features in the image. Although objects can also be efficiently recognised with touch alone, little is known about whether some views promote more efficient recognition than others. This may seem unlikely, given that the object structure and features are readily available to the hand during object exploration. We conducted two experiments to investigate whether canonical views exist in haptic object recognition. In the first, participants were required to position each object in a way that would present the best view for learning the object with touch alone. We found a large degree of consistency of viewpoint position across participants for both familiar and unfamiliar objects. In a second experiment, we found that these consistent, or canonical, views promoted better haptic recognition performance than other random views of the objects. Interestingly, these haptic canonical views were not necessarily the same as the canonical views normally found in visual perception. Nevertheless, our findings provide support for the idea that the visual and tactile systems are functionally equivalent in terms of how objects are represented in memory and subsequently recognised.
Affiliation(s)
- Andrew T Woods: School of Psychology and Institute of Neuroscience, Lloyd Building, Trinity College Dublin, Dublin 2, Ireland.
10. Dopjans L, Wallraven C, Bülthoff HH. Cross-Modal Transfer in Visual and Haptic Face Recognition. IEEE Trans Haptics 2009; 2:236-240. PMID: 27788108; DOI: 10.1109/toh.2009.18.
Abstract
We report four psychophysical experiments investigating cross-modal transfer in visual and haptic face recognition. We found surprisingly good haptic performance and cross-modal transfer for both modalities. Interestingly, transfer was asymmetric depending on which modality was learned first. These findings are discussed in relation to haptic object processing and face processing.
11. Lederman SJ, Klatzky RL, Rennert-May E, Lee JH, Ng K, Hamilton C. Haptic Processing of Facial Expressions of Emotion in 2D Raised-Line Drawings. IEEE Trans Haptics 2008; 1:27-38. PMID: 27788083; DOI: 10.1109/toh.2008.3.
Abstract
Participants haptically (vs. visually) classified universal facial expressions of emotion (FEEs) depicted in simple 2D raised-line displays. Experiments 1 and 2 established that haptic classification was well above chance; face-inversion effects further indicated that the upright orientation was privileged. Experiment 2 added a third condition in which the normal configuration of the upright features was spatially scrambled. Results confirmed that configural processing played a critical role, since upright FEEs were classified more accurately and confidently than either scrambled or inverted FEEs, which did not differ. Because accuracy in both scrambled and inverted conditions was above chance, feature processing also played a role, as confirmed by commonalities across confusions for upright, inverted, and scrambled faces. Experiment 3 required participants to visually and haptically assign emotional valence (positive/negative) and magnitude to upright and inverted 2-D FEE displays. While emotional magnitude could be assigned using either modality, haptic presentation led to more variable valence judgments. We also documented a new face-inversion effect for emotional valence visually, but not haptically. These results suggest emotions can be interpreted from 2-D displays presented haptically as well as visually; however, emotional impact is judged more reliably by vision than by touch. Potential applications of this work are also considered.
12. Lacey S, Peters A, Sathian K. Cross-modal object recognition is viewpoint-independent. PLoS One 2007; 2:e890. PMID: 17849019; PMCID: PMC1964535; DOI: 10.1371/journal.pone.0000890.
Abstract
BACKGROUND: Previous research suggests that visual and haptic object recognition are viewpoint-dependent both within- and cross-modally. However, this conclusion may not be generally valid, as it was reached using objects oriented along their extended y-axis, resulting in differential surface processing in vision and touch. In the present study, we removed this differential by presenting objects along the z-axis, thus making all object surfaces more equally available to vision and touch.
METHODOLOGY/PRINCIPAL FINDINGS: Participants studied previously unfamiliar objects, in groups of four, using either vision or touch. Subsequently, they performed a four-alternative forced-choice object identification task with the studied objects presented in both unrotated and rotated (180 degrees about the x-, y-, and z-axes) orientations. Rotation impaired within-modal recognition accuracy in both vision and touch, but not cross-modal recognition accuracy. Within-modally, visual recognition accuracy was reduced by rotation about the x- and y-axes more than the z-axis, whilst haptic recognition was equally affected by rotation about all three axes. Cross-modal (but not within-modal) accuracy correlated with spatial (but not object) imagery scores.
CONCLUSIONS/SIGNIFICANCE: The viewpoint-independence of cross-modal object identification points to its mediation by a high-level abstract representation. The correlation between spatial imagery scores and cross-modal performance suggests that construction of this high-level representation is linked to the ability to perform spatial transformations. Within-modal viewpoint-dependence appears to have a different basis in vision than in touch, possibly due to surface occlusion being important in vision but not touch.
Affiliation(s)
- Simon Lacey: Department of Neurology, Emory University, Atlanta, Georgia, United States of America.
- Andrew Peters: Department of Neurology, Emory University, Atlanta, Georgia, United States of America.
- K. Sathian: Department of Neurology; Department of Rehabilitation Medicine; and Department of Psychology, Emory University, Atlanta, Georgia, United States of America; Rehabilitation Research and Development Center of Excellence, Atlanta Veterans Affairs Medical Center, Decatur, Georgia, United States of America.
13. Bülthoff I, Newell FN. The role of familiarity in the recognition of static and dynamic objects. Prog Brain Res 2007; 154:315-25. PMID: 17010720; DOI: 10.1016/s0079-6123(06)54017-8.
Abstract
Although the perception of our world is experienced as effortless, the processes that underlie object recognition in the brain are often difficult to determine. In this chapter, we review the effects of familiarity on the recognition of moving and static objects. In particular, we concentrate on exemplar-level stimuli such as walking humans, unfamiliar objects, and faces. We found that the perception of these objects can be affected by their familiarity; for example, the learned view of an object or the learned dynamic pattern can influence object perception. Deviations from the familiar viewpoint, or changes in the temporal pattern of the objects, can result in some reduction in the efficiency of object perception. Furthermore, more efficient sex categorization and crossmodal matching were found for familiar than for unfamiliar faces. In sum, we find that our perceptual system is organized around familiar events and that perception is most efficient with these learned events.
Affiliation(s)
- Isabelle Bülthoff: Max-Planck-Institut für biologische Kybernetik, Spemannstrasse 38, D-72076 Tübingen, Germany.
14. Casey SJ, Newell FN. Are representations of unfamiliar faces independent of encoding modality? Neuropsychologia 2006; 45:506-13. PMID: 16597451; DOI: 10.1016/j.neuropsychologia.2006.02.011.
Abstract
It is well documented that both featural and configural information are important in visual face recognition. Less is known, however, about the nature of the information underlying haptic face recognition and whether or not this information is the same as in vision. In our experiments we found better within-modal than crossmodal face recognition performance, suggesting that face representations are largely specific to each modality. Moreover, this cost in crossmodal performance was found to be independent of differences in exploratory procedures across the modalities during encoding. We found that crossmodal face perception was most efficient when configural information of the facial features was preserved, suggesting that configural information is shared across modalities. Our findings suggest that face information is processed in a similar manner across vision and touch, but that qualitative differences in the nature of the information encoded underlie the efficiency of within-modal relative to crossmodal recognition.
Affiliation(s)
- Sarah J Casey: School of Psychology and Institute of Neuroscience, Trinity College, Dublin, Ireland.
15. Kilgour AR, Kitada R, Servos P, James TW, Lederman SJ. Haptic face identification activates ventral occipital and temporal areas: An fMRI study. Brain Cogn 2005; 59:246-57. PMID: 16157435; DOI: 10.1016/j.bandc.2005.07.004.
Abstract
Many studies in visual face recognition have supported a special role for the right fusiform gyrus. Despite the fact that faces can also be recognized haptically, little is known about the neural correlates of haptic face recognition. In the current fMRI study, neurologically intact participants were intensively trained to identify specific facemasks (molded from live faces) and specific control objects. When these stimuli were presented in the scanner, facemasks activated left fusiform and right hippocampal/parahippocampal areas (and other regions) more than control objects, whereas the latter produced no activity greater than the facemasks. We conclude that these ventral occipital and temporal areas may play an important role in the haptic identification of faces at the subordinate level. We further speculate that left fusiform gyrus may be recruited more for facemasks than for control objects because of the increased need for sequential processing by the haptic system.
Affiliation(s)
- Andrea R Kilgour: Department of Psychology, Queen's University, Kingston, Ont., Canada K7L 3N6.