1
Perceptual Biases as the Side Effect of a Multisensory Adaptive System: Insights from Verticality and Self-Motion Perception. Vision (Basel) 2022; 6:53. [PMID: 36136746] [PMCID: PMC9502132] [DOI: 10.3390/vision6030053]
Abstract
Perceptual biases can be interpreted as adverse consequences of optimal processes which otherwise improve system performance. This review investigates inaccuracies in multisensory perception by focusing on the perception of verticality and self-motion, where the vestibular sensory modality has a prominent role. Perception of verticality indicates how the system processes gravity and thus represents an indirect measurement of vestibular perception. Head tilts can lead to biases in perceived verticality, interpreted as the influence of a vestibular prior set at the most common head orientation relative to gravity (i.e., upright), which improves precision when upright (e.g., for fall avoidance). Studies on the perception of verticality across development and in the presence of blindness show that acquisition of this prior is mediated by visual experience, unveiling the fundamental role of visuo-vestibular interconnections across development. Such multisensory interactions can be tested behaviorally with cross-modal aftereffect paradigms, which assess whether adaptation in one sensory modality induces biases in another, thereby revealing an interconnection between the tested modalities. Such phenomena indicate the presence of multisensory neural mechanisms that constantly calibrate the self-motion-dedicated sensory modalities against each other and against the environment. Biases in vestibular perception thus reveal how the brain optimally adapts to environmental demands, such as spatial navigation and steady changes in the surroundings.
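The prior-influence account in this abstract can be illustrated with a minimal Gaussian cue-combination sketch (illustrative only; the function name and all parameter values are assumptions, not taken from the review): a noisy tilt signal is combined with a prior centered at upright, so large head tilts are pulled toward 0°.

```python
def bayes_vertical_estimate(true_tilt_deg, sigma_likelihood, sigma_prior, prior_mean=0.0):
    """Posterior-mean tilt estimate under a Gaussian likelihood and a Gaussian
    prior centered at upright (0 deg): a precision-weighted average."""
    w_lik = 1.0 / sigma_likelihood ** 2   # precision of the sensory (vestibular) signal
    w_pri = 1.0 / sigma_prior ** 2        # precision of the 'upright' prior
    return (w_lik * true_tilt_deg + w_pri * prior_mean) / (w_lik + w_pri)

# No bias when upright, where prior and signal agree.
print(round(bayes_vertical_estimate(0.0, 10.0, 20.0), 6))   # 0.0
# A 60 deg head tilt is pulled toward upright (a verticality bias).
print(round(bayes_vertical_estimate(60.0, 10.0, 20.0), 6))  # 48.0
```

The same arithmetic captures the trade-off the review describes: the prior sharpens estimates near upright at the cost of systematic bias when tilted.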
2
Parker AN, Wallis GM, Obergrussberger R, Siebeck UE. Categorical face perception in fish: How a fish brain warps reality to dissociate "same" from "different". J Comp Neurol 2020; 528:2919-2928. [PMID: 32406088] [DOI: 10.1002/cne.24947]
Abstract
Categorical perception (CP) is the phenomenon by which a smoothly varying stimulus property undergoes a nonlinear transformation during processing in the brain. Consequently, the stimuli are perceived as belonging to distinct categories separated by a sharp boundary. Originally thought to be largely innate, the discovery of CP in tasks such as novel image discrimination has piqued the interest of cognitive scientists because it provides compelling evidence that learning can shape a category's perceptual boundaries. CP has been studied particularly closely in human face perception. In nonprimates, there is evidence for CP in sound and color discrimination, but not in image or face discrimination. Here, we investigate the potential for learned CP in a lower vertebrate, the damselfish Pomacentrus amboinensis. Specifically, we tested whether the ability of these fish to discriminate complex facial patterns tracked categorical rather than metric differences in the stimuli. We first trained the fish to discriminate sets of two facial patterns. Next, we morphed between these patterns and determined the just noticeable difference (JND) between a morph and the original image. Finally, we tested for CP by analyzing the fish's ability to discriminate pairs of JND stimuli along the spectrum of morphs between two original images. Discrimination performance was significant for the image pair straddling the boundary between categories, and at chance for equivalent stimulus pairs on either side, thus producing the classic "category boundary" effect. Our results reveal how perception can be influenced in a top-down manner even in the absence of a visual cortex.
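The category-boundary logic tested here can be sketched with a toy model (the sigmoid warp, its steepness, and all numbers are illustrative assumptions, not the fish data): if a physical morph level is passed through a steep nonlinear "category warp", equal physical steps produce the largest perceptual difference where they straddle the boundary.

```python
import math

def perceived(morph, boundary=0.5, steepness=12.0):
    """Nonlinear 'category warp': physical morph level (0..1) -> percept."""
    return 1.0 / (1.0 + math.exp(-steepness * (morph - boundary)))

def perceptual_distance(m1, m2):
    """Discriminability proxy: distance between warped percepts."""
    return abs(perceived(m1) - perceived(m2))

step = 0.2  # one JND-sized physical step along the morph continuum
within_a = perceptual_distance(0.1, 0.1 + step)  # both stimuli on category-A side
straddle = perceptual_distance(0.4, 0.4 + step)  # pair straddles the boundary
within_b = perceptual_distance(0.7, 0.7 + step)  # both stimuli on category-B side

# The same physical step is far more discriminable at the boundary.
print(within_a < straddle and within_b < straddle)  # True
```

This is exactly the "category boundary" effect pattern: above-chance discrimination across the boundary, near-chance for equivalent within-category pairs.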
Affiliation(s)
- Amira N Parker, School of Biomedical Sciences, The University of Queensland, Brisbane, Queensland, Australia
- Guy M Wallis, School of Human Movement and Nutrition Sciences, The University of Queensland, Brisbane, Queensland, Australia
- Rainer Obergrussberger, School of Biomedical Sciences, The University of Queensland, Brisbane, Queensland, Australia
- Ulrike E Siebeck, School of Biomedical Sciences, The University of Queensland, Brisbane, Queensland, Australia
3
Abstract
In a glance, observers can evaluate gist characteristics from crowds of faces, such as the average emotional tenor or the average family resemblance. Prior research suggests that high-level ensemble percepts rely on holistic and viewpoint-invariant information. However, feature-based analysis may have been sufficient to yield successful ensemble percepts in many of those situations. To confirm that ensemble percepts can be extracted holistically, we asked observers to report the average emotional valence of Mooney face crowds. Mooney faces are two-tone, shadow-defined images that cannot be recognized in a part-based manner. To recognize features in a Mooney face, one must first recognize the image as a face by processing it holistically. Across experiments, we demonstrated that observers successfully extracted the average emotional valence from crowds that were spatially distributed or viewed in a rapid temporal sequence. In a subsequent set of experiments, we maximized holistic processing by including only those Mooney faces that were difficult to recognize when inverted. Under these conditions, participants remained highly sensitive to the average emotional valence of Mooney face crowds. Taken together, these experiments provide evidence that ensemble perception can operate selectively on holistic representations of human faces, even when feature-based information is not readily available.
4
Abstract
The accurate perception of human crowds is integral to social understanding and interaction. Previous studies have shown that observers are sensitive to several crowd characteristics such as average facial expression, gender, identity, joint attention, and heading direction. In two experiments, we examined ensemble perception of crowd speed using standard point-light walkers (PLW). Participants were asked to estimate the average speed of a crowd consisting of 12 figures moving at different speeds. In Experiment 1, trials of intact PLWs alternated with trials of scrambled PLWs with a viewing duration of 3 seconds. We found that ensemble processing of crowd speed could rely on local motion alone, although a globally intact configuration enhanced performance. In Experiment 2, observers estimated the average speed of intact-PLW crowds that were displayed at reduced viewing durations across five blocks of trials (between 2500 ms and 500 ms). Estimation of fast crowds was precise and accurate regardless of viewing duration, and we estimated that three to four walkers could still be integrated at 500 ms. For slow crowds, we found a systematic deterioration in performance as viewing time reduced, and performance at 500 ms could not be distinguished from a single-walker response strategy. Overall, our results suggest that rapid and accurate ensemble perception of crowd speed is possible, although sensitive to the precise speed range examined.
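The estimate that "three to four walkers could still be integrated" invites a subsampling-observer sketch (illustrative; the speed values, trial count, and random seed are assumptions, not the study's data): an observer who averages only k randomly sampled walkers makes a noisier crowd-speed estimate the smaller k is.

```python
import random
import statistics

def subsample_estimate(speeds, k, rng):
    """Estimate crowd mean speed by averaging k randomly sampled walkers."""
    return statistics.mean(rng.sample(speeds, k))

def mean_abs_error(speeds, k, trials=2000, seed=0):
    """Average absolute estimation error of the k-walker subsampling observer."""
    rng = random.Random(seed)
    true_mean = statistics.mean(speeds)
    errs = [abs(subsample_estimate(speeds, k, rng) - true_mean) for _ in range(trials)]
    return statistics.mean(errs)

# Twelve walkers moving at different speeds (arbitrary units).
speeds = [0.6, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.8, 2.0]

# Integrating more walkers yields a more precise ensemble estimate.
print(mean_abs_error(speeds, 1) > mean_abs_error(speeds, 4) > mean_abs_error(speeds, 12))  # True
```

Fitting the observed estimation error against such curves is one common way to infer how many items an observer effectively pooled.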
5
Abstract
Ensemble perception refers to awareness of average properties, e.g. size, of “noisy” elements that often comprise visual arrays in natural scenes. Here, we asked how ensemble perception is influenced when some but not all array elements are associated with monetary reward. Previous studies show that reward associations can speed object processing, facilitate selection, and enhance working-memory maintenance, suggesting they may bias ensemble judgments. To investigate, participants reported the average element size of briefly presented arrays of different-sized circles. In the learning phase, all circles had the same color, but different colors produced high or low performance-contingent rewards. Then, in an unrewarded test phase, arrays comprised three spatially intermixed subsets, each with a different color, including the high-reward color. In different trials, the mean size of the subset with the high-reward color was smaller than, larger than, or the same as the ensemble mean. Ensemble size estimates were significantly biased by the high-reward-associated subset, showing that value associations modulate ensemble perception. In the test phase of a second experiment, a pattern mask appeared immediately after array presentation to limit top-down processing. Not only was the value bias eliminated, but ensemble accuracy also improved, suggesting that value associations distort the consciously available ensemble representation via late, high-level processing.
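The reported bias can be mimicked by an ensemble model that overweights the reward-associated subset when averaging (a sketch under assumed weights and element sizes, not the authors' model): the report drifts from the true grand mean toward the mean of the overweighted subset.

```python
def ensemble_estimate(subsets, weights):
    """Weighted average of element sizes; overweighting one color subset biases the report.

    subsets: {color: list of element sizes}; weights: {color: attentional weight}.
    """
    total_w = sum(weights[c] * len(subsets[c]) for c in subsets)
    return sum(weights[c] * s for c in subsets for s in subsets[c]) / total_w

# Three color subsets of circle sizes (arbitrary units); true grand mean is 1.0.
subsets = {"high_reward": [1.2, 1.2, 1.2], "low1": [0.9, 0.9, 0.9], "low2": [0.9, 0.9, 0.9]}
equal  = {"high_reward": 1.0, "low1": 1.0, "low2": 1.0}  # unbiased pooling
biased = {"high_reward": 2.0, "low1": 1.0, "low2": 1.0}  # reward subset overweighted

print(ensemble_estimate(subsets, equal))   # ≈ 1.0 (accurate ensemble mean)
print(ensemble_estimate(subsets, biased))  # ≈ 1.05, pulled toward the rewarded subset
```

The masking result then corresponds to forcing the weights back toward equality by cutting off late re-weighting.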
6
Kuang S. Dissociating Sensory and Cognitive Biases in Human Perceptual Decision-Making: A Re-evaluation of Evidence From Reference Repulsion. Front Hum Neurosci 2019; 13:409. [PMID: 31803038] [PMCID: PMC6873209] [DOI: 10.3389/fnhum.2019.00409]
Abstract
Our perception of the world is governed by a combination of bottom-up sensory and top-down cognitive processes. This often raises the question of whether a perceptual phenomenon originates from sensory or cognitive processes in the brain. For instance, reference repulsion, a compelling visual illusion in which subjective estimates of the direction of a motion stimulus are biased away from a reference boundary, was previously thought to originate at the sensory level. Recent studies, however, suggest that the misperception is not sensory in nature but rather reflects post-perceptual cognitive biases. Here I challenge the post-perceptual interpretations on both empirical and conceptual grounds. I argue that these new findings are not incompatible with the sensory account and can be explained more parsimoniously as reflecting the consequences of motion representations in different reference frames. Finally, I propose one concrete experiment with testable predictions to shed more light on the sensory vs. cognitive nature of this visual illusion.
Affiliation(s)
- Shenbing Kuang, State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
7
Hasantash M, Lafer-Sousa R, Afraz A, Conway BR. Paradoxical impact of memory on color appearance of faces. Nat Commun 2019; 10:3010. [PMID: 31285438] [PMCID: PMC6614425] [DOI: 10.1038/s41467-019-10073-8]
Abstract
What is color vision for? Here we compared the extent to which memory modulates color appearance of objects and faces. Participants matched the colors of stimuli illuminated by low-pressure sodium light, which renders scenes monochromatic. Matches for fruit were not predicted by stimulus identity. In contrast, matches for faces were predictable, but surprising: faces appeared green and looked sick. The paradoxical face-color percept could be explained by a Bayesian observer model constrained by efficient coding. The color-matching data suggest that the face-color prior is established by visual signals arising from the recently evolved L-M cone system, not the older S-cone channel. Taken together, the results show that when retinal mechanisms of color vision are impaired, the impact of memory on color perception is greatest for face color, supporting the idea that trichromatic color plays an important role in social communication.
Affiliation(s)
- Maryam Hasantash, Institute for Research in Fundamental Sciences, Tehran, P.O. Box 19395-5746, Iran
- Rosa Lafer-Sousa, Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, 02139, USA
- Arash Afraz, National Institute of Mental Health, NIH, Bethesda, MD, 20892, USA
- Bevil R Conway, National Institute of Mental Health, NIH, Bethesda, MD, 20892, USA; National Eye Institute, NIH, Bethesda, MD, 20892, USA
8
Abstract
Perception of a stimulus can be characterized by two fundamental psychophysical measures: how well the stimulus can be discriminated from similar ones (discrimination threshold) and how strongly the perceived stimulus value deviates on average from the true stimulus value (perceptual bias). We demonstrate that perceptual bias and discriminability, as functions of the stimulus value, follow a surprisingly simple mathematical relation. The relation, which is derived from a theory combining optimal encoding and decoding, is well supported by a wide range of reported psychophysical data including perceptual changes induced by contextual modulation. The large empirical support indicates that the proposed relation may represent a psychophysical law in human perception. Our results imply that the computational processes of sensory encoding and perceptual decoding are matched and optimized based on identical assumptions about the statistical structure of the sensory environment.
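One concrete candidate for the "surprisingly simple mathematical relation" (offered as an illustrative reconstruction, since the entry does not print the formula; the exact form is an assumption on my part) is that perceptual bias is proportional to the slope of the squared discrimination threshold:

```latex
b(\theta) \;\propto\; \frac{\mathrm{d}}{\mathrm{d}\theta}\, D(\theta)^{2}
```

Under a relation of this form, bias vanishes where discriminability is locally flat and points away from stimulus values that are discriminated best, which is consistent with the contextual repulsion effects the abstract cites.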
9
Object-substitution masking weakens but does not eliminate shape interactions. Atten Percept Psychophys 2017; 79:2179-2189. [PMID: 28718174] [DOI: 10.3758/s13414-017-1381-y]
Abstract
At any moment, some objects in the environment are seen clearly, whereas others go unnoticed. Whether or not these gaps in awareness are actually problematic may depend on the extent to which information about unseen objects is lost. Determining when and how visual awareness and visual processing become linked is thus of great importance. Previous research using object-substitution masking (OSM) demonstrated that relatively simple visual features, such as size or orientation, are still processed even when they are not visible. Yet this does not appear to be the case for more complex features like faces. This suggests that, during OSM, disruptions of visual processing and awareness may tend to co-occur beginning at some intermediate stage along the ventral pathway. We tested this hypothesis by evaluating the extent to which OSM disrupted the perception and processing of two-dimensional objects. Specifically, we evaluated whether an unseen shape's aspect ratio would influence the appearance of another shape that was briefly visible nearby. As expected, the aspect ratios of two shapes appeared more similar to each other when both were visible. This averaging effect was weakened, but not eliminated, when one ellipse in each pair received OSM; the shape interactions persisted even when one ellipse from each pair was invisible. When combined with previous work, these results suggest that during object-substitution masking, disruptions of visual processing tend to strengthen with increases in stimulus complexity, becoming more tightly bound to the mechanisms of visual awareness at intermediate stages of visual analysis.
10
EEG frequency tagging dissociates between neural processing of motion synchrony and human quality of multiple point-light dancers. Sci Rep 2017; 7:44012. [PMID: 28272421] [PMCID: PMC5341056] [DOI: 10.1038/srep44012]
Abstract
Do we perceive a group of dancers moving in synchrony differently from a group of drones flying in-sync? The brain has dedicated networks for perception of coherent motion and interacting human bodies. However, it is unclear to what extent the underlying neural mechanisms overlap. Here we delineate these mechanisms by independently manipulating the degree of motion synchrony and the humanoid quality of multiple point-light displays (PLDs). Four PLDs moving within a group were changing contrast in cycles of fixed frequencies, which permits the identification of the neural processes that are tagged by these frequencies. In the frequency spectrum of the steady-state EEG we found two emergent frequency components, which signified distinct levels of interactions between PLDs. The first component was associated with motion synchrony, the second with the human quality of the moving items. These findings indicate that visual processing of synchronously moving dancers involves two distinct neural mechanisms: one for the perception of a group of items moving in synchrony and one for the perception of a group of moving items with human quality. We propose that these mechanisms underlie high-level perception of social interactions.
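The "emergent frequency components" logic of frequency tagging can be sketched numerically (the frequencies, sampling rate, and the product nonlinearity are assumptions for illustration, not the study's parameters): when two inputs tagged at f1 and f2 interact nonlinearly, the response spectrum contains combination frequencies such as f1 + f2 that neither input carries on its own.

```python
import numpy as np

fs, dur = 256, 8.0                 # sampling rate (Hz) and duration (s)
t = np.arange(0, dur, 1 / fs)
f1, f2 = 5.0, 7.0                  # tagging frequencies (assumed values)

a = np.sin(2 * np.pi * f1 * t)     # input tagged at f1
b = np.sin(2 * np.pi * f2 * t)     # input tagged at f2

linear = a + b                     # independent processing: no interaction term
nonlinear = a + b + 0.5 * a * b    # interacting processing: multiplicative term

def amp(signal, freq):
    """Amplitude of the FFT bin nearest the requested frequency."""
    spec = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

# An intermodulation component at f1 + f2 emerges only when the inputs interact.
print(amp(linear, f1 + f2) < 1e-9 < amp(nonlinear, f1 + f2))  # True
```

In the steady-state EEG analysis described above, such intermodulation components are the signature used to infer interaction between the tagged point-light displays.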
11
Zhang X, Xu Q, Jiang Y, Wang Y. The interaction of perceptual biases in bistable perception. Sci Rep 2017; 7:42018. [PMID: 28165061] [PMCID: PMC5292733] [DOI: 10.1038/srep42018]
Abstract
When viewing ambiguous stimuli, people tend to perceive some interpretations more frequently than others. Such perceptual biases impose various types of constraints on visual perception and, accordingly, have been assumed to serve distinct adaptive functions. Here we demonstrated the interaction of two functionally distinct biases in bistable biological motion perception: one regulating perception based on the statistics of the environment, the viewing-from-above (VFA) bias, and the other with the potential to reduce costly errors resulting from perceptual inference, the facing-the-viewer (FTV) bias. When compatible, the two biases reinforced each other, enhancing bias strength and inducing fewer perceptual reversals than when they were in conflict. In the conflicting condition, by contrast, the biases competed with each other, with the dominant percept varying with visual cues that modulate the two biases separately in opposite directions. Crucially, the way the two biases interact does not depend on the dominant bias at the individual level and cannot be accounted for by a single bias alone. These findings provide compelling evidence that humans robustly integrate biases with different adaptive functions in visual perception. It may be evolutionarily advantageous to dynamically reweight diverse biases in the sensory context to resolve perceptual ambiguity.
Affiliation(s)
- Xue Zhang, State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Beijing 100101, P. R. China; University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing 100049, P. R. China
- Qian Xu, State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Beijing 100101, P. R. China; University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing 100049, P. R. China
- Yi Jiang, State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Beijing 100101, P. R. China; University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing 100049, P. R. China
- Ying Wang, State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Beijing 100101, P. R. China; University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing 100049, P. R. China
12
Sweeny TD, Whitney D. The center of attention: Metamers, sensitivity, and bias in the emergent perception of gaze. Vision Res 2017; 131:67-74. [PMID: 28057579] [DOI: 10.1016/j.visres.2016.10.014]
Abstract
A person's gaze reveals much about their focus of attention and intentions. Sensitive perception of gaze is thus highly relevant for social interaction, especially when it is directed toward the viewer. Yet observers also tend to overestimate the likelihood that gaze is directed toward them. How might the visual system balance these competing goals, maximizing sensitivity for discriminating gazes that are relatively direct, while at the same time allowing many gazes to appear as if they look toward the viewer? Perceiving gaze is an emergent visual process that involves integrating information from the eyes with the rotation of the head. Here, we examined whether the visual system leverages emergent representation to balance these competing goals. We measured perceived gaze for a large range of pupil and head combinations and found that head rotation has a nonlinear influence on a person's apparent direction of looking, especially when pupil rotations are relatively direct. These perceptual distortions could serve to expand representational space and thereby enhance discriminability of gazes that are relatively direct. We also found that the emergent perception of gaze supports an abundance of direct gaze metamers-different combinations of head and pupil rotations that combine to generate the appearance of gaze directed toward the observer. Our results thus demonstrate a way in which the visual system flexibly integrates information from facial features to optimize social perception. Many gazes can be made to look toward you, yet similar gazes need not appear alike.
Affiliation(s)
- David Whitney, Vision Science Group, University of California - Berkeley, United States; Department of Psychology, University of California - Berkeley, United States
13
Sweeny TD, Wurnitsch N, Gopnik A, Whitney D. Ensemble perception of size in 4-5-year-old children. Dev Sci 2015; 18:556-68. [PMID: 25442844] [PMCID: PMC5282927] [DOI: 10.1111/desc.12239]
Abstract
Groups of objects are nearly everywhere we look. Adults can perceive and understand the 'gist' of multiple objects at once, engaging ensemble-coding mechanisms that summarize a group's overall appearance. Are these group-perception mechanisms in place early in childhood? Here, we provide the first evidence that 4-5-year-old children use ensemble coding to perceive the average size of a group of objects. Children viewed a pair of trees, with each containing a group of differently sized oranges. We found that, in order to determine which tree had the larger oranges overall, children integrated the sizes of multiple oranges into ensemble representations. This pooling occurred rapidly, and it occurred despite conflicting information from numerosity, continuous extent, density, and contrast. An ideal observer analysis showed that although children's integration mechanisms are sensitive, they are not yet as efficient as adults'. Overall, our results provide a new insight into the way children see and understand the environment, and they illustrate the fundamental nature of ensemble coding in visual perception.
Affiliation(s)
- Alison Gopnik, Department of Psychology, University of California – Berkeley
- David Whitney, Department of Psychology, University of California – Berkeley; Vision Science Group, University of California – Berkeley
14
Sweeny TD, Whitney D. Perceiving crowd attention: ensemble perception of a crowd's gaze. Psychol Sci 2014; 25:1903-13. [PMID: 25125428] [DOI: 10.1177/0956797614544510]
Abstract
In nearly every interpersonal encounter, people readily gather socio-visual cues to guide their behavior. Intriguingly, social information is most effective in directing behavior when it is perceived in crowds. For example, the shared gaze of a crowd is more likely to direct attention than is a single person's gaze. Are people equipped with mechanisms to perceive a crowd's gaze as an ensemble? Here, we provide the first evidence that the visual system extracts a summary representation of a crowd's attention; observers rapidly pooled information from multiple crowd members to perceive the direction of a group's collective gaze. This pooling occurred in high-level stages of visual processing, with gaze perceived as a global-level combination of information from head and pupil rotation. These findings reveal an important and efficient mechanism for assessing crowd gaze, which could underlie the ability to perceive group intentions, orchestrate joint attention, and guide behavior.
Affiliation(s)
- David Whitney, Department of Psychology, University of California, Berkeley; Vision Science Group, University of California, Berkeley
15
Szpiro SFA, Spering M, Carrasco M. Perceptual learning modifies untrained pursuit eye movements. J Vis 2014; 14:8. [PMID: 25002412] [DOI: 10.1167/14.8.8]
Abstract
Perceptual learning improves detection and discrimination of relevant visual information in mature humans, revealing sensory plasticity. Whether visual perceptual learning affects motor responses is unknown. Here we implemented a protocol that enabled us to address this question. We tested a perceptual response (motion direction estimation, in which observers overestimate motion direction away from a reference) and a motor response (voluntary smooth pursuit eye movements). Perceptual training led to greater overestimation and, remarkably, it modified untrained smooth pursuit. In contrast, pursuit training did not affect overestimation in either pursuit or perception, even though observers in both training groups were exposed to the same stimuli for the same time period. A second experiment revealed that estimation training also improved discrimination, indicating that overestimation may optimize perceptual sensitivity. Hence, active perceptual training is necessary to alter perceptual responses, and an acquired change in perception suffices to modify pursuit, a motor response.
Affiliation(s)
- Sarit F A Szpiro, Department of Psychology, New York University, New York, NY, USA
- Miriam Spering, Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada; Brain Research Centre, University of British Columbia, Vancouver, Canada
- Marisa Carrasco, Department of Psychology, New York University, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA
16
Yang X, Cai P, Jiang Y. Effects of walker gender and observer gender on biological motion walking direction discrimination. Psych J 2014; 3:169-76. [DOI: 10.1002/pchj.53]
Affiliation(s)
- Xiaoying Yang, State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Peng Cai, State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Yi Jiang, State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
17
Wang Y, Jiang Y. Integration of 3D structure from disparity into biological motion perception independent of depth awareness. PLoS One 2014; 9:e89238. [PMID: 24586622] [PMCID: PMC3931706] [DOI: 10.1371/journal.pone.0089238]
Abstract
Images projected onto the retinas of our two eyes come from slightly different directions in the real world, constituting binocular disparity that serves as an important source for depth perception - the ability to see the world in three dimensions. It remains unclear whether the integration of disparity cues into visual perception depends on the conscious representation of stereoscopic depth. Here we report evidence that, even without inducing discernible perceptual representations, the disparity-defined depth information could still modulate the visual processing of 3D objects in depth-irrelevant aspects. Specifically, observers who could not discriminate disparity-defined in-depth facing orientations of biological motions (i.e., approaching vs. receding) due to an excessive perceptual bias nevertheless exhibited a robust perceptual asymmetry in response to the indistinguishable facing orientations, similar to those who could consciously discriminate such 3D information. These results clearly demonstrate that the visual processing of biological motion engages the disparity cues independent of observers' depth awareness. The extraction and utilization of binocular depth signals thus can be dissociable from the conscious representation of 3D structure in high-level visual perception.
Affiliation(s)
- Ying Wang, State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Yi Jiang, State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
18
Sweeny TD, Wurnitsch N, Gopnik A, Whitney D. Sensitive perception of a person's direction of walking by 4-year-old children. Dev Psychol 2013; 49:2120-4. [PMID: 23356524] [PMCID: PMC4305363] [DOI: 10.1037/a0031714]
Abstract
Watch any crowded intersection, and you will see how adept people are at reading the subtle movements of one another. While adults can readily discriminate small differences in the direction of a moving person, it is unclear whether this sensitivity is in place early in development. Here, we present evidence that 4-year-old children are sensitive to small differences in a person's direction of walking (∼7°), far beyond what has been previously shown. This sensitivity only occurred for perception of an upright walker, consistent with the recruitment of high-level visual areas. Even at 4 years of age, children's sensitivity approached that of adults. This suggests that the sophisticated mechanisms adults use to perceive a person's direction of movement are in place and developing early in childhood. Although the neural mechanisms for perceiving biological motion develop slowly, they are refined enough by age 4 to support subtle perceptual judgments of heading. These judgments may be useful for predicting a person's future location or even their intentions and goals.
|
19
|
Wang L, Yang X, Shi J, Jiang Y. The feet have it: local biological motion cues trigger reflexive attentional orienting in the brain. Neuroimage 2013; 84:217-24. [PMID: 23994124 DOI: 10.1016/j.neuroimage.2013.08.041] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2013] [Revised: 07/25/2013] [Accepted: 08/18/2013] [Indexed: 11/25/2022] Open
Abstract
Most vertebrates, humans included, have a primitive visual system extremely sensitive to the motion of biological entities. Most previous studies have examined the global aspects of biological motion perception, but local motion processing has received much less attention. Here we provide direct psychophysical and electrophysiological evidence that human observers are intrinsically tuned to the characteristics of local biological motion cues independent of global configuration. Using a modified central cueing paradigm, we show that observers involuntarily orient their attention towards the walking direction of feet motion sequences, which triggers an early directing attention negativity (EDAN) in the occipito-parietal region 100–160 ms after stimulus onset. Notably, such effects are sensitive to the orientation of the local cues and are independent of whether the observers are aware of the biological nature of the motion. Our findings unambiguously demonstrate the automatic processing of local biological motion without explicit recognition. More importantly, with the discovery that local biological motion signals modulate attention, we highlight the functional importance of such processing in the brain.
Affiliation(s)
- Li Wang
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Beijing 100101, PR China
|
20
|
Schouten B, Davila A, Verfaillie K. Further explorations of the facing bias in biological motion perception: perspective cues, observer sex, and response times. PLoS One 2013; 8:e56978. [PMID: 23468898 PMCID: PMC3584127 DOI: 10.1371/journal.pone.0056978] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2012] [Accepted: 01/16/2013] [Indexed: 11/28/2022] Open
Abstract
The human visual system has evolved to be highly sensitive to visual information about other persons and their movements, as is illustrated by the effortless perception of point-light figures or ‘biological motion’. When presented orthographically, a point-light walker is interpreted in two anatomically plausible ways: as ‘facing the viewer’ or as ‘facing away’ from the viewer. However, human observers show a ‘facing bias’: they perceive such a point-light walker as facing towards them in about 70-80% of the cases. In studies exploring the role of social and biological relevance as a possible account for the facing bias, we found a ‘figure gender effect’: male point-light figures elicit a stronger facing bias than female point-light figures. Moreover, we also found an ‘observer gender effect’: the ‘figure gender effect’ was stronger for male than for female observers. In the present study, we presented point-light walkers to 11 male and 11 female observers, subtly manipulating the perspective information by modifying the previously reported ‘perspective technique’. Proportions of ‘facing the viewer’ responses and reaction times were recorded. Results show that human observers, even in the absence of local shape or size cues, easily pick up on perspective cues, confirming recent demonstrations of high visual sensitivity to cues on whether another person is potentially approaching. We also found a consistent difference in how male and female observers respond to stimulus variations (figure gender or perspective cues) that cause variations in the perceived in-depth orientation of a point-light walker. Thus, the ‘figure gender effect’ is possibly caused by changes in the relative locations and motions of the dots that the perceptual system tends to interpret as perspective cues. Finally, reaction time measures confirmed the existence of the facing bias and corroborated recent research showing faster detection of approaching than receding biological motion.
Affiliation(s)
- Ben Schouten
- Laboratory of Experimental Psychology, KU Leuven, Leuven, Belgium.
|