1. Chaotic neural dynamics facilitate probabilistic computations through sampling. Proc Natl Acad Sci U S A 2024; 121:e2312992121. [PMID: 38648479 PMCID: PMC11067032 DOI: 10.1073/pnas.2312992121]
Abstract
Cortical neurons exhibit highly variable responses over trials and time. Theoretical work posits that this variability may arise from chaotic network dynamics of recurrently connected neurons. Here, we demonstrate that chaotic neural dynamics, formed through synaptic learning, allow networks to perform sensory cue integration in a sampling-based implementation. We show that the emergent chaotic dynamics provide neural substrates for generating samples not only of a static variable but also of a dynamical trajectory, and that generic recurrent networks acquire these abilities with a biologically plausible learning rule through trial and error. Furthermore, the networks generalize their experience from stimulus-evoked samples to inference when part or all of the sensory information is missing, suggesting a computational role for spontaneous activity as a representation of priors, as well as a tractable biological computation of marginal distributions. These findings suggest that chaotic neural dynamics may serve the brain as a Bayesian generative model.

2. The integration of head and body cues during the perception of social interactions. Q J Exp Psychol (Hove) 2024; 77:776-788. [PMID: 37232389 PMCID: PMC10960325 DOI: 10.1177/17470218231181001]
Abstract
Humans spend a large proportion of time participating in social interactions. The ability to accurately detect and respond to human interactions is vital for social functioning, from early childhood through to older adulthood. This detection ability arguably relies on integrating sensory information from the interactants. Within the visual modality, directional information from a person's eyes, head, and body is integrated to inform where another person is looking and who they are interacting with. To date, social cue integration research has focused largely on the perception of isolated individuals. Across two experiments, we investigated whether observers integrate body information with head information when determining whether two people are interacting, and manipulated frame of reference (one of the interactants facing the observer vs. facing away from the observer) and the eye-region visibility of the interactant. Results demonstrate that individuals integrate information from the body with head information when perceiving dyadic interactions, and that integration is influenced by the frame of reference and visibility of the eye-region. Interestingly, self-reported autistic traits were associated with a stronger influence of body information on interaction perception, but only when the eye-region was visible. This study investigated the recognition of dyadic interactions using whole-body stimuli while manipulating eye visibility and frame of reference, and provides crucial insights into social cue integration, as well as how autistic traits affect cue integration, during perception of social interactions.

3. Correction to: 'A model of cue integration as vector summation in the insect brain' (2023) Mitchell et al. Proc Biol Sci 2023; 290:20231993. [PMID: 37728276 PMCID: PMC10510441 DOI: 10.1098/rspb.2023.1993]

4. Multisensory causal inference is feature-specific, not object-based. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220345. [PMID: 37545302 PMCID: PMC10404918 DOI: 10.1098/rstb.2022.0345]
Abstract
Multisensory integration depends on causal inference about the sensory signals. We tested whether implicit causal-inference judgements pertain to entire objects or focus on task-relevant object features. Participants in our study judged virtual visual, haptic and visual-haptic surfaces with respect to two features-slant and roughness-against an internal standard in a two-alternative forced-choice task. Modelling of participants' responses revealed that the degree to which their perceptual judgements were based on integrated visual-haptic information varied unsystematically across features. For example, a perceived mismatch between visual and haptic roughness would not deter the observer from integrating visual and haptic slant. These results indicate that participants based their perceptual judgements on a feature-specific selection of information, suggesting that multisensory causal inference proceeds not at the object level but at the level of single object features. This article is part of the theme issue 'Decision and control processes in multisensory perception'.

5. A model of cue integration as vector summation in the insect brain. Proc Biol Sci 2023; 290:20230767. [PMID: 37357865 PMCID: PMC10291719 DOI: 10.1098/rspb.2023.0767]
Abstract
Ball-rolling dung beetles are known to integrate multiple cues in order to facilitate their straight-line orientation behaviour. Recent work has suggested that orientation cues are integrated according to a vector sum, that is, compass cues are represented by vectors and summed to give a combined orientation estimate. Further, cue weight (vector magnitude) appears to be set according to cue reliability. This is consistent with the popular Bayesian view of cue integration: cues are integrated to reduce or minimize an agent's uncertainty about the external world. Integration of orientation cues is believed to occur at the input to the insect central complex. Here, we demonstrate that a model of the head direction circuit of the central complex, including plasticity in input synapses, can act as a substrate for cue integration as vector summation. Further, we show that cue influence is not necessarily driven by cue reliability. Finally, we present a dung beetle behavioural experiment which, in combination with simulation, strongly suggests that these beetles do not weight cues according to reliability. We suggest an alternative strategy whereby cues are weighted according to relative contrast, which can also explain previous results.
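A compact way to write the vector-sum scheme described in this abstract (a generic sketch of a weighted sum of two compass cues, not notation taken from the paper): each cue k contributes a vector with direction θ_k and magnitude (weight) w_k, and the combined estimate is the direction of their sum.

```latex
\hat{\theta} = \arg\!\big( w_1 e^{i\theta_1} + w_2 e^{i\theta_2} \big),
\qquad
\text{combined magnitude} = \big| w_1 e^{i\theta_1} + w_2 e^{i\theta_2} \big|
```

Under the Bayesian reading, w_k would track cue reliability; the behavioural result reported here instead points to weights set by relative contrast.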

6. The case against probabilistic inference: a new deterministic theory of 3D visual processing. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210458. [PMID: 36511407 PMCID: PMC9745883 DOI: 10.1098/rstb.2021.0458]
Abstract
How the brain derives 3D information from inherently ambiguous visual input remains the fundamental question of human vision. The past two decades of research have addressed this question as a problem of probabilistic inference, the dominant model being maximum-likelihood estimation (MLE). This model assumes that independent depth-cue modules derive noisy but statistically accurate estimates of 3D scene parameters that are combined through a weighted average. Cue weights are adjusted based on the system's representation of each module's output variability. Here I demonstrate that the MLE model fails to account for important psychophysical findings and, importantly, misinterprets the just noticeable difference, a hallmark measure of stimulus discriminability, as an estimate of perceptual uncertainty. I propose a new theory, termed Intrinsic Constraint, which postulates that the visual system does not derive the most probable interpretation of the visual input, but rather, the most stable interpretation amid variations in viewing conditions. This goal is achieved with the Vector Sum model, which represents individual cue estimates as components of a multi-dimensional vector whose norm determines the combined output. This model accounts for the psychophysical findings cited in support of MLE, while predicting existing and new findings that contradict the MLE model. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
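For reference, the two combination rules being contrasted can be sketched in their generic textbook forms (standard formulations, not equations quoted from the paper): MLE combines cue estimates S_1, S_2 by reliability-weighted averaging, whereas the Vector Sum model takes the norm of the cue-estimate vector as the combined output.

```latex
\hat{S}_{\mathrm{MLE}} = w_1 \hat{S}_1 + w_2 \hat{S}_2,
\qquad
w_i = \frac{1/\sigma_i^2}{1/\sigma_1^2 + 1/\sigma_2^2},
\qquad
\sigma_{\mathrm{MLE}}^2 = \Big( \tfrac{1}{\sigma_1^2} + \tfrac{1}{\sigma_2^2} \Big)^{-1}

\hat{S}_{\mathrm{VS}} \;\propto\; \big\lVert (\hat{S}_1, \hat{S}_2, \ldots, \hat{S}_n) \big\rVert
  = \sqrt{\textstyle\sum_i \hat{S}_i^2}
```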

7.
Abstract
The central complex of the insect midbrain is thought to coordinate insect guidance strategies. Computational models can account for specific behaviours, but their applicability across sensory and task domains remains untested. Here, we assess the capacity of our previous model (Sun et al. 2020) of visual navigation to generalise to olfactory navigation and its coordination with other guidance strategies in flies and ants. We show that fundamental to this capacity is the use of a biologically plausible neural copy-and-shift mechanism that ensures sensory information is presented in a format compatible with the insect steering circuit regardless of its source. Moreover, the same mechanism is shown to allow the transfer of cues from unstable/egocentric to stable/geocentric frames of reference, providing a first account of the mechanism by which foraging insects robustly recover from environmental disturbances. We propose that these circuits can be flexibly repurposed by different insect navigators to address their unique ecological needs.

8.
Abstract
Predictions of one's future memory performance-judgements of learning (JOLs)-are based on the cues that learners regard as diagnostic of memory performance. One of these cues is word frequency or how often words are experienced in the language. It is not clear, however, whether word frequency would affect JOLs when other cues are also available. The current study aims to close this gap by testing whether objective and subjective word frequency affect JOLs in the presence of font size as an additional cue. Across three experiments, participants studied words that varied in word frequency (Experiment 1: high and low objective frequency; Experiment 2: a whole continuum from high to low objective frequency; Experiment 3: high and low subjective and objective frequency) and were presented in a large (48pt) or a small (18pt) font size, made JOLs, and completed a free recall test. Results showed that people based their JOLs on both word frequency and font size. We conclude that word frequency is an important cue that affects metamemory even in multiple-cue situations.

9. Distance-tuned neurons drive specialized path integration calculations in medial entorhinal cortex. Cell Rep 2021; 36:109669. [PMID: 34496249 PMCID: PMC8437084 DOI: 10.1016/j.celrep.2021.109669]
Abstract
During navigation, animals estimate their position using path integration and landmarks, engaging many brain areas. Whether these areas follow specialized or universal cue integration principles remains incompletely understood. We combine electrophysiology with virtual reality to quantify cue integration across thousands of neurons in three navigation-relevant areas: primary visual cortex (V1), retrosplenial cortex (RSC), and medial entorhinal cortex (MEC). Compared with V1 and RSC, path integration influences position estimates more in MEC, and conflicts between path integration and landmarks trigger remapping more readily. Whereas MEC codes position prospectively, V1 codes position retrospectively, and RSC is intermediate between the two. Lowered visual contrast increases the influence of path integration on position estimates only in MEC. These properties are most pronounced in a population of MEC neurons, overlapping with grid cells, tuned to distance run in darkness. These results demonstrate the specialized role that path integration plays in MEC compared with other navigation-relevant cortical areas.

10. Taking others into account: combining directly experienced and indirect information in schizophrenia. Brain 2021; 144:1603-1614. [PMID: 33829262 DOI: 10.1093/brain/awab065]
Abstract
An abnormality in inference, resulting in distorted internal models of the world, has been argued to be a common mechanism underlying the heterogeneous psychopathology in schizophrenia. However, findings have been mixed as to wherein the abnormality lies and have typically failed to find convincing relations to symptoms. The limited and inconsistent findings may have been due to methodological limitations of the experimental design, such as conflating other factors (e.g. comprehension) with the inferential process of interest, and a failure to adequately assess and model the key aspects of the inferential process. Here, we investigated probabilistic inference based on multiple sources of information using a new digital version of the beads task, framed in a social context. Thirty-five patients with schizophrenia or schizoaffective disorder with a wide range of symptoms and 40 matched healthy control subjects performed the task, where they guessed the colour of the next marble drawn from a jar based on a sample from the jar as well as the choices and the expressed confidence of four people, each with their own independent sample (which was hidden from participant view). We relied on theoretically motivated computational models to assess which model best captured the inferential process and investigated whether it could serve as a mechanistic model for both psychotic and negative symptoms. We found that 'circular inference' best described the inference process, where patients over-weighed and overcounted direct experience and under-weighed information from others. Crucially, overcounting of direct experience was uniquely associated with most psychotic and negative symptoms. In addition, patients with worse social cognitive function had more difficulties using others' confidence to inform their choices. This difficulty was related to worse real-world functioning. The findings could not be easily ascribed to differences in working memory, executive function, intelligence or antipsychotic medication. These results suggest hallucinations, delusions and negative symptoms could stem from a common underlying abnormality in inference, where directly experienced information is assigned an unreasonable weight and taken into account multiple times. By this, even unreliable first-hand experiences may gain disproportionate significance. The effect could lead to false perceptions (hallucinations), false beliefs (delusions) and deviant social behaviour (e.g. loss of interest in others, bizarre and inappropriate behaviour). This may be particularly problematic for patients with social cognitive deficits, as they may fail to make use of corrective information from others, ultimately leading to worse social functioning.

11. A Psychophysical Window onto the Subjective Experience of Compulsion. Brain Sci 2021; 11:182. [PMID: 33540916 PMCID: PMC7913241 DOI: 10.3390/brainsci11020182]
Abstract
In this perspective, we follow the idea that an integration of cognitive models with sensorimotor theories of compulsion is required to understand the subjective experience of compulsive action. We argue that cognitive biases in obsessive-compulsive disorder may obscure an altered momentary, pre-reflective experience of sensorimotor control, whose detection thus requires an implicit experimental operationalization. We propose that a classic psychophysical test exists that provides this implicit operationalization, i.e., the intentional binding paradigm. We show how intentional binding can pit two ideas against each other that are fundamental to current sensorimotor theories of compulsion, i.e., the idea of excessive conscious monitoring of action, and the idea that patients with obsessive-compulsive disorder compensate for diminished conscious access to "internal states", including states of the body, by relying on more readily observable proxies. Following these ideas, we develop concrete, testable hypotheses on how intentional binding changes under the assumption of different sensorimotor theories of compulsion. Furthermore, we demonstrate how intentional binding provides a touchstone for predictive coding accounts of obsessive-compulsive disorder. A thorough empirical test of the hypotheses developed in this perspective could help explain the puzzling, disabling phenomenon of compulsion, with implications for the normal subjective experience of human action.

12. Dynamic Relationship between Sense of Agency and Post-Stroke Sensorimotor Deficits: A Longitudinal Case Study. Brain Sci 2020; 10:294. [PMID: 32429071 PMCID: PMC7288005 DOI: 10.3390/brainsci10050294]
Abstract
Post-stroke sensorimotor deficits impair voluntary movements. This impairment may alter a person's sense of agency, which is the awareness of controlling one's actions. A previous study showed that post-stroke patients incorrectly aligned themselves with others' movements and proposed that their misattributions might be associated with their sensorimotor deficits. To investigate this hypothesis, the present study compared the agency dynamics of post-stroke patient A (PA), who had sensorimotor deficits and rarely used her paretic upper limb in daily life, with those of patient B (PB), who had a paretic upper limb with almost normal function and activity. At the second, fourth, and eighth weeks following their strokes, PA and PB completed experiments in which they performed horizontal movements while receiving visual feedback and judged whether the visual feedback represented their own or another's movements. PB made no misattributions in any week, whereas PA made incorrect self-attributions of another's movements at the fourth week. Interestingly, this misattribution decreased noticeably at the eighth week, when PA, with an improved paretic upper limb, used her limb almost as much as before her stroke. These results suggest that the sense of agency changes with the severity of the sensorimotor deficit and the activity of the paretic upper limb.

13.
Abstract
Neurophysiological studies of multisensory processing have largely focused on how the brain integrates information from different sensory modalities to form a coherent percept. However, in the natural environment, an important extra step is needed: the brain faces the problem of causal inference, which involves determining whether different sources of sensory information arise from the same environmental cause, such that integrating them is advantageous. Behavioral and computational studies have provided a strong foundation for studying causal inference, but studies of its neural basis have only recently been undertaken. This review focuses on recent advances regarding how the brain infers the causes of sensory inputs and uses this information to make robust perceptual estimates.

14. Integration of Motion and Form Cues for the Perception of Self-Motion in the Human Brain. J Neurosci 2020; 40:1120-1132. [PMID: 31826945 DOI: 10.1523/jneurosci.3225-18.2019]
Abstract
When moving around in the world, the human visual system uses both motion and form information to estimate the direction of self-motion (i.e., heading). However, little is known about cortical areas in charge of this task. This brain-imaging study addressed this question by using visual stimuli consisting of randomly distributed dot pairs oriented toward a locus on a screen (the form-defined focus of expansion [FoE]) but moved away from a different locus (the motion-defined FoE) to simulate observer translation. We first fixed the motion-defined FoE location and shifted the form-defined FoE location. We then made the locations of the motion- and the form-defined FoEs either congruent (at the same location in the display) or incongruent (on the opposite sides of the display). The motion- or the form-defined FoE shift was the same in the two types of stimuli, but the perceived heading direction shifted for the congruent, but not for the incongruent stimuli. Participants (both sexes) made a task-irrelevant (contrast discrimination) judgment during scanning. Searchlight and ROI-based multivoxel pattern analysis revealed that early visual areas V1, V2, and V3 responded to either the motion- or the form-defined FoE shift. After V3, only the dorsal areas V3a and V3B/KO responded to such shifts. Furthermore, area V3B/KO shows a significantly higher decoding accuracy for the congruent than the incongruent stimuli. Our results provide direct evidence showing that area V3B/KO does not simply respond to motion and form cues but integrates these two cues for the perception of heading.SIGNIFICANCE STATEMENT Human survival relies on accurate perception of self-motion. The visual system uses both motion (optic flow) and form cues for the perception of the direction of self-motion (heading). Although human brain areas for processing optic flow and form structure are well identified, the areas responsible for integrating these two cues for the perception of self-motion remain unknown. We conducted fMRI experiments and used multivoxel pattern analysis technique to find human brain areas that can decode the shift in heading specified by each cue alone and the two cues combined. We found that motion and form cues are first processed in the early visual areas and then are likely integrated in the higher dorsal area V3B/KO for the final estimation of heading.

15. Re-weighting of Sound Localization Cues by Audiovisual Training. Front Neurosci 2019; 13:1164. [PMID: 31802997 PMCID: PMC6873890 DOI: 10.3389/fnins.2019.01164]
Abstract
Sound localization requires the integration in the brain of auditory spatial cues generated by interactions with the external ears, head and body. Perceptual learning studies have shown that the relative weighting of these cues can change in a context-dependent fashion if their relative reliability is altered. One factor that may influence this process is vision, which tends to dominate localization judgments when both modalities are present and induces a recalibration of auditory space if they become misaligned. It is not known, however, whether vision can alter the weighting of individual auditory localization cues. Using virtual acoustic space stimuli, we measured changes in subjects’ sound localization biases and binaural localization cue weights after ∼50 min of training on audiovisual tasks in which visual stimuli were either informative or not about the location of broadband sounds. Four different spatial configurations were used in which we varied the relative reliability of the binaural cues: interaural time differences (ITDs) and frequency-dependent interaural level differences (ILDs). In most subjects and experiments, ILDs were weighted more highly than ITDs before training. When visual cues were spatially uninformative, some subjects showed a reduction in auditory localization bias and the relative weighting of ILDs increased after training with congruent binaural cues. ILDs were also upweighted if they were paired with spatially-congruent visual cues, and the largest group-level improvements in sound localization accuracy occurred when both binaural cues were matched to visual stimuli. These data suggest that binaural cue reweighting reflects baseline differences in the relative weights of ILDs and ITDs, but is also shaped by the availability of congruent visual stimuli. Training subjects with consistently misaligned binaural and visual cues produced the ventriloquism aftereffect, i.e., a corresponding shift in auditory localization bias, without affecting the inter-subject variability in sound localization judgments or their binaural cue weights. Our results show that the relative weighting of different auditory localization cues can be changed by training in ways that depend on their reliability as well as the availability of visual spatial information, with the largest improvements in sound localization likely to result from training with fully congruent audiovisual information.

16.
Abstract
People base judgements about their own memory processes on probabilistic cues such as the characteristics of study materials and study conditions. While research has largely focused on how single cues affect metamemory judgements, a recent study by Undorf, Söllner, and Bröder found that multiple cues affected people's predictions of their future memory performance (judgements of learning, JOLs). The present research tested whether this finding was indeed due to strategic integration of multiple cues in JOLs or, alternatively, resulted from people's reliance on a single unified feeling of ease. In Experiments 1 and 2, we simultaneously varied concreteness and emotionality of word pairs and solicited (a) pre-study JOLs that could be based only on the manipulated cues and (b) immediate JOLs that could be based both on the manipulated cues and on a feeling of ease. The results revealed similar amounts of cue integration in pre-study JOLs and immediate JOLs, regardless of whether cues varied in two easily distinguishable levels (Experiment 1) or on a continuum (Experiment 2). This suggested that people strategically integrated multiple cues in their immediate JOLs. Experiment 3 provided further evidence for this conclusion by showing that false explicit information about cue values affected immediate JOLs over and above actual cue values. Hence, we conclude that cue integration in JOLs involves strategic processes.

17. Border ownership-dependent tilt aftereffect for shape defined by binocular disparity and motion parallax. J Neurophysiol 2019; 121:1917-1923. [PMID: 30917072 PMCID: PMC6589706 DOI: 10.1152/jn.00111.2019]
Abstract
Discerning objects from their surrounds (i.e., figure-ground segmentation) in a way that guides adaptive behaviors is a fundamental task of the brain. Neurophysiological work has revealed a class of cells in the macaque visual cortex that may be ideally suited to support this neural computation: border ownership cells (Zhou H, Friedman HS, von der Heydt R. J Neurosci 20: 6594–6611, 2000). These orientation-tuned cells appear to respond conditionally to the borders of objects. A behavioral correlate supporting the existence of these cells in humans was demonstrated with two-dimensional luminance-defined objects (von der Heydt R, Macuda T, Qiu FT. J Opt Soc Am A Opt Image Sci Vis 22: 2222–2229, 2005). However, objects in our natural visual environments are often signaled by complex cues, such as motion and binocular disparity. Thus for border ownership systems to effectively support figure-ground segmentation and object depth ordering, they must have access to information from multiple depth cues with strict depth order selectivity. Here we measured in humans (of both sexes) border ownership-dependent tilt aftereffects after adaptation to figures defined by either motion parallax or binocular disparity. We find that both depth cues produce a tilt aftereffect that is selective for figure-ground depth order. Furthermore, we find that the effects of adaptation are transferable between cues, suggesting that these systems may combine depth cues to reduce uncertainty (Bülthoff HH, Mallot HA. J Opt Soc Am A 5: 1749–1758, 1988). These results suggest that border ownership mechanisms have strict depth order selectivity and access to multiple depth cues that are jointly encoded, providing compelling psychophysical support for their role in figure-ground segmentation in natural visual environments. NEW & NOTEWORTHY Figure-ground segmentation is a critical function that may be supported by “border ownership” neural systems that conditionally respond to object borders. We measured border ownership-dependent tilt aftereffects to figures defined by motion parallax or binocular disparity and found aftereffects for both cues. These effects were transferable between cues but selective for figure-ground depth order, suggesting that the neural systems supporting figure-ground segmentation have strict depth order selectivity and access to multiple depth cues that are jointly encoded.

18. Integrating Voice Quality Cues in the Pitch Perception of Speech and Non-speech Utterances. Front Psychol 2018; 9:2147. [PMID: 30555365 PMCID: PMC6281971 DOI: 10.3389/fpsyg.2018.02147]
Abstract
Pitch perception plays a crucial role in speech processing. Since F0 is highly ambiguous and variable in the speech signal, effective pitch-range perception is important in perceiving the intended linguistic pitch targets. This study argues that the effectiveness of pitch-range perception can be achieved by taking advantage of other signal-internal information that co-varies with F0, such as voice quality cues. This study provides direct perceptual evidence that voice quality cues as an indicator of pitch ranges can effectively affect the pitch-height perception. A series of forced-choice pitch classification experiments with four spectral conditions were conducted to investigate the degree to which manipulating spectral slope affects pitch-height perception. Both non-speech and speech stimuli were investigated. The results suggest that the pitch classification function is significantly shifted under different spectral conditions. Listeners are likely to perceive a higher pitch when the spectrum has higher high-frequency energy (i.e., tenser phonation). The direction of the shift is consistent with the correlation between voice quality and pitch range. Moreover, cue integration is affected by the speech mode, where listeners are more sensitive to relative difference within an utterance when hearing speech stimuli. This study generally supports the hypothesis that voice quality is an important enhancement cue for pitch range.

19. Using Motivated Cue Integration Theory to Understand a Moment-by-Moment Transformative Change: A New Look at the Focusing Technique. Front Hum Neurosci 2018; 12:307. [PMID: 30154705 PMCID: PMC6103000 DOI: 10.3389/fnhum.2018.00307]

20. Figure and ground: how the visual cortex integrates local cues for global organization. J Neurophysiol 2018; 120:3085-3098. [PMID: 30044171 DOI: 10.1152/jn.00125.2018]
Abstract
Inferring figure-ground organization in two-dimensional images may require different complementary strategies. For isolated objects, it has been shown that mechanisms in visual cortex exploit the overall distribution of contours, but in images of cluttered scenes where the grouping of contours is not obvious, that strategy would fail. However, natural scenes contain local features, specifically contour junctions, that may contribute to the definition of object regions. To study the role of local features in the assignment of border ownership, we recorded single-cell activity from visual cortex in awake behaving Macaca mulatta. We tested configurations perceived as two overlapping figures in which T- and L-junctions depend on the direction of overlap, whereas the overall distribution of contours provides no valid information. While recording responses to the occluding contour, we varied direction of overlap and variably masked some of the critical contour features to determine their influences and their interactions. On average, most features influenced the responses consistently, producing either enhancement or suppression depending on border ownership. Different feature types could have opposite effects even at the same location. Features far from the receptive field produced effects as strong as near features and with the same short latency. Summation was highly nonlinear: any single feature produced more than two-thirds of the effect of all features together. These findings reveal fast and highly specific organization mechanisms, supporting a previously proposed model in which "grouping cells" integrate widely distributed edge signals with specific end-stopped signals to modulate the original edge signals by feedback. NEW & NOTEWORTHY Seeing objects seems effortless, but defining objects in a scene requires sophisticated neural mechanisms. For isolated objects, the visual cortex groups contours based on overall distribution, but this strategy does not work for cluttered scenes. Here, we demonstrate mechanisms that integrate local contour features like T- and L-junctions to resolve clutter. The process is fast, evaluates widely distributed features, and gives any single feature a decisive influence on figure-ground representation.

21. Why Early Tactile Speech Aids May Have Failed: No Perceptual Integration of Tactile and Auditory Signals. Front Psychol 2018; 9:767. [PMID: 29875719 PMCID: PMC5974558 DOI: 10.3389/fpsyg.2018.00767]
Abstract
Tactile speech aids, though extensively studied in the 1980’s and 1990’s, never became a commercial success. A hypothesis to explain this failure might be that it is difficult to obtain true perceptual integration of a tactile signal with information from auditory speech: exploitation of tactile cues from a tactile aid might require cognitive effort and so prevent speech understanding at the high rates typical of everyday speech. To test this hypothesis, we attempted to create true perceptual integration of tactile with auditory information in what might be considered the simplest situation encountered by a hearing-impaired listener. We created an auditory continuum between the syllables /BA/ and /VA/, and trained participants to associate /BA/ to one tactile stimulus and /VA/ to another tactile stimulus. After training, we tested if auditory discrimination along the continuum between the two syllables could be biased by incongruent tactile stimulation. We found that such a bias occurred only when the tactile stimulus was above, but not when it was below its previously measured tactile discrimination threshold. Such a pattern is compatible with the idea that the effect is due to a cognitive or decisional strategy, rather than to truly perceptual integration. We therefore ran a further study (Experiment 2), where we created a tactile version of the McGurk effect. We extensively trained two Subjects over 6 days to associate four recorded auditory syllables with four corresponding apparent motion tactile patterns. In a subsequent test, we presented stimulation that was either congruent or incongruent with the learnt association, and asked Subjects to report the syllable they perceived. We found no analog to the McGurk effect, suggesting that the tactile stimulation was not being perceptually integrated with the auditory syllable. These findings strengthen our hypothesis according to which tactile aids failed because integration of tactile cues with auditory speech occurred at a cognitive or decisional level, rather than truly at a perceptual level.

22. Lexical Segmentation in Artificial Word Learning: The Effects of Converging Sublexical Cues. Lang Speech 2018; 61:3-30. [PMID: 29280405 DOI: 10.1177/0023830917694664]
Abstract
This study examines how French listeners segment and learn new words of artificial languages varying in the presence of different combinations of sublexical segmentation cues. The first experiment investigated the contribution of three different types of sublexical cues (acoustic-phonetic, phonological and prosodic cues) to word learning. The second experiment explored how participants specifically exploited sublexical prosodic cues. Whereas complementary cues signaling word-initial and word-final boundaries had synergistic effects on word learning in the first experiment, the two manipulated prosodic cues redundantly signaling word-final boundaries in the second experiment were rank-ordered with final pitch variations being more weighted than final lengthening. These results are discussed in light of the notions of cue type, cue position and cue efficiency.

23.
Abstract
Gravity is a defining force that governs the evolution of mechanical forms, shapes and anchors our perception of the environment, and imposes fundamental constraints on our interactions with the world. Within the animal kingdom, humans are relatively unique in having evolved a vertical, bipedal posture. Although a vertical posture confers numerous benefits, it also renders us less stable than quadrupeds, increasing susceptibility to falls. The ability to accurately and precisely estimate our orientation relative to gravity is therefore of utmost importance. Here we review sensory information and computational processes underlying gravity estimation and verticality perception. Central to gravity estimation and verticality perception is multisensory cue combination, which serves to improve the precision of perception and resolve ambiguities in sensory representations by combining information from across the visual, vestibular, and somatosensory systems. We additionally review experimental paradigms for evaluating verticality perception, and discuss how particular disorders affect the perception of upright. Together, the work reviewed here highlights the critical role of multisensory cue combination in gravity estimation, verticality perception, and creating stable gravity-centered representations of our environment.

24. Interaction of spatial and non-spatial cues in auditory stream segregation in the European starling. Eur J Neurosci 2017; 51:1191-1200. [PMID: 28922512 DOI: 10.1111/ejn.13716]
Abstract
Integrating sounds from the same source and segregating sounds from different sources in an acoustic scene are essential functions of the auditory system. Naturally, the auditory system simultaneously makes use of multiple cues. Here, we investigate the interaction between spatial cues and frequency cues in stream segregation of European starlings (Sturnus vulgaris) using an objective measure of perception. Neural responses to streaming sounds were recorded while the bird was performing a behavioural task that results in a higher sensitivity during a one-stream than a two-stream percept. Birds were trained to detect an onset time shift of a B tone in an ABA- triplet sequence in which A and B could differ in frequency and/or spatial location. If the frequency difference, the spatial separation between the signal sources, or both were increased, behavioural time-shift detection performance deteriorated. Spatial separation had a smaller effect on performance than the frequency difference, and the two cues affected performance additively. Neural responses in the primary auditory forebrain were affected by the frequency and spatial cues. However, frequency and spatial cue differences that were large enough to elicit behavioural effects did not produce correlated differences in the neural responses. The difference between the neuronal response pattern and the behavioural response is discussed in relation to the task given to the bird. The perceptual effects of combining different cues in auditory scene analysis indicate that these cues are analysed independently and given different weights, suggesting that the streaming percept arises subsequent to initial cue analysis.

25. Auditory Mismatch Negativity in Response to Changes of Counter-Balanced Interaural Time and Level Differences. Front Neurosci 2017; 11:387. [PMID: 28729820 PMCID: PMC5498526 DOI: 10.3389/fnins.2017.00387]
Abstract
Interaural time differences (ITD) and interaural level differences (ILD) both signal horizontal sound source location. To achieve a unified percept of our acoustic environment, these two cues require integration. In the present study, we tested this integration of ITD and ILD with electroencephalography (EEG) by measuring the mismatch negativity (MMN). The MMN can arise in response to spatial changes and is at least partly generated in auditory cortex. In our study, we aimed at testing for an MMN in response to stimuli with counter-balanced ITD/ILD cues. To this end, we employed a roving oddball paradigm with alternating sound sequences in two types of blocks: (a) lateralized stimuli with congruently combined ITD/ILD cues and (b) midline stimuli created by counter-balanced, incongruently combined ITD/ILD cues. We observed a significant MMN peaking at about 112–128 ms after change onset for the congruent ITD/ILD cues, for both lower (0.5 kHz) and higher carrier frequency (4 kHz). More importantly, we also observed significant MMN peaking at about 129 ms for incongruently combined ITD/ILD cues, but this effect was only detectable in the lower frequency range (0.5 kHz). There were no significant differences of the MMN responses for the two types of cue combinations (congruent/incongruent). These results suggest that—at least in the lower frequency ranges (0.5 kHz)—ITD and ILD are processed independently at the level of the MMN in auditory cortex.

26. Cue Integration for Continuous and Categorical Dimensions by Synesthetes. Multisens Res 2017; 30:207-234. [PMID: 31287069 DOI: 10.1163/22134808-00002559]
Abstract
For synesthetes, sensory or cognitive stimuli induce the perception of an additional sensory or cognitive stimulus. Grapheme-color synesthetes, for instance, consciously and consistently experience particular colors (e.g., fluorescent pink) when perceiving letters (e.g., u). As a phenomenon involving multiple stimuli within or across modalities, researchers have posited that synesthetes may integrate sensory cues differently than non-synesthetes. However, findings to date present mixed results concerning this hypothesis, with researchers reporting enhanced, depressed, or normal sensory integration for synesthetes. In this study we quantitatively evaluated the multisensory integration process of synesthetes and non-synesthetes using Bayesian principles, rather than employing multisensory illusions, to make inferences about the sensory integration process. In two studies we investigated synesthetes' sensory integration by comparing human behavior to that of an ideal observer. We found that synesthetes integrated cues for both continuous and categorical dimensions in a statistically optimal manner, matching the sensory integration behavior of controls. These findings suggest that synesthetes and controls utilize similar cue integration mechanisms, despite differences in how they perceive unimodal stimuli.

27. Strength of Intentional Effort Enhances the Sense of Agency. Front Psychol 2016; 7:1165. [PMID: 27536267 PMCID: PMC4971100 DOI: 10.3389/fpsyg.2016.01165]
Abstract
Sense of agency (SoA) refers to the feeling of controlling one's own actions, and the experience of controlling external events with one's actions. The present study examined the effect of strength of intentional effort on SoA. We manipulated the strength of intentional effort using three types of buttons that differed in the amount of force required to depress them. We used a self-attribution task as an explicit measure of SoA. The results indicate that strength of intentional effort enhanced self-attribution when action-effect congruency was unreliable. We concluded that intentional effort plays an important role in the integration of multiple cues underlying explicit judgments of agency when the causal relationship between action and effect was unreliable.

28.
Abstract
In a series of recent experiments, we found that if rats are presented with two temporal cues, each signifying that reward will be delivered after a different duration elapses (e.g., tone-10 seconds / light-20 seconds), they will behave as if they have computed a weighted average of these respective durations. In the current article, we argue that this effect, referred to as "temporal averaging", can be understood within the context of Bayesian Decision Theory. Specifically, we propose and provide preliminary data showing that, when averaging, rats weight different durations based on the relative variability of the information their respective cues provide.

29. Language Processing as Cue Integration: Grounding the Psychology of Language in Perception and Neurophysiology. Front Psychol 2016; 7:120. [PMID: 26909051 PMCID: PMC4754405 DOI: 10.3389/fpsyg.2016.00120]
Abstract
I argue that cue integration, a psychophysiological mechanism from vision and multisensory perception, offers a computational linking hypothesis between psycholinguistic theory and neurobiological models of language. I propose that this mechanism, which incorporates probabilistic estimates of a cue's reliability, might function in language processing from the perception of a phoneme to the comprehension of a phrase structure. I briefly consider the implications of the cue integration hypothesis for an integrated theory of language that includes acquisition, production, dialogue and bilingualism, while grounding the hypothesis in canonical neural computation.

30. Flexible integration of visual cues in adolescents with autism spectrum disorder. Autism Res 2016; 9:272-281. [PMID: 26097109 PMCID: PMC4864758 DOI: 10.1002/aur.1509]
Abstract
Although children with autism spectrum disorder (ASD) show atypical sensory processing, evidence for impaired integration of multisensory information has been mixed. In this study, we took a Bayesian model-based approach to assess within-modality integration of congruent and incongruent texture and disparity cues to judge slant in typical and autistic adolescents. Human adults optimally combine multiple sources of sensory information to reduce perceptual variance but in typical development this ability to integrate cues does not develop until late childhood. While adults cannot help but integrate cues, even when they are incongruent, young children's ability to keep cues separate gives them an advantage in discriminating incongruent stimuli. Given that mature cue integration emerges in later childhood, we hypothesized that typical adolescents would show adult-like integration, combining both congruent and incongruent cues. For the ASD group there were three possible predictions (1) "no fusion": no integration of congruent or incongruent cues, like 6-year-old typical children; (2) "mandatory fusion": integration of congruent and incongruent cues, like typical adults; (3) "selective fusion": cues are combined when congruent but not incongruent, consistent with predictions of Enhanced Perceptual Functioning (EPF) theory. As hypothesized, typical adolescents showed significant integration of both congruent and incongruent cues. The ASD group showed results consistent with "selective fusion," integrating congruent but not incongruent cues. This allowed adolescents with ASD to make perceptual judgments which typical adolescents could not. In line with EPF, results suggest that perception in ASD may be more flexible and less governed by mandatory top-down feedback.
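The discrimination advantage attributed to "selective fusion" above can be illustrated with a minimal simulation (an illustrative sketch only: the slant values, noise levels, and d'-style comparison are assumptions made for the example, not the study's stimuli or analysis). A mandatorily fused, reliability-weighted estimate cannot distinguish a ±5° cue-conflict stimulus from a congruent 0° stimulus, whereas an observer with access to the separate cue estimates can.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_tex, sigma_disp = 4.0, 4.0      # assumed single-cue slant noise (degrees)
n_trials = 10_000

def estimates(slant_tex, slant_disp, n):
    """Noisy texture- and disparity-based slant estimates for n trials."""
    tex = slant_tex + rng.normal(0.0, sigma_tex, n)
    disp = slant_disp + rng.normal(0.0, sigma_disp, n)
    return tex, disp

# Congruent stimulus (both cues signal 0 deg) vs. incongruent +/-5 deg conflict.
tex_c, disp_c = estimates(0.0, 0.0, n_trials)
tex_i, disp_i = estimates(5.0, -5.0, n_trials)

# Reliability weight on the texture cue for the fused (weighted-average) estimate.
w = (1.0 / sigma_tex**2) / (1.0 / sigma_tex**2 + 1.0 / sigma_disp**2)

def dprime(a, b):
    """Separation of two response distributions in d'-like units."""
    return abs(a.mean() - b.mean()) / np.sqrt(0.5 * (a.var() + b.var()))

fused_c = w * tex_c + (1.0 - w) * disp_c      # mandatory fusion: only the average survives
fused_i = w * tex_i + (1.0 - w) * disp_i
diff_c = tex_c - disp_c                       # cues kept separate: the conflict stays readable
diff_i = tex_i - disp_i

print("mandatory fusion (weighted average):  d' =", round(dprime(fused_c, fused_i), 2))
print("cues kept separate (cue discrepancy): d' =", round(dprime(diff_c, diff_i), 2))
```

With equal cue noise the fused d' is near zero while the cue-discrepancy d' is well above one, which is the sense in which not integrating incongruent cues can permit perceptual judgments that mandatory integrators cannot make.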

31. The Sensory Ecology of Ant Navigation: From Natural Environments to Neural Mechanisms. Annu Rev Entomol 2016; 61:63-76. [PMID: 26527301 DOI: 10.1146/annurev-ento-010715-023703]
Abstract
Animals moving through the world are surrounded by potential information. But the components of this rich array that they extract will depend on current behavioral requirements and the animal's own sensory apparatus. Here, we consider the types of information available to social hymenopteran insects, with a specific focus on ants. This topic has a long history and much is known about how ants and other insects use idiothetic information, sky compasses, visual cues, and odor trails. Recent research has highlighted how insects use other sensory information for navigation, such as the olfactory cues provided by the environment. These cues are harder to understand because they submit less easily to anthropomorphic analysis. Here, we take an ecological approach, considering first what information is available to insects, then how different cues might interact, and finally we discuss potential neural correlates of these behaviors.

32.
Abstract
Effort and reward jointly shape many human decisions. Errors in predicting the effort required for a task can lead to suboptimal behavior. Here, we show that effort estimations can be biased when retrospectively re-estimated following receipt of a rewarding outcome. These biases depend on the contingency between reward and task difficulty and are stronger for highly contingent rewards. Strikingly, the observed pattern accords with predictions from Bayesian cue integration, indicating that humans deploy an adaptive and rational strategy to deal with inconsistencies between the efforts they expend and the ensuing rewards.

33. Duration estimates within a modality are integrated sub-optimally. Front Psychol 2015; 6:1041. [PMID: 26321965 PMCID: PMC4532910 DOI: 10.3389/fpsyg.2015.01041]
Abstract
Perceived duration can be influenced by various properties of sensory stimuli. For example, visual stimuli of higher temporal frequency are perceived to last longer than those of lower temporal frequency. How does the brain form a representation of duration when each of two simultaneously presented stimuli influences perceived duration in a different way? To answer this question, we investigated the perceived duration of a pair of dynamic visual stimuli of different temporal frequencies in comparison to that of a single visual stimulus of either low or high temporal frequency. We found that the duration representation of simultaneously occurring visual stimuli is best described by weighting the duration estimates based on each individual stimulus. However, the weighting performance deviates from the prediction of statistically optimal integration. In addition, we provided a Bayesian account to explain a difference in the apparent sensitivity of the psychometric curves introduced by the order in which the two stimuli are displayed in a two-alternative forced-choice task.

34. Cue Integration: A Common Framework for Social Cognition and Physical Perception. Perspect Psychol Sci 2015; 8:296-312. [PMID: 26172972 DOI: 10.1177/1745691613475454]
Abstract
Scientists examining how people understand other minds have long thought that this task must be something like how people perceive the physical world. This comparison has proven to be deeply generative, as models of physical perception and social cognition have evolved in parallel. In this article, I propose extending this classic analogy in a new direction by proposing cue integration as a common feature of social cognition and physical perception. When encountering complex social cues-which happens often-perceivers use multiple processes for understanding others' minds. Like physical senses (e.g., vision or audition), social cognitive processes have often been studied as though they operate in relative isolation. In the domain of physical perception, this assumption has broken down, following evidence that perception is instead characterized by pervasive integration of multisensory information. Such integration is, in turn, elegantly described by Bayesian inferential models. By adopting a similar cue integration framework, researchers can similarly understand and formally model the ways that we perceive others' minds based on complex social information.
Collapse
|
35
|
The architecture of embodied cue integration: insight from the "motivation as cognition" perspective. Front Psychol 2015; 6:658. [PMID: 26052294 PMCID: PMC4440347 DOI: 10.3389/fpsyg.2015.00658] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2014] [Accepted: 05/05/2015] [Indexed: 11/13/2022] Open
|
36
|
Abstract
The sense of agency (SoA) (i.e., the registration that I am the initiator and controller of my actions and relevant events) is associated with several affective dimensions. This makes it surprising that the emotion factor has been largely neglected in the field of agency research. Current empirical investigations of the SoA mainly focus on sensorimotor signals (i.e., efference copy) and cognitive cues (i.e., intentions, beliefs) and on how they are integrated. Here we argue that this picture is not sufficient to explain agency experience, since agency and emotions constantly interact in our daily lives in several ways. First, reviewing recent empirical evidence, we show that self-action perception is in fact modulated by the affective valence of outcomes already at the sensorimotor level. We hypothesize that the "affective coding" between agency and action outcomes plays an essential role in agency processing, i.e., the prospective, immediate, or retrospective shaping of agency representations by affective components. This affective coding of agency may be differentially altered in various neuropsychiatric diseases (e.g., schizophrenia vs. depression), thus helping to explain the dysfunctions and content of agency experiences in these diseases.
Collapse
|
37
|
Reliability and relative weighting of visual and nonvisual information for perceiving direction of self-motion during walking. J Vis 2014; 14:24. [PMID: 24648194 DOI: 10.1167/14.3.24] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Direction of self-motion during walking is indicated by multiple cues, including optic flow, nonvisual sensory cues, and motor prediction. I measured the reliability of perceived heading from visual and nonvisual cues during walking, and tested whether the cues are weighted in an optimal manner. I used a heading alignment task to measure perceived heading during walking. Observers walked toward a target in a virtual environment with and without global optic flow. The target was simulated to be infinitely far away, so that it did not provide direct feedback about direction of self-motion. Variability in heading direction was low even without optic flow, with an average RMS error of 2.4°. Global optic flow reduced variability to 1.9°-2.1°, depending on the structure of the environment. The small amount of variance reduction was consistent with optimal use of visual information. The relative contribution of visual and nonvisual information was also measured using cue conflict conditions. Optic flow specified a conflicting heading direction (±5°), and bias in walking direction was used to infer relative weighting. Visual feedback influenced heading direction by 16%-34% depending on scene structure, with a larger effect when dense motion parallax was available. The weighting of visual feedback was close to the predictions of an optimal integration model given the observed variability measures.
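An illustrative reconstruction of the logic in this abstract, under the usual optimal-integration assumptions: the variances from the single-cue and combined conditions imply an optimal visual weight, which can be compared with the weight inferred from the cue-conflict bias. The RMS values come from the abstract; the 1.5° bias is a hypothetical number used only to show the calculation.

```python
# Reported RMS heading errors (degrees): nonvisual cues alone, and with optic flow.
rms_nonvisual = 2.4
rms_combined  = 2.0          # representative value within the reported 1.9-2.1 range

var_nonvisual = rms_nonvisual ** 2
var_combined  = rms_combined ** 2

# Under optimal integration, 1/var_combined = 1/var_visual + 1/var_nonvisual,
# so the visual-cue variance implied by the observed variance reduction is:
var_visual = 1.0 / (1.0 / var_combined - 1.0 / var_nonvisual)

# Optimal weight on the visual cue, given these variances.
w_visual_optimal = (1.0 / var_visual) / (1.0 / var_visual + 1.0 / var_nonvisual)

# Empirical weight inferred from the cue-conflict manipulation: a +/-5 deg
# conflict that shifts walking direction by `bias` degrees implies weight = bias / 5.
bias = 1.5                               # hypothetical observed shift (deg)
w_visual_empirical = bias / 5.0

print(f"optimal visual weight ~ {w_visual_optimal:.2f}, "
      f"empirical visual weight ~ {w_visual_empirical:.2f}")
# Both values fall in the 16-34% range reported in the abstract.
```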
Collapse
|
38
|
Integration of texture and disparity cues to surface slant in dorsal visual cortex. J Neurophysiol 2013; 110:190-203. [PMID: 23576705 DOI: 10.1152/jn.01055.2012] [Citation(s) in RCA: 43] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Reliable estimation of three-dimensional (3D) surface orientation is critical for recognizing and interacting with complex 3D objects in our environment. Human observers maximize the reliability of their estimates of surface slant by integrating multiple depth cues. Texture and binocular disparity are two such cues, but they are qualitatively very different. Existing evidence suggests that representations of surface tilt from each of these cues coincide at the single-neuron level in higher cortical areas. However, the cortical circuits responsible for 1) integrating such qualitatively distinct cues and 2) encoding the slant component of surface orientation have not been assessed. We tested for cortical responses related to slanted plane stimuli that were defined independently by texture, disparity, and combinations of these two cues. We analyzed the discriminability of functional MRI responses to two slant angles using multivariate pattern classification. Responses in visual area V3B/KO to stimuli containing congruent cues were more discriminable than those elicited by single cues, in line with predictions based on the fusion of slant estimates from component cues. This improvement was specific to congruent combinations of cues: incongruent cues yielded lower decoding accuracies, which suggests the robust use of individual cues in cases of large cue conflicts. These data suggest that area V3B/KO is intricately involved in the integration of qualitatively dissimilar depth cues.
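A common benchmark in this fMRI decoding literature for "fusion of slant estimates from component cues" is quadratic summation of single-cue sensitivities (d'). The abstract does not spell out the exact analysis, so the sketch below should be read as an illustrative assumption, with made-up accuracies, of how such a prediction can be computed.

```python
import numpy as np
from scipy.stats import norm

def accuracy_to_dprime(p_correct):
    """Convert two-way classification accuracy to a sensitivity index d'."""
    return 2.0 * norm.ppf(p_correct)

def dprime_to_accuracy(d):
    """Convert d' back to predicted two-way classification accuracy."""
    return norm.cdf(d / 2.0)

# Hypothetical single-cue slant-decoding accuracies in V3B/KO.
acc_texture, acc_disparity = 0.62, 0.65
d_tex  = accuracy_to_dprime(acc_texture)
d_disp = accuracy_to_dprime(acc_disparity)

# Fusion (quadratic summation) prediction for the congruent-cue condition.
d_fused = np.sqrt(d_tex ** 2 + d_disp ** 2)
print(f"predicted congruent-cue accuracy under fusion: "
      f"{dprime_to_accuracy(d_fused):.3f}")
# Congruent-cue accuracies at or above this prediction are taken as evidence for
# fusion; incongruent-cue accuracies falling below it indicate robust reliance on
# the individual cues when the conflict is large.
```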
Collapse
|
39
|
Abstract
Previous studies revealed that young infants can distinguish between displays of possible and impossible figures, which may require detection of inconsistent depth relations among local line junctions that disrupt global object configurations. Here, we used an eye-tracking paradigm to record eye movements in young infants during an object discrimination task with matched pairs of possible and impossible figures. Our goal was to identify differential patterns of oculomotor activity as infants viewed pictures of possible and impossible objects. We predicted that infants would actively attend to specific pictorial depth cues that denote shape (e.g., T-junctions), and that in the context of an impossible figure they would fixate to a greater extent in anomalous regions of the display relative to other parts. By the age of 4 months, infants fixated reliably longer overall on displays of impossible vs. possible cubes, specifically within the critical region where the incompatible lines and irreconcilable depth relations were located, implying an early capacity for selective attention to critical line junction information and integration of local depth cues necessary to perceive object coherence.
Collapse
|
40
|
Effective integration of serially presented stochastic cues. J Vis 2012; 12:12. [PMID: 22911906 PMCID: PMC3556466 DOI: 10.1167/12.8.12] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2011] [Accepted: 06/18/2012] [Indexed: 11/24/2022] Open
Abstract
This study examines how people deal with inherently stochastic cues when estimating a latent environmental property. Seven cues to a hidden location were presented one at a time in rapid succession. The seven cues were sampled from seven different Gaussian distributions that shared a common mean but differed in precision (the reciprocal of variance). The experimental task was to estimate the common mean of the Gaussians from which the cues were drawn. Observers ran in two conditions on separate days. In the "decreasing precision" condition the seven cues were ordered from most precise to least precise. In the "increasing precision" condition this ordering was reversed. For each condition, we estimated the weight that each cue in the sequence had on observers' estimates and compared human performance to that of an ideal observer who maximizes expected gain. We found that observers integrated information from more than one cue, and that they adaptively gave more weight to more precise cues and less weight to less precise cues. However, they did not assign weights that would maximize their expected gain, even over the course of several hundred trials with corrective feedback. The cost to observers of their suboptimal performance was on average 16% of their maximum possible winnings.
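The ideal observer against which performance was compared weights each cue in proportion to its precision, regardless of the order in which the cues arrive. A small simulation of that benchmark, with illustrative standard deviations rather than the study's actual stimulus parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Seven cue distributions sharing a common true mean but differing in precision
# (illustrative standard deviations, listed in increasing-precision order).
true_mean = 0.0
sds = np.array([8.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.5])
precisions = 1.0 / sds ** 2

# Ideal-observer weights are proportional to each cue's precision and are the
# same whether cues are presented in increasing- or decreasing-precision order.
w_ideal = precisions / precisions.sum()

# One simulated trial: seven serially presented cues and the ideal estimate
# of the common mean.
cues = rng.normal(true_mean, sds)
estimate = np.dot(w_ideal, cues)

print("ideal weights:", np.round(w_ideal, 3))
print(f"ideal estimate of the common mean on this trial: {estimate:.2f}")
# Observers in this study weighted precise cues more than imprecise ones, but
# their weights fell short of these ideal values, costing ~16% of maximum winnings.
```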
Collapse
|
41
|
Information conveyed by inferior colliculus neurons about stimuli with aligned and misaligned sound localization cues. J Neurophysiol 2011; 106:974-85. [PMID: 21653729 PMCID: PMC3154809 DOI: 10.1152/jn.00384.2011] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2011] [Accepted: 05/27/2011] [Indexed: 11/22/2022] Open
Abstract
Previous studies have demonstrated that single neurons in the central nucleus of the inferior colliculus (ICC) are sensitive to multiple sound localization cues. We investigated the hypothesis that ICC neurons are specialized to encode multiple sound localization cues that are aligned in space (as would naturally occur from a single broadband sound source). Sound localization cues including interaural time differences (ITDs), interaural level differences (ILDs), and spectral shapes (SSs) were measured in a marmoset monkey. Virtual space methods were used to generate stimuli with aligned and misaligned combinations of cues while recording in the ICC of the same monkey. Mutual information (MI) between spike rates and stimuli was compared for aligned versus misaligned cues. Neurons with best frequencies (BFs) less than ∼11 kHz mostly encoded information about a single sound localization cue, ITD or ILD depending on frequency, consistent with the dominance of ear acoustics by either ITD or ILD at those frequencies. Most neurons with BFs >11 kHz encoded information about multiple sound localization cues, usually ILD and SS, and were sensitive to their alignment. In some neurons MI between stimuli and spike responses was greater for aligned cues, while in others it was greater for misaligned cues. If SS cues were shifted to lower frequencies in the virtual space stimuli, a similar result was found for neurons with BFs <11 kHz, showing that the cue interaction reflects the spectra of the stimuli and not a specialization for representing SS cues. In general, the results show that ICC neurons are sensitive to multiple localization cues if they are simultaneously present in the frequency response area of the neuron. However, the representation is diffuse in that there is not a specialization in the ICC for encoding aligned sound localization cues.
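For readers unfamiliar with the analysis named in this abstract, mutual information between stimulus identity and (binned) spike-rate responses can be computed directly from a joint count table. The sketch below shows that calculation; the count tables are fabricated for illustration and are not data from the study.

```python
import numpy as np

def mutual_information(counts):
    """MI (bits) between stimulus and response, estimated from a joint count
    table (rows: stimuli, columns: binned spike-rate responses)."""
    p = counts / counts.sum()                      # joint probabilities
    p_stim = p.sum(axis=1, keepdims=True)          # stimulus marginal
    p_resp = p.sum(axis=0, keepdims=True)          # response marginal
    nonzero = p > 0
    return np.sum(p[nonzero] * np.log2(p[nonzero] / (p_stim @ p_resp)[nonzero]))

# Hypothetical joint counts for one neuron: rows are virtual-space stimuli
# (e.g., different ILD/SS combinations), columns are spike-count bins.
aligned_counts = np.array([[30,  8,  2],
                           [ 6, 26,  8],
                           [ 2,  6, 32]], dtype=float)
misaligned_counts = np.array([[18, 14,  8],
                              [12, 16, 12],
                              [10, 12, 18]], dtype=float)

print(f"MI, aligned cues:    {mutual_information(aligned_counts):.2f} bits")
print(f"MI, misaligned cues: {mutual_information(misaligned_counts):.2f} bits")
# Comparing MI across aligned vs. misaligned cue conditions, neuron by neuron,
# is the comparison described in the abstract.
```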
Collapse
|