1. Choe S, Kwon OS. An event-termination cue causes perceived time to dilate. Psychon Bull Rev 2024;31:659-669. PMID: 37653279. DOI: 10.3758/s13423-023-02368-1.
Abstract
The perceived duration of time does not veridically reflect the physical duration but is distorted by various factors, such as the stimulus magnitude or the observer's emotional state. Here, we showed that knowledge about an event's termination time is another significant factor. We often experience the passage of time differently when we know that an event will terminate soon. To quantify this, we asked 33 university students to report a rotating clock hand's duration with or without a termination cue that indicated the position at which the clock hand disappeared. The results showed that the presence of the termination cue dilated perceived durations, and the dilating effect was larger when the stimulus duration was longer or the speed of the rotating stimulus was slower. A control experiment with a start cue excluded the possibility that the cue's mere existence caused the results. Further computational analyses based on attentional theories of time perception revealed that the magnitude of dilation is best explained neither by the event's duration nor by the distance traveled by the clock hand, but by how long the clock hand spends near the termination cue. The results imply that an event-termination cue generates a field in which perceived time dilates.
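The key computational claim — that dilation tracks how long the clock hand spends near the termination cue — can be illustrated with a small simulation. This is a sketch under assumed parameters: the 30° proximity window, the time step, and the constant-speed clock hand are illustrative choices, not values from the paper.

```python
def time_near_cue(duration_s, speed_deg_per_s, cue_deg, window_deg=30.0, dt=0.001):
    """Simulate a clock hand rotating at constant speed from 0 degrees and
    return the total time (s) it spends within `window_deg` of the cue."""
    t, near = 0.0, 0.0
    while t < duration_s:
        angle = (speed_deg_per_s * t) % 360.0
        # Angular distance to the cue, wrapped into [0, 180].
        dist = abs((angle - cue_deg + 180.0) % 360.0 - 180.0)
        if dist <= window_deg:
            near += dt
        t += dt
    return near
```

With the cue fixed at the hand's disappearance point, a slower hand or a longer trial yields more time spent near the cue, mirroring the pattern of dilation the abstract reports.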
Affiliation(s)
- Seonggyu Choe
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, 50 UNIST-gil, Ulsan, 44919, Republic of Korea
- Oh-Sang Kwon
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, 50 UNIST-gil, Ulsan, 44919, Republic of Korea.
2. Wiesing M, Zimmermann E. Serial dependencies between locomotion and visual space. Sci Rep 2023;13:3302. PMID: 36849556. PMCID: PMC9970965. DOI: 10.1038/s41598-023-30265-z.
Abstract
How do we know the spatial distance of objects around us? Only by physical interaction within an environment can we measure true physical distances. Here, we investigated the possibility that travel distances, measured during walking, could be used to calibrate visual spatial perception. The sensorimotor contingencies that arise during walking were carefully altered using virtual reality and motion tracking. Participants were asked to walk to a briefly highlighted location. During walking, we systematically changed the optic flow, i.e., the ratio between the visual and physical motion speed. Although participants remained unaware of this manipulation, they walked a shorter or longer distance as a function of the optic flow speed. Following walking, participants were required to estimate the perceived distance of visual objects. We found that visual estimates were serially dependent on the experience of the manipulated flow in the previous trial. Additional experiments confirmed that both visual and physical motion are required to affect visual perception. We conclude that the brain constantly uses movements to measure space for both action and perception.
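The flow-speed manipulation described above can be sketched as a simple simulation. This is illustrative only: the gain values, walking speed, stopping rule, and noise model are assumptions, not the authors' paradigm.

```python
import random

def walk_to_target(target_m, gain=1.0, speed=1.0, dt=0.01, noise_sd=0.0):
    """Integrate visually sensed self-motion (physical speed * gain) until the
    accumulated visual distance matches the remembered target distance.
    Returns the physical distance actually travelled."""
    visual, physical = 0.0, 0.0
    while visual < target_m:
        v = speed + random.gauss(0.0, noise_sd)
        physical += v * dt          # distance covered in the world
        visual += v * gain * dt     # distance signalled by optic flow
    return physical
```

With gain > 1 the optic flow signals more motion than actually occurred, so a walker relying on flow stops short of the target; gain < 1 produces overshoot — the qualitative pattern reported in the abstract.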
Affiliation(s)
- Michael Wiesing
- Institute for Experimental Psychology, Heinrich Heine University Duesseldorf, Düsseldorf, Germany.
- Eckart Zimmermann
- Institute for Experimental Psychology, Heinrich Heine University Duesseldorf, Düsseldorf, Germany
3. Alefantis P, Lakshminarasimhan K, Avila E, Noel JP, Pitkow X, Angelaki DE. Sensory Evidence Accumulation Using Optic Flow in a Naturalistic Navigation Task. J Neurosci 2022;42:5451-5462. PMID: 35641186. PMCID: PMC9270913. DOI: 10.1523/jneurosci.2203-21.2022.
Abstract
Sensory evidence accumulation is considered a hallmark of decision-making in noisy environments. Integration of sensory inputs has traditionally been studied using passive stimuli, segregating perception from action. Lessons learned from this approach, however, may not generalize to ethological behaviors like navigation, where there is an active interplay between perception and action. We designed a sensory-based sequential decision task in virtual reality in which humans and monkeys navigated to a memorized location by integrating optic flow generated by their own joystick movements. A major challenge in such closed-loop tasks is that subjects' actions determine future sensory input, causing ambiguity about whether they rely on sensory input rather than on expectations based solely on a learned model of the dynamics. To test whether subjects integrated optic flow over time, we used three independent experimental manipulations: unpredictable optic flow perturbations, which pushed subjects off their trajectory; gain manipulation of the joystick controller, which changed the consequences of actions; and manipulation of the optic flow density, which changed the information borne by the sensory evidence. Our results suggest that both macaques (male) and humans (female/male) relied heavily on optic flow, demonstrating a critical role for sensory evidence accumulation during naturalistic action-perception closed-loop tasks.
SIGNIFICANCE STATEMENT The temporal integration of evidence is a fundamental component of mammalian intelligence. Yet, it has traditionally been studied using experimental paradigms that fail to capture the closed-loop interaction between actions and sensations inherent in real-world continuous behaviors. These conventional paradigms use binary decision tasks and passive stimuli with statistics that remain stationary over time. Instead, we developed a naturalistic visuomotor navigation paradigm that mimics the causal structure of real-world sensorimotor interactions and probed the extent to which participants integrate sensory evidence by adding task manipulations that reveal complementary aspects of the computation.
Affiliation(s)
- Panos Alefantis
- Center for Neural Science, New York University, New York, New York 10003
- Eric Avila
- Center for Neural Science, New York University, New York, New York 10003
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, New York 10003
- Xaq Pitkow
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas 77030
- Department of Electrical and Computer Engineering, Rice University, Houston, Texas 77005-1892
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, Texas 77030
- Dora E Angelaki
- Center for Neural Science, New York University, New York, New York 10003
- Tandon School of Engineering, New York University, New York, New York 11201
4. Negen J, Bird LA, Nardini M. An adaptive cue selection model of allocentric spatial reorientation. J Exp Psychol Hum Percept Perform 2021;47:1409-1429. PMID: 34766823. PMCID: PMC8582329. DOI: 10.1037/xhp0000950.
Abstract
After becoming disoriented, an organism must use the local environment to reorient and recover vectors to important locations. A new theory, adaptive combination, suggests that information from different spatial cues is combined with Bayesian efficiency during reorientation. To test this further, we modified the standard reorientation paradigm to be more amenable to Bayesian cue combination analyses while still requiring reorientation in an allocentric (i.e., world-based, not egocentric) frame. Twelve adults and 20 children aged 5 to 7 years were asked to recall locations in a virtual environment after a disorientation. Results were not consistent with adaptive combination. Instead, they are consistent with the use of the most useful (nearest) single landmark in isolation. We term this adaptive selection. Experiment 2 suggests that adults also use the adaptive selection method when they are not disoriented but are still required to use a local allocentric frame. This suggests that the process of recalling a location in the allocentric frame is typically guided by the single most useful landmark rather than by a Bayesian combination of landmarks. These results illustrate that there can be important limits to Bayesian theories of cognition, particularly for complex tasks such as allocentric recall. Whether studying the development of children's spatial cognition, creating artificial intelligence with human-like capacities, or designing civic spaces, we can benefit from a strong understanding of how humans process the space around them. Here we tested a prominent theory that brings together statistical theory and psychological theory (Bayesian models of perception and memory) but found that it could not satisfactorily explain our data.
Our findings suggest that when tracking the spatial relations between objects from different viewpoints, rather than efficiently combining all the available landmarks, people often fall back to the much simpler method of tracking the spatial relation to the nearest landmark.
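The contrast between adaptive combination and adaptive selection can be made concrete with a minimal sketch. The estimates, variances, and the variance-based selection rule below are illustrative assumptions, not the paper's fitted model.

```python
def bayesian_combination(estimates, variances):
    """Inverse-variance weighted average: the Bayes-optimal fusion of
    independent Gaussian cues."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

def adaptive_selection(estimates, variances):
    """Use only the most reliable (lowest-variance) cue, e.g. the nearest
    landmark, and ignore the rest."""
    best = min(range(len(variances)), key=lambda i: variances[i])
    return estimates[best]

cues = [2.0, 3.0, 10.0]   # location estimates from three landmarks
var = [1.0, 4.0, 16.0]    # nearer landmarks assumed to give lower variance
```

Combination blends all three cues toward a precision-weighted compromise, whereas selection simply returns the nearest landmark's estimate — the behavior the authors' data favored.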
Affiliation(s)
- James Negen
- School of Psychology, Liverpool John Moores University
5. Disrupting Short-Term Memory Maintenance in Premotor Cortex Affects Serial Dependence in Visuomotor Integration. J Neurosci 2021;41:9392-9402. PMID: 34607968. DOI: 10.1523/jneurosci.0380-21.2021.
Abstract
Human behavior is biased by past experience. For example, when intercepting a moving target, the speed of previous targets will bias responses in future trials. Neural mechanisms underlying this so-called serial dependence are still under debate. Here, we tested the hypothesis that the previous trial leaves a neural trace in brain regions associated with encoding task-relevant information in visual and/or motor regions. We reasoned that injecting noise by means of transcranial magnetic stimulation (TMS) over premotor and visual areas would degrade such memory traces and hence reduce serial dependence. To test this hypothesis, we applied bursts of TMS pulses to the right visual motion processing region hV5/MT+ and to the left dorsal premotor cortex (PMd) during intertrial intervals of a coincident timing task performed by twenty healthy human participants (15 female). Without TMS, participants exhibited a bias toward the speed of the previous trial when intercepting moving targets. TMS over PMd decreased serial dependence in comparison to the control Vertex stimulation, whereas TMS applied over hV5/MT+ did not. In addition, TMS seems to have specifically affected the memory trace that leads to serial dependence, as we found no evidence that participants' behavior worsened after applying TMS. These results provide causal evidence that an implicit short-term memory mechanism in premotor cortex keeps information from one trial to the next, and that this information is blended with current trial information so that it biases behavior in a visuomotor integration task with moving objects.
SIGNIFICANCE STATEMENT Human perception and action are biased by the recent past. The origin of such serial bias is still not fully understood, but a few components seem to be fundamental for its emergence: the brain needs to keep previous trial information in short-term memory and blend it with incoming information.
Here, we present evidence that a premotor area has a potential role in storing previous trial information in short-term memory in a visuomotor task and that this information is responsible for biasing ongoing behavior. These results corroborate the perspective that areas associated with processing information of a stimulus or task also participate in maintaining that information in short-term memory even when this information is no longer relevant for current behavior.
6. Meirhaeghe N, Sohn H, Jazayeri M. A precise and adaptive neural mechanism for predictive temporal processing in the frontal cortex. Neuron 2021;109:2995-3011.e5. PMID: 34534456. PMCID: PMC9737059. DOI: 10.1016/j.neuron.2021.08.025.
Abstract
The theory of predictive processing posits that the brain computes expectations to process information predictively. Empirical evidence in support of this theory, however, is scarce and largely limited to sensory areas. Here, we report a precise and adaptive mechanism in the frontal cortex of non-human primates consistent with predictive processing of temporal events. We found that the speed of neural dynamics is precisely adjusted according to the average time of an expected stimulus. This speed adjustment, in turn, enables neurons to encode stimuli in terms of deviations from expectation. This lawful relationship was evident across multiple experiments and held true during learning: when temporal statistics underwent covert changes, neural responses underwent predictable changes that reflected the new mean. Together, these results highlight a precise mathematical relationship between temporal statistics in the environment and neural activity in the frontal cortex that may serve as a mechanism for predictive temporal processing.
Affiliation(s)
- Nicolas Meirhaeghe
- Harvard-MIT Division of Health Sciences & Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Hansem Sohn
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Mehrdad Jazayeri
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA; Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
7. Park WJ, Schauder KB, Kwon OS, Bennetto L, Tadin D. Atypical visual motion prediction abilities in autism spectrum disorder. Clin Psychol Sci 2021;9:944-960. PMID: 34721951. DOI: 10.1177/2167702621991803.
Abstract
A recent theory posits that prediction deficits may underlie the core symptoms in autism spectrum disorder (ASD). However, empirical evidence for this hypothesis is minimal. Using a visual extrapolation task, we tested motion prediction abilities in children and adolescents with and without ASD. We examined the factors known to be important for motion prediction: the central-tendency response bias and smooth pursuit eye movements. In ASD, response biases followed an atypical trajectory that was dominated by early responses. This differed from controls, who exhibited response biases reflecting a gradual accumulation of knowledge about stimulus statistics. Moreover, while better smooth pursuit eye movements for the moving object were linked to more accurate motion prediction in controls, in ASD, better smooth pursuit was counterintuitively linked to a more pronounced early response bias. Together, these results demonstrate atypical visual prediction abilities in ASD and offer insights into possible mechanisms underlying the observed differences.
Affiliation(s)
- Woon Ju Park
- Department of Psychology, University of Washington, Seattle, WA, 98195
- Kimberly B Schauder
- Center for Autism Spectrum Disorders, Children's National Hospital, Rockville, MD, 20850
- Oh-Sang Kwon
- Department of Human Factors Engineering, Ulsan National Institute of Science and Technology, Ulsan, South Korea
- Loisa Bennetto
- Department of Psychology, University of Rochester, Rochester, NY, 14627; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, 14627; Department of Neuroscience, University of Rochester Medical Center, Rochester, NY, 14642
- Duje Tadin
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, 14627; Department of Neuroscience, University of Rochester Medical Center, Rochester, NY, 14642; Center for Visual Science, University of Rochester, Rochester, NY, 14627; Department of Ophthalmology, University of Rochester Medical Center, Rochester, NY, 14642
8. Temporal dynamics of implicit memory underlying serial dependence. Mem Cognit 2021;50:449-458. PMID: 34374026. DOI: 10.3758/s13421-021-01221-x.
Abstract
Serial dependence is the effect in which the immediately preceding trial influences participants' responses to the current stimulus. But for how long does this bias last in the absence of interference from other stimuli? Here, we had 20 healthy young adult participants (12 women) perform a coincident timing task using different inter-trial intervals to characterize the serial dependence effect as the time between trials increases. Our results show that serial dependence decreases abruptly as the inter-trial interval increases from 0.1 s to 1 s, but it remains pronounced thereafter for intervals of up to 8 s. In addition, participants' response variability slightly decreases over longer intervals. We discuss these results in light of recent models suggesting that serial dependence might rely on a short-term memory trace maintained through changes in synaptic weights, which might explain its long duration and apparent stability over time.
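The reported time course — a sharp early drop followed by a stable plateau — is consistent with a two-component bias, sketched here. The component magnitudes and the time constant are illustrative assumptions, not fitted values from the study.

```python
import math

def serial_bias(iti_s, fast=0.5, tau=0.3, plateau=0.2):
    """Bias toward the previous trial as a function of inter-trial interval:
    a rapidly decaying component (an activity-based trace) plus a stable
    component (e.g. a synaptic-weight trace)."""
    return fast * math.exp(-iti_s / tau) + plateau

def biased_response(current, previous, iti_s):
    """Response pulled toward the previous stimulus by the serial bias."""
    w = serial_bias(iti_s)
    return (1 - w) * current + w * previous
```

With these parameters the bias falls steeply between 0.1 s and 1 s and is nearly flat from 1 s to 8 s, qualitatively matching the abstract's description.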
9. Arthur T, Vine S, Brosnan M, Buckingham G. Predictive sensorimotor control in autism. Brain 2021;143:3151-3163. PMID: 32974646. DOI: 10.1093/brain/awaa243.
Abstract
Autism spectrum disorder has been characterized by atypicalities in how predictions and sensory information are processed in the brain. To shed light on this relationship in the context of sensorimotor control, we assessed prediction-related measures of cognition, perception, gaze and motor functioning in a large general population sample (n = 92; Experiment 1) and in clinically diagnosed autistic participants (n = 29; Experiment 2). In both experiments, perception and action were strongly driven by prior expectations of object weight, with large items typically predicted to weigh more than equally weighted smaller ones. Interestingly, these predictive action models were used comparably at a sensorimotor level in both autistic and neurotypical individuals with varying levels of autistic-like traits. Specifically, initial fingertip force profiles and resulting action kinematics were both scaled according to participants' pre-lift heaviness estimates, and generic visual sampling behaviours were notably consistent across groups. These results suggest that prior information is not chronically underweighted in autism, as proposed by simple Bayesian accounts of the disorder. Instead, our results cautiously implicate context-sensitive processing mechanisms, such as precision modulation and hierarchical volatility inference. Together, these findings present novel implications for both future scientific investigations and the autism community.
Affiliation(s)
- Tom Arthur
- College of Life and Environmental Sciences, University of Exeter, Exeter, EX1 2LU, UK; Centre for Applied Autism Research, Department of Psychology, University of Bath, Bath, BA2 7AY, UK
- Sam Vine
- College of Life and Environmental Sciences, University of Exeter, Exeter, EX1 2LU, UK
- Mark Brosnan
- Centre for Applied Autism Research, Department of Psychology, University of Bath, Bath, BA2 7AY, UK
- Gavin Buckingham
- College of Life and Environmental Sciences, University of Exeter, Exeter, EX1 2LU, UK
10. De Azevedo Neto RM. Commentary: Probabilistic Representation in Human Visual Cortex Reflects Uncertainty in Serial Decisions. Front Hum Neurosci 2020;14:580581. PMID: 33192413. PMCID: PMC7609893. DOI: 10.3389/fnhum.2020.580581.
11. The Difficulty of Effectively Using Allocentric Prior Information in a Spatial Recall Task. Sci Rep 2020;10:7000. PMID: 32332793. PMCID: PMC7181880. DOI: 10.1038/s41598-020-62775-5.
Abstract
Prior information represents the long-term statistical structure of an environment. For example, colds develop more often than throat cancer, making the former a more likely diagnosis for a sore throat. There is ample evidence for effective use of prior information during a variety of perceptual tasks, including the ability to recall locations using an egocentric (self-based) frame. However, it is not yet known if people can use prior information effectively when using an allocentric (world-based) frame. Forty-eight adults were shown sixty sets of three target locations in a sparse virtual environment with three beacons. The targets were drawn from one of four prior distributions. They were then asked to point to the targets after a delay and a change in perspective. While searches were biased towards the beacons, we did not find any evidence that participants successfully exploited the prior distributions of targets. These results suggest that allocentric reasoning does not conform to normative Bayesian models: we saw no evidence for use of priors in our cognitively-complex (allocentric) task, unlike in previous, simpler (egocentric) recall tasks. It is possible that this reflects the high biological cost of processing precise allocentric information.
12. Itoh TD, Takeya R, Tanaka M. Spatial and temporal adaptation of predictive saccades based on motion inference. Sci Rep 2020;10:5280. PMID: 32210297. PMCID: PMC7093452. DOI: 10.1038/s41598-020-62211-8.
Abstract
Moving objects are often occluded behind larger, stationary objects, but we can easily predict when and where they will reappear. Here, we show that the prediction of object reappearance is subject to adaptive learning. When monkeys generated predictive saccades to the location of target reappearance, systematic changes in the location or timing of target reappearance independently altered the endpoint or latency of the saccades. Furthermore, spatial adaptation of predictive saccades did not alter visually triggered reactive saccades, whereas adaptation of reactive saccades altered the metrics of predictive saccades. Our results suggest that the extrapolation of motion trajectory may be subject to spatial and temporal recalibration mechanisms located upstream from the site of reactive saccade adaptation. Repetitive exposure to visual error for saccades induces qualitatively different adaptation, which might be attributable to different regions in the cerebellum that regulate learning of trajectory prediction and of saccades.
Affiliation(s)
- Takeshi D Itoh
- Department of Physiology, Hokkaido University School of Medicine, Sapporo, 060-8638, Japan; Graduate School of Science and Technology, Nara Institute of Science and Technology, Ikoma, 630-0192, Japan
- Ryuji Takeya
- Department of Physiology, Hokkaido University School of Medicine, Sapporo, 060-8638, Japan
- Masaki Tanaka
- Department of Physiology, Hokkaido University School of Medicine, Sapporo, 060-8638, Japan.
13.
Abstract
We agree with the authors regarding the utility of viewing cognition as resulting from an optimal use of limited resources. Here, we advocate for extending this approach to the study of cognitive development, which we feel provides particularly powerful insight into the debate between bounded optimality and true sub-optimality, precisely because young children have limited computational and cognitive resources.
14. Bejjanki VR, Randrup ER, Aslin RN. Young children combine sensory cues with learned information in a statistically efficient manner: But task complexity matters. Dev Sci 2019;23:e12912. PMID: 31608526. DOI: 10.1111/desc.12912.
Abstract
Human adults are adept at mitigating the influence of sensory uncertainty on task performance by integrating sensory cues with learned prior information in a Bayes-optimal fashion. Previous research has shown that young children and infants are sensitive to environmental regularities, and that the ability to learn and use such regularities is involved in the development of several cognitive abilities. However, it has also been reported that children younger than 8 years do not combine simultaneously available sensory cues in a Bayes-optimal fashion. Thus, it remains unclear whether, and by what age, children can combine sensory cues with learned regularities in an adult-like manner. Here, we examine the performance of 6- to 7-year-old children when tasked with localizing a 'hidden' target by combining uncertain sensory information with prior information learned over repeated exposure to the task. We demonstrate that 6- to 7-year-olds learn task-relevant statistics at a rate on par with adults and, like adults, are capable of integrating learned regularities with sensory information in a statistically efficient manner. We also show that variables such as task complexity can influence young children's behavior to a greater extent than that of adults, leading their behavior to look sub-optimal. Our findings have important implications for how we should interpret failures in young children's ability to carry out sophisticated computations. These 'failures' need not be attributed to deficits in the fundamental computational capacity available to children early in development, but rather to ancillary immaturities in general cognitive abilities that mask the operation of these computations in specific situations.
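The integration of a learned prior with uncertain sensory input described above has a standard Gaussian form, sketched here as a textbook illustration rather than the study's fitted model; the numbers are arbitrary.

```python
def posterior_mean(sensed, sensory_var, prior_mean, prior_var):
    """Bayes-optimal localization: a precision-weighted average of the noisy
    sensory estimate and the learned prior over target locations."""
    w_prior = sensory_var / (sensory_var + prior_var)
    return (1 - w_prior) * sensed + w_prior * prior_mean
```

The noisier the sensory cue, the more the estimate is drawn toward the learned prior — the signature of statistically efficient integration that the study tested in children.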
Affiliation(s)
- Vikranth R Bejjanki
- Department of Psychology, Hamilton College, Clinton, NY, USA; Program in Neuroscience, Hamilton College, Clinton, NY, USA
- Emily R Randrup
- Department of Psychology, Hamilton College, Clinton, NY, USA
15. Scene Representations Conveyed by Cortical Feedback to Early Visual Cortex Can Be Described by Line Drawings. J Neurosci 2019;39:9410-9423. PMID: 31611306. PMCID: PMC6867807. DOI: 10.1523/jneurosci.0852-19.2019.
Abstract
Human behavior is dependent on the ability of neuronal circuits to predict the outside world. Neuronal circuits in early visual areas make these predictions based on internal models that are delivered via non-feedforward connections. Despite our extensive knowledge of the feedforward sensory features that drive cortical neurons, we have a limited grasp on the structure of the brain's internal models. Progress in neuroscience therefore depends on our ability to replicate the models that the brain creates internally. Here we record human fMRI data while presenting partially occluded visual scenes. Visual occlusion allows us to experimentally control sensory input to subregions of visual cortex while internal models continue to influence activity in these regions. Because the observed activity is dependent on internal models, but not on sensory input, we have the opportunity to map visual features conveyed by the brain's internal models. Our results show that activity related to internal models in early visual cortex is more related to scene-specific features than to categorical or depth features. We further demonstrate that behavioral line drawings provide a good description of internal model structure representing scene-specific features. These findings extend our understanding of internal models, showing that line drawings provide a window into our brains' internal models of vision.
SIGNIFICANCE STATEMENT We find that fMRI activity patterns corresponding to occluded visual information in early visual cortex fill in scene-specific features. Line drawings of the missing scene information correlate with our recorded activity patterns, and thus with internal models. Despite our extensive knowledge of the sensory features that drive cortical neurons, we have a limited grasp on the structure of our brains' internal models.
These results therefore constitute an advance to the field of neuroscience by extending our knowledge about the models that our brains construct to efficiently represent and predict the world. Moreover, they link a behavioral measure to these internal models, which play an active role in many components of human behavior, including visual predictions, action planning, and decision making.
16. Domínguez-Zamora FJ, Gunn SM, Marigold DS. Adaptive Gaze Strategies to Reduce Environmental Uncertainty During a Sequential Visuomotor Behaviour. Sci Rep 2018;8:14112. PMID: 30237587. PMCID: PMC6148321. DOI: 10.1038/s41598-018-32504-0.
Abstract
People must decide where, when, and for how long to allocate gaze to perform different motor behaviours. However, the factors guiding gaze during these ongoing, natural behaviours are poorly understood. Gaze shifts help acquire information, suggesting that people should direct gaze to locations where environmental details most relevant to the task are uncertain. To explore this, human subjects stepped on a series of targets as they walked. We used different levels of target uncertainty, and through instruction, altered the importance of (or subjective value assigned to) foot-placement accuracy. Gaze time on targets increased with greater target uncertainty when precise foot placement was more important, and these longer gaze times were associated with reduced foot-placement error. Gaze times, as well as the gaze shifts to and from targets relative to stepping, differed depending on the target's position in the sequence and uncertainty level. Overall, we show that gaze is allocated to reduce uncertainty about target locations, and this depends on the value of this information gain for successful task performance. Furthermore, we show that the spatial-temporal pattern of gaze to resolve uncertainty changes with the evolution of the motor behaviour, indicating a flexible strategy to plan and control movement.
Collapse
Affiliation(s)
- F Javier Domínguez-Zamora
- Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, British Columbia, V5A 1S6, Canada
| | - Shaila M Gunn
- Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, British Columbia, V5A 1S6, Canada
| | - Daniel S Marigold
- Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, British Columbia, V5A 1S6, Canada.
- Behavioural and Cognitive Neuroscience Institute, Simon Fraser University, Burnaby, British Columbia, V5A 1S6, Canada.
| |
Collapse
|
17
|
Egger SW, Jazayeri M. A nonlinear updating algorithm captures suboptimal inference in the presence of signal-dependent noise. Sci Rep 2018; 8:12597. [PMID: 30135441 PMCID: PMC6105733 DOI: 10.1038/s41598-018-30722-0] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2018] [Accepted: 08/02/2018] [Indexed: 11/14/2022] Open
Abstract
Bayesian models have advanced the idea that humans combine prior beliefs and sensory observations to optimize behavior. How the brain implements Bayes-optimal inference, however, remains poorly understood. Simple behavioral tasks suggest that the brain can flexibly represent probability distributions. An alternative view is that the brain relies on simple algorithms that can implement Bayes-optimal behavior only when the computational demands are low. To distinguish between these alternatives, we devised a task in which Bayes-optimal performance could not be matched by simple algorithms. We asked subjects to estimate and reproduce a time interval by combining prior information with one or two sequential measurements. In the domain of time, measurement noise increases with duration. This property takes the integration of multiple measurements beyond the reach of simple algorithms. We found that subjects were able to update their estimates using the second measurement but their performance was suboptimal, suggesting that they were unable to update full probability distributions. Instead, subjects’ behavior was consistent with an algorithm that predicts upcoming sensory signals, and applies a nonlinear function to errors in prediction to update estimates. These results indicate that the inference strategies employed by humans may deviate from Bayes-optimal integration when the computational demands are high.
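The central computational difficulty the abstract describes can be illustrated with a small sketch: with signal-dependent (scalar) noise, sigma grows with duration, so combining two measurements is no longer a fixed weighted average and requires something like a full posterior over durations. The following is an illustrative grid-based ideal observer, not the authors' model; the Weber fraction, grid, and durations are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
w = 0.15                               # assumed Weber fraction: sigma = w * t
t_grid = np.linspace(0.4, 1.6, 1201)   # hypothesis grid over durations (s), uniform prior

def posterior_mean(measurements):
    # log-likelihood of each hypothesized duration t, given noisy
    # measurements with signal-dependent noise sigma = w * t
    logp = np.zeros_like(t_grid)
    for m in measurements:
        sigma = w * t_grid
        logp += -0.5 * ((m - t_grid) / sigma) ** 2 - np.log(sigma)
    p = np.exp(logp - logp.max())
    p /= p.sum()
    return float(np.sum(p * t_grid))   # Bayes least-squares estimate

t_true = 1.0
m1, m2 = t_true + rng.normal(0, w * t_true, 2)
print(posterior_mean([m1]), posterior_mean([m1, m2]))
```

Because sigma varies with the hypothesized duration, the likelihood is asymmetric and the optimal update after the second measurement is nonlinear in the measurements, which is the regime where the paper finds human behavior departs from Bayes-optimality.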
Collapse
Affiliation(s)
- Seth W Egger
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Mehrdad Jazayeri
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
| |
Collapse
|
18
|
Abstract
Existing theories suggest that reacting to dynamic stimuli is made possible by relying on internal estimates of kinematic variables. For example, to catch a bouncing ball the brain relies on the position and speed of the ball. However, when kinematic information is unreliable one may additionally rely on temporal cues. In the bouncing ball example, when visibility is low one may benefit from the temporal information provided by the sound of the bounces. Our work provides evidence that humans rely on such temporal cues and automatically integrate them with kinematic information to optimize their performance. This finding reveals a hitherto unappreciated role of the brain’s timing mechanisms in sensorimotor function. To coordinate movements with events in a dynamic environment, the brain has to anticipate when those events occur. A classic example is the estimation of time to contact (TTC), that is, the time at which an object reaches a target. It is thought that TTC is estimated from kinematic variables. For example, a tennis player might use an estimate of distance (d) and speed (v) to estimate TTC (TTC = d/v). However, the tennis player may instead estimate TTC as twice the time it takes for the ball to move from the serve line to the net line. This latter strategy does not rely on kinematics and instead computes TTC solely from temporal cues. Which of these two strategies do humans use to estimate TTC? Considering that both speed and time estimates are inherently uncertain, and that the human brain readily combines different sources of information, we hypothesized that humans estimate TTC by integrating speed information with temporal cues. We evaluated this hypothesis systematically using psychophysics and Bayesian modeling. Results indicated that humans rely on both speed information and temporal cues and integrate them to optimize their TTC estimates when both cues are present.
These findings suggest that the brain’s timing mechanisms are actively engaged when interacting with dynamic stimuli.
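The integration described here is standard reliability-weighted cue combination, which can be sketched in a few lines. All numbers below are illustrative (the distances, speeds, and variances are assumptions, not values from the paper).

```python
def fuse(mu_a, var_a, mu_b, var_b):
    # precision-weighted (reliability-weighted) combination of two cues:
    # the more reliable cue (smaller variance) gets the larger weight
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    mu = w_a * mu_a + (1 - w_a) * mu_b
    var = 1 / (1 / var_a + 1 / var_b)   # fused variance is always smaller
    return mu, var

# kinematic estimate: TTC = d / v from distance and speed
d, v = 10.0, 5.0            # metres, metres/second (illustrative)
ttc_kin = d / v             # 2.0 s
var_kin = 0.25              # assumed variance of the kinematic estimate
# temporal-cue estimate, e.g. twice the serve-line-to-net-line interval
ttc_time, var_time = 2.2, 0.16
ttc, var = fuse(ttc_kin, var_kin, ttc_time, var_time)
print(ttc, var)
```

The fused estimate falls between the two cues, closer to the more reliable one, and its variance is below that of either cue alone, which is the signature of optimal integration the study tests for.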
Collapse
|
19
|
Snow M, Coen-Cagli R, Schwartz O. Adaptation in the visual cortex: a case for probing neuronal populations with natural stimuli. F1000Res 2017; 6:1246. [PMID: 29034079 PMCID: PMC5532795 DOI: 10.12688/f1000research.11154.1] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 07/24/2017] [Indexed: 12/19/2022] Open
Abstract
The perception of, and neural responses to, sensory stimuli in the present are influenced by what has been observed in the past—a phenomenon known as adaptation. We focus on adaptation in visual cortical neurons as a paradigmatic example. We review recent work that represents two shifts in the way we study adaptation, namely (i) going beyond single neurons to study adaptation in populations of neurons and (ii) going beyond simple stimuli to study adaptation to natural stimuli. We suggest that efforts in these two directions, through a closer integration of experimental and modeling approaches, will enable a more complete understanding of cortical processing in natural environments.
Collapse
Affiliation(s)
- Michoel Snow
- Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, 10461, USA
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, NY, 10461, USA
| | - Ruben Coen-Cagli
- Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, 10461, USA
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, NY, 10461, USA
| | - Odelia Schwartz
- Department of Computer Science, University of Miami, Coral Gables, FL, 33146, USA
| |
Collapse
|
20
|
Abstract
Blindsight patients with damage to the visual cortex can discriminate objects but report no conscious visual experience. This provides an intriguing opportunity to study subjective awareness in isolation from objective performance capacity. However, blindsight is rare, so one promising way to induce the effect in neurologically intact observers is to apply transcranial magnetic stimulation (TMS) to the visual cortex. Here, we used a recently developed criterion-free method to conclusively rule out an important alternative interpretation of TMS-induced performance without awareness: that TMS-induced blindsight may be due simply to conservative reporting biases for conscious perception. Critically, using this criterion-free paradigm we have previously shown that introspective judgments were optimal even under visual masking. Under TMS, however, observers were suboptimal, as if they were metacognitively blind to the visual disturbances caused by TMS. We argue that metacognitive judgments depend on observers' internal statistical models of their own perceptual systems, and that introspective suboptimality arises when external perturbations abruptly render those models invalid, a phenomenon that may also occur in actual blindsight.
Collapse
|
21
|
Abstract
To enable effective interaction with the environment, the brain combines noisy sensory information with expectations based on prior experience. There is ample evidence showing that humans can learn statistical regularities in sensory input and exploit this knowledge to improve perceptual decisions and actions. However, fundamental questions remain regarding how priors are learned and how they generalize to different sensory and behavioral contexts. In principle, maintaining a large set of highly specific priors may be inefficient and restrict the speed at which expectations can be formed and updated in response to changes in the environment. However, priors formed by generalizing across varying contexts may not be accurate. Here, we exploit rapidly induced contextual biases in duration reproduction to reveal how these competing demands are resolved during the early stages of prior acquisition. We show that observers initially form a single prior by generalizing across duration distributions coupled with distinct sensory signals. In contrast, they form multiple priors if distributions are coupled with distinct motor outputs. Together, our findings suggest that rapid prior acquisition is facilitated by generalization across experiences of different sensory inputs but organized according to how that sensory information is acted on.
Collapse
|
22
|
Bejjanki VR, Knill DC, Aslin RN. Learning and inference using complex generative models in a spatial localization task. J Vis 2016; 16:9. [PMID: 26967015 PMCID: PMC4790422 DOI: 10.1167/16.5.9] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
A large body of research has established that, under relatively simple task conditions, human observers integrate uncertain sensory information with learned prior knowledge in an approximately Bayes-optimal manner. However, in many natural tasks, observers must perform this sensory-plus-prior integration when the underlying generative model of the environment consists of multiple causes. Here we ask if the Bayes-optimal integration seen with simple tasks also applies to such natural tasks when the generative model is more complex, or whether observers rely instead on a less efficient set of heuristics that approximate ideal performance. Participants localized a "hidden" target whose position on a touch screen was sampled from a location-contingent bimodal generative model with different variances around each mode. Over repeated exposure to this task, participants learned the a priori locations of the target (i.e., the bimodal generative model), and integrated this learned knowledge with uncertain sensory information on a trial-by-trial basis in a manner consistent with the predictions of Bayes-optimal behavior. In particular, participants rapidly learned the locations of the two modes of the generative model, but the relative variances of the modes were learned much more slowly. Taken together, our results suggest that human performance in a more complex localization task, which requires the integration of sensory information with learned knowledge of a bimodal generative model, is consistent with the predictions of Bayes-optimal behavior, but involves a much longer time-course than in simpler tasks.
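The task structure described here, a bimodal location-contingent prior combined with uncertain sensory evidence, can be sketched as a grid-based Bayesian observer. This is an illustrative toy model, not the authors' analysis; the mode locations, variances, and sensory noise levels are assumed values.

```python
import numpy as np

x = np.linspace(-10, 10, 2001)   # hypothesis grid over target locations

def gauss(val, mu, sigma):
    return np.exp(-0.5 * ((val - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# bimodal generative model: a narrow mode and a wide mode (illustrative)
prior = 0.5 * gauss(x, -3.0, 0.5) + 0.5 * gauss(x, 3.0, 1.5)

def bayes_estimate(m, sigma_sensory):
    # posterior over location = likelihood of the noisy cue m times the prior
    post = gauss(m, x, sigma_sensory) * prior
    post /= post.sum()
    return float(np.sum(post * x))   # posterior-mean (least-squares) estimate

# with an uncertain cue near the narrow mode, the estimate is pulled toward -3;
# with a precise cue, the sensory evidence dominates
print(bayes_estimate(-2.0, 2.0), bayes_estimate(-2.0, 0.1))
```

The qualitative prediction tested in the paper falls out directly: the less reliable the sensory cue, the more the estimate is drawn to the learned modes, and more strongly to the narrower (lower-variance) one.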
Collapse
|
23
|
Wiener M, Michaelis K, Thompson JC. Functional correlates of likelihood and prior representations in a virtual distance task. Hum Brain Mapp 2016; 37:3172-87. [PMID: 27167875 DOI: 10.1002/hbm.23232] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2016] [Accepted: 04/18/2016] [Indexed: 12/11/2022] Open
Abstract
Spatial navigation is an imperative cognitive function, in which individuals must interact with their environment in order to accurately reach a destination. Previous research has demonstrated that, when traveling a predetermined distance, humans must balance between noise in the measurement process and the prior history of traveled distances. This tradeoff has recently been formally described using Bayesian estimation; however, the neural correlates of Bayesian estimation during distance reproduction have yet to be investigated. Here, human subjects performed a virtual reality distance reproduction task during functional Magnetic Resonance Imaging (fMRI), in which they were required to reproduce various traveled distances in the absence of overt navigational cues. As previously demonstrated, subjects exhibited a central tendency effect, wherein reproduced distances gravitated to the mean of the stimulus set. fMRI activity during this task revealed distance-sensitive activity in a network of regions, including prefrontal and hippocampal regions. Using a computational index of central tendency, we found that activity in the retrosplenial cortex, a region highly implicated in spatial navigation, negatively covaried between subjects with the degree of central tendency observed; conversely, we found that activity in the anterior hippocampus/amygdala complex was positively correlated with the central tendency effect of gravitating to the average reproduced distance. These findings suggest dissociable roles for the retrosplenial cortex and hippocampal complex during distance reproduction, with both regions coordinating with the prefrontal cortex to integrate the prior history of the environment with present experience.
Collapse
Affiliation(s)
- Martin Wiener
- Department of Psychology, George Mason University, Fairfax, Virginia
| | - Kelly Michaelis
- Department of Neuroscience, Georgetown University Medical Center, Washington, District of Columbia
| | - James C Thompson
- Department of Psychology, George Mason University, Fairfax, Virginia
| |
Collapse
|
24
|
Bhardwaj M, van den Berg R, Ma WJ, Josić K. Do People Take Stimulus Correlations into Account in Visual Search? PLoS One 2016; 11:e0149402. [PMID: 26963498 PMCID: PMC4786311 DOI: 10.1371/journal.pone.0149402] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2015] [Accepted: 02/01/2016] [Indexed: 11/19/2022] Open
Abstract
In laboratory visual search experiments, distractors are often statistically independent of each other. However, stimuli in more naturalistic settings are often correlated and rarely independent. Here, we examine whether human observers take stimulus correlations into account in orientation target detection. We find that they do, although probably not optimally. In particular, it seems that low distractor correlations are overestimated. Our results might contribute to bridging the gap between artificial and natural visual search tasks.
Collapse
Affiliation(s)
- Manisha Bhardwaj
- Department of Mathematics, University of Houston, Houston, Texas, United States of America
| | - Ronald van den Berg
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas, United States of America
- Department of Psychology, Uppsala University, Uppsala, Sweden
| | - Wei Ji Ma
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas, United States of America
- Center for Neural Science and Department of Psychology, New York University, New York, New York, United States of America
| | - Krešimir Josić
- Department of Mathematics, University of Houston, Houston, Texas, United States of America
- Department of Biology and Biochemistry, University of Houston, Houston, Texas, United States of America
| |
Collapse
|
25
|
Kumar N, Mutha PK. Adaptive reliance on the most stable sensory predictions enhances perceptual feature extraction of moving stimuli. J Neurophysiol 2016; 115:1654-63. [PMID: 26823516 PMCID: PMC4808085 DOI: 10.1152/jn.00850.2015] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2015] [Accepted: 01/26/2016] [Indexed: 11/22/2022] Open
Abstract
The prediction of the sensory outcomes of action is thought to be useful for distinguishing self- vs. externally generated sensations, correcting movements when sensory feedback is delayed, and learning predictive models for motor behavior. Here, we show that aspects of another fundamental function, perception, are enhanced when they entail the contribution of predicted sensory outcomes, and that this enhancement relies on the adaptive use of the most stable predictions available. We combined a motor-learning paradigm that imposes new sensory predictions with a dynamic visual search task to first show that perceptual feature extraction of a moving stimulus is poorer when it is based on sensory feedback that is misaligned with those predictions. This was possible because our novel experimental design allowed us to override the “natural” sensory predictions present when any action is performed and separately examine the influence of these two sources on perceptual feature extraction. We then show that if the new predictions induced via motor learning are unreliable, subjects do not simply rely on sensory information for perceptual judgments, as is conventionally thought, but instead adaptively transition to using other stable sensory predictions to maintain greater accuracy in their judgments. Finally, we show that when sensory predictions are not modified at all, these judgments are sharper when subjects combine their natural predictions with sensory feedback. Collectively, our results highlight the crucial contribution of sensory predictions to perception and also suggest that the brain intelligently integrates the most stable predictions available with sensory information to maintain high fidelity in perceptual decisions.
Collapse
Affiliation(s)
- Neeraj Kumar
- Centre for Cognitive Science, Indian Institute of Technology Gandhinagar, Ahmedabad, Gujarat, India
| | - Pratik K Mutha
- Centre for Cognitive Science, Indian Institute of Technology Gandhinagar, Ahmedabad, Gujarat, India
- Department of Biological Engineering, Indian Institute of Technology Gandhinagar, Ahmedabad, Gujarat, India
| |
Collapse
|
26
|
Yavari F, Towhidkhah F, Ahmadi-Pajouh MA, Darainy M. The role of internal forward models and proprioception in hand position estimation. J Integr Neurosci 2015; 14:403-18. [PMID: 26307154 DOI: 10.1142/s0219635215500168] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Our ability to move and react appropriately in different situations depends largely on our perception of our limbs' position. At least three sources - vision, proprioception, and internal forward models (FMs) - seem to contribute to this perception. To the best of our knowledge, the effect of each source has not been studied individually; in particular, the role of the FM has been ignored in some previous studies. We hypothesized that the FM plays a critical role in subjects' perception, which needs to be considered in the relevant studies to obtain more reliable results. We therefore designed an experiment to investigate the roles of the FM and proprioception in subjects' perception of their hand's position. Three groups of subjects were recruited. By design, subjects in the different groups relied on proprioception, the FM, or both to estimate their unseen hand's position. Comparing the three groups revealed significant differences in their estimation errors: the FM produced the minimum estimation error, proprioception showed a bias error in the tested region, and integrating proprioception with the FM decreased this error. Simulating the integration of two Gaussian functions, fitted to the error distributions of the FM and proprioception groups, produced a mean error close to the experimental observation. These results suggest that the role of the FM needs to be considered when studying the perceived position of the limbs. Doing so can yield better insight into the mechanisms underlying the perception of limb position, which may have clinical and rehabilitation applications, for example in the postural control of elderly people, who are at high risk of falls and injury because their perception deteriorates with age.
Collapse
Affiliation(s)
- Fatemeh Yavari
- Neurocognitive Laboratory, Iranian National Center for Addiction Studies (INCAS), Tehran University of Medical Sciences, Tehran, Iran
- Biomedical Engineering Department, Amirkabir University of Technology, Tehran, Iran
| | - Farzad Towhidkhah
- Biomedical Engineering Department, Amirkabir University of Technology, Tehran, Iran
| | | | - Mohammad Darainy
- Department of Psychology, McGill University, Montreal, QC, Canada
| |
Collapse
|
27
|
Valadao DF, Anderson B, Danckert J. Examining the influence of working memory on updating mental models. Q J Exp Psychol (Hove) 2014; 68:1442-56. [PMID: 25406912 DOI: 10.1080/17470218.2014.989866] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
The ability to accurately build and update mental representations of our environment depends on our ability to integrate information over a variety of time scales and detect changes in the regularity of events. As such, the cognitive mechanisms that support model building and updating are likely to interact with those involved in working memory (WM). To examine this, we performed three experiments that manipulated WM demands concurrently with the need to attend to regularities in other stimulus properties (i.e., location and shape). That is, participants completed a prediction task while simultaneously performing an n-back WM task with either no load or a moderate load. The distribution of target locations (Experiment 1) or shapes (Experiments 2 and 3) included some level of probabilistic regularity, which, unbeknown to participants, changed abruptly within each block. Moderate WM load hampered the ability to benefit from target regularities and to adapt to changes in those regularities (i.e., the prediction task). This was most pronounced when both prediction and WM requirements shared the same target feature. Our results show that representational updating depends on free WM resources in a domain-specific fashion.
Collapse
Affiliation(s)
- Derick F Valadao
- Department of Psychology, University of Waterloo, Waterloo, ON, Canada
| | | | | |
Collapse
|
28
|
Wiener M, Thompson JC, Coslett HB. Continuous carryover of temporal context dissociates response bias from perceptual influence for duration. PLoS One 2014; 9:e100803. [PMID: 24963624 PMCID: PMC4071004 DOI: 10.1371/journal.pone.0100803] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2014] [Accepted: 05/29/2014] [Indexed: 12/05/2022] Open
Abstract
Recent experimental evidence suggests that the perception of temporal intervals is influenced by the temporal context in which they are presented. A longstanding example is the time-order error, wherein the perception of two intervals relative to one another is influenced by the order in which they are presented. Here, we test whether the perception of temporal intervals in an absolute judgment task is influenced by the preceding temporal context. Human subjects participated in a temporal bisection task with no anchor durations (partition method). Intervals were demarcated by a Gaussian blob (visual condition) or burst of white noise (auditory condition) that persisted for one of seven logarithmically spaced sub-second intervals. Crucially, the order in which stimuli were presented was first-order counterbalanced, allowing us to measure the carryover effect of every successive combination of intervals. The results demonstrated a number of distinct findings. First, the perception of each interval was biased by the prior response, such that each interval was judged similarly to the preceding trial. Second, the perception of each interval was also influenced by the prior interval, such that perceived duration shifted away from the preceding interval. Additionally, the effect of decision bias was larger for visual intervals, whereas auditory intervals engendered greater perceptual carryover. We quantified these effects by designing a biologically inspired computational model that measures noisy representations of time against an adaptive memory prior while simultaneously accounting for uncertainty, consistent with a Bayesian heuristic. We found that our model could account for all of the effects observed in human data. Additionally, our model could only accommodate both carryover effects when uncertainty and memory were calculated separately, suggesting separate neural representations for each.
These findings demonstrate that time is susceptible to similar carryover effects as other basic stimulus attributes, and that the brain rapidly adapts to temporal context.
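The core ingredients of such a model, a noisy measurement shrunk toward an adaptive memory prior, can be sketched in a few lines. This is a simplified illustration of the general mechanism, not the authors' model; the Weber fraction, prior variance, and learning rate are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)
# seven logarithmically spaced sub-second intervals, as in the task
intervals = np.exp(np.linspace(np.log(0.3), np.log(0.9), 7))
w = 0.2            # assumed sensory Weber fraction: sigma = w * t
var_prior = 0.02   # assumed variance of the memory prior
alpha = 0.1        # assumed rate at which the prior tracks recent stimuli

prior_mean = float(intervals.mean())
stimuli, estimates = [], []
for _ in range(500):
    t = float(rng.choice(intervals))
    m = t + rng.normal(0, w * t)          # noisy measurement of the interval
    var_m = (w * t) ** 2
    k = var_m / (var_m + var_prior)       # noisier measurements shrink more
    est = (1 - k) * m + k * prior_mean    # estimate pulled toward the prior
    prior_mean += alpha * (m - prior_mean)  # adaptive memory prior updates
    stimuli.append(t)
    estimates.append(est)

slope = np.polyfit(stimuli, estimates, 1)[0]
print(slope)   # slope below 1 reflects central tendency
```

Because the prior itself drifts toward recent measurements, each estimate carries a trace of the preceding trials, reproducing the kind of carryover the paper measures, alongside the classic regression toward the mean.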
Collapse
Affiliation(s)
- Martin Wiener
- Department of Neurology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Department of Psychology, George Mason University, Fairfax, Virginia, United States of America
| | - James C. Thompson
- Department of Psychology, George Mason University, Fairfax, Virginia, United States of America
| | - H. Branch Coslett
- Department of Neurology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
| |
Collapse
|
29
|
Elhilali M. Bayesian inference in auditory scenes. Annu Int Conf IEEE Eng Med Biol Soc 2013; 2013:2792-2795. [PMID: 24110307 PMCID: PMC5983886 DOI: 10.1109/embc.2013.6610120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
The cocktail party problem is a multi-faceted challenge which encompasses various aspects of auditory perception. Its processes underlie the brain's ability to detect, identify and classify sound objects; to robustly represent and maintain speech intelligibility amidst severe distortions; and to guide actions and behaviors in line with complex goals and shifting acoustic soundscapes. Here, we present a perspective that considers Bayesian inference as a powerful unifying framework to integrate the role of sensory cues as well as stimulus-driven priors and top-down schemas, including attention.
Collapse
|