1. McIntire G, Dopkins S. Super-optimality and relative distance coding in location memory. Mem Cognit 2024; 52:1439-1450. PMID: 38519780. DOI: 10.3758/s13421-024-01553-4.
Abstract
The prevailing model of landmark integration in location memory is Maximum Likelihood Estimation, which assumes that each landmark implies a target location distribution that is narrower for more reliable landmarks. This model assumes weighted linear combination of landmarks and predicts that, given optimal integration, the reliability with multiple landmarks is the sum of the reliabilities with the individual landmarks. Super-optimality is reliability with multiple landmarks exceeding optimal reliability given the reliability with each landmark alone; this is shown when performance exceeds predicted optimal performance, found by aggregating reliability values with single landmarks. Past studies claiming super-optimality have provided arguably impure measures of performance with single landmarks given that multiple landmarks were presented at study in conditions with a single landmark at test, disrupting encoding specificity and thereby leading to underestimation in predicted optimal performance. This study, unlike those prior studies, only presented a single landmark at study and the same landmark at test in single landmark trials, showing super-optimality conclusively. Given that super-optimal information integration occurs, emergent information, that is, information only available with multiple landmarks, must be used. With the target and landmarks all in a line, as throughout this study, relative distance is the only emergent information available. Use of relative distance was confirmed here by finding that, when both landmarks are left of the target at study, the target is remembered further right of its true location the further left the left landmark is moved from study to test.
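The MLE benchmark described above can be stated compactly: with reliability defined as inverse variance, optimal integration predicts that the combined-landmark reliability equals the sum of the single-landmark reliabilities. A minimal sketch of that prediction and the super-optimality test (the numbers are invented for illustration, not data from the paper):

```python
import math

def optimal_combined_sd(sds):
    """MLE prediction: reliabilities (1/sd^2) of independent cues add,
    so the optimal combined standard deviation is 1/sqrt(sum of reliabilities)."""
    return 1.0 / math.sqrt(sum(1.0 / sd ** 2 for sd in sds))

# Hypothetical single-landmark localization errors (sd, arbitrary units)
sd_left, sd_right = 2.0, 3.0
sd_optimal = optimal_combined_sd([sd_left, sd_right])

# Super-optimality: the observed two-landmark sd beats the MLE bound
sd_observed = 1.5
print(round(sd_optimal, 3))       # 1.664
print(sd_observed < sd_optimal)   # True -> super-optimal performance
```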
Affiliation(s)
- Gordon McIntire
- Department of Psychological and Brain Sciences, Cognitive Neuroscience Area, The George Washington University, 2013 H Street, Washington, DC, 20006, USA.
- Stephen Dopkins
- Department of Psychological and Brain Sciences, Cognitive Neuroscience Area, The George Washington University, 2013 H Street, Washington, DC, 20006, USA
2. Crane AL, Feyten LEA, Preagola AA, Ferrari MCO, Brown GE. Uncertainty about predation risk: a conceptual review. Biol Rev Camb Philos Soc 2024; 99:238-252. PMID: 37839808. DOI: 10.1111/brv.13019.
Abstract
Uncertainty has long been of interest to economists and psychologists and has more recently gained attention among ecologists. In the ecological world, animals must regularly make decisions related to finding resources and avoiding threats. Here, we describe uncertainty as a perceptual phenomenon of decision-makers, and we focus specifically on the functional ecology of such uncertainty regarding predation risk. Like all uncertainty, uncertainty about predation risk reflects informational limitations. When cues are available, they may be novel (i.e. unknown information), incomplete, unreliable, overly abundant and complex, or conflicting. We review recent studies that have used these informational limitations to induce uncertainty of predation risk. These studies have typically used either over-responses to novelty (i.e. neophobia) or memory attenuation as proxies for measuring uncertainty. Because changes in the environment, particularly unpredictable changes, drive informational limitations, we describe studies assessing unpredictable variance in spatio-temporal predation risk, intensity of predation risk, predator encounter rate, and predator diversity. We also highlight anthropogenic changes within habitats that are likely to have dramatic impacts on information availability and thus uncertainty in antipredator decisions in the modern world.
Affiliation(s)
- Adam L Crane
- WCVM, Biomedical Sciences, University of Saskatchewan, 52 Campus Dr., Saskatoon, SK, S7N 5B4, Canada
- Department of Biology, Concordia University, 7141 Sherbrooke St. W., Montreal, QC, H4B 1R6, Canada
- Laurence E A Feyten
- Department of Biology, Concordia University, 7141 Sherbrooke St. W., Montreal, QC, H4B 1R6, Canada
- Alexyz A Preagola
- Department of Biology, University of Saskatchewan, 112 Science Pl., Saskatoon, SK, S7N 5E2, Canada
- Maud C O Ferrari
- WCVM, Biomedical Sciences, University of Saskatchewan, 52 Campus Dr., Saskatoon, SK, S7N 5B4, Canada
- Grant E Brown
- Department of Biology, Concordia University, 7141 Sherbrooke St. W., Montreal, QC, H4B 1R6, Canada
3. Kemp JT, Cesanek E, Domini F. Perceiving depth from texture and disparity cues: Evidence for a non-probabilistic account of cue integration. J Vis 2023; 23:13. PMID: 37486299. PMCID: PMC10382782. DOI: 10.1167/jov.23.7.13.
Abstract
Bayesian inference theories have been extensively used to model how the brain derives three-dimensional (3D) information from ambiguous visual input. In particular, the maximum likelihood estimation (MLE) model combines estimates from multiple depth cues according to their relative reliability to produce the most probable 3D interpretation. Here, we tested an alternative theory of cue integration, termed the intrinsic constraint (IC) theory, which postulates that the visual system derives the most stable, not most probable, interpretation of the visual input amid variations in viewing conditions. The vector sum model provides a normative approach for achieving this goal, where individual cue estimates are components of a multidimensional vector whose norm determines the combined estimate. Individual cue estimates are not accurate but are related to distal 3D properties through a deterministic mapping. In three experiments, we show that the IC theory can more adeptly account for 3D cue integration than MLE models. In Experiment 1, we show systematic biases in the perception of depth from texture and depth from binocular disparity. Critically, we demonstrate that the vector sum model predicts an increase in perceived depth when these cues are combined. In Experiment 2, we illustrate the IC theory's radical reinterpretation of the just noticeable difference (JND) and test the related vector sum model prediction of the classic finding of smaller JNDs for combined-cue versus single-cue stimuli. In Experiment 3, we confirm the vector sum prediction that biases found in cue integration experiments cannot be attributed to flatness cues, as the MLE model predicts.
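The classic MLE prediction mentioned above, that combined-cue JNDs are smaller than either single-cue JND, follows directly from inverse-variance combination. A sketch with illustrative numbers (not data from the paper):

```python
import math

def mle_combined_jnd(jnd_texture, jnd_disparity):
    """Under MLE, thresholds combine like sds of independent estimates:
    1/J_c^2 = 1/J_t^2 + 1/J_d^2."""
    return 1.0 / math.sqrt(1.0 / jnd_texture ** 2 + 1.0 / jnd_disparity ** 2)

jnd_t, jnd_d = 0.8, 0.6   # hypothetical single-cue JNDs (e.g., cm of depth)
jnd_c = mle_combined_jnd(jnd_t, jnd_d)
print(round(jnd_c, 3))             # 0.48, below both single-cue JNDs
assert jnd_c < min(jnd_t, jnd_d)   # the "smaller combined JND" signature
```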
Affiliation(s)
- Jovan T Kemp
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- Evan Cesanek
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Fulvio Domini
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- Italian Institute of Technology, Rovereto, Italy
4. Domini F. The case against probabilistic inference: a new deterministic theory of 3D visual processing. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210458. PMID: 36511407. PMCID: PMC9745883. DOI: 10.1098/rstb.2021.0458.
Abstract
How the brain derives 3D information from inherently ambiguous visual input remains the fundamental question of human vision. The past two decades of research have addressed this question as a problem of probabilistic inference, the dominant model being maximum-likelihood estimation (MLE). This model assumes that independent depth-cue modules derive noisy but statistically accurate estimates of 3D scene parameters that are combined through a weighted average. Cue weights are adjusted based on the system's representation of each module's output variability. Here I demonstrate that the MLE model fails to account for important psychophysical findings and, importantly, misinterprets the just noticeable difference, a hallmark measure of stimulus discriminability, as an estimate of perceptual uncertainty. I propose a new theory, termed Intrinsic Constraint, which postulates that the visual system does not derive the most probable interpretation of the visual input, but rather, the most stable interpretation amid variations in viewing conditions. This goal is achieved with the Vector Sum model, which represents individual cue estimates as components of a multi-dimensional vector whose norm determines the combined output. This model accounts for the psychophysical findings cited in support of MLE, while predicting existing and new findings that contradict the MLE model. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
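The contrast between the two combination rules described above can be sketched in a few lines. The cue estimates below are hypothetical; the point is only that the Vector Sum output is the norm of the component estimates, so adding a cue increases the combined estimate, whereas an MLE weighted average stays between the single-cue values:

```python
import math

def mle_combine(estimates, sds):
    """MLE: inverse-variance-weighted average of the cue estimates."""
    weights = [1.0 / sd ** 2 for sd in sds]
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

def vector_sum_combine(estimates):
    """Intrinsic Constraint: combined output is the Euclidean norm of the
    vector whose components are the single-cue estimates."""
    return math.sqrt(sum(e ** 2 for e in estimates))

texture_depth, disparity_depth = 3.0, 4.0   # hypothetical single-cue outputs
print(mle_combine([texture_depth, disparity_depth], [1.0, 1.0]))  # 3.5
print(vector_sum_combine([texture_depth, disparity_depth]))       # 5.0
```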
Affiliation(s)
- Fulvio Domini
- CLPS, Brown University, 190 Thayer Street, Providence, RI 02912-9067, USA
5. Adams H, Stefanucci J, Creem-Regehr S, Pointon G, Thompson W, Bodenheimer B. Shedding Light on Cast Shadows: An Investigation of Perceived Ground Contact in AR and VR. IEEE Trans Vis Comput Graph 2022; 28:4624-4639. PMID: 34280102. DOI: 10.1109/tvcg.2021.3097978.
Abstract
Virtual objects in augmented reality (AR) often appear to float atop real-world surfaces, which makes it difficult to determine where they are positioned in space. This is problematic, as many applications for AR require accurate spatial perception. In the current study, we examine how the way we render cast shadows, which act as an important monocular depth cue for creating a sense of contact between an object and the surface beneath it, impacts spatial perception. Over two experiments, we evaluate people's sense of surface contact given both traditional and non-traditional shadow shading methods in optical see-through augmented reality (OST AR), video see-through augmented reality (VST AR), and virtual reality (VR) head-mounted displays. Our results provide evidence that non-traditional shading techniques for rendering shadows in AR displays may enhance the accuracy of one's perception of surface contact. This finding implies a possible tradeoff between photorealism and accuracy of depth perception, especially in OST AR displays. However, it also supports the use of more stylized graphics, like non-traditional cast shadows, to improve perception and interaction in AR applications.
6. Scarfe P. Experimentally disambiguating models of sensory cue integration. J Vis 2022; 22:5. PMID: 35019955. PMCID: PMC8762719. DOI: 10.1167/jov.22.1.5.
Abstract
Sensory cue integration is one of the primary areas in which a normative mathematical framework has been used to define the “optimal” way in which to make decisions based upon ambiguous sensory information and compare these predictions to behavior. The conclusion from such studies is that sensory cues are integrated in a statistically optimal fashion. However, numerous alternative computational frameworks exist by which sensory cues could be integrated, many of which could be described as “optimal” based on different criteria. Existing studies rarely assess the evidence relative to different candidate models, resulting in an inability to conclude that sensory cues are integrated according to the experimenter's preferred framework. The aims of the present paper are to summarize and highlight the implicit assumptions rarely acknowledged in testing models of sensory cue integration, as well as to introduce an unbiased and principled method by which to determine, for a given experimental design, the probability with which a population of observers behaving in accordance with one model of sensory integration can be distinguished from the predictions of a set of alternative models.
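One way to read the proposal above is as a power analysis by simulation: generate synthetic observers under a candidate combination rule, add observer-level noise, and ask how often the resulting data favor the true model over an alternative. A toy sketch of that idea; the two models, noise level, and decision criterion are illustrative assumptions, not the paper's method:

```python
import math
import random

random.seed(1)

SD_A, SD_B = 2.0, 3.0                                     # single-cue sds
MLE_PRED = 1.0 / math.sqrt(1 / SD_A**2 + 1 / SD_B**2)     # MLE combined sd
MIN_PRED = min(SD_A, SD_B)                                # "best single cue" model

def simulate_experiment(n_observers=10, observer_noise=0.3):
    """Simulate combined-cue sds for observers who truly integrate via MLE,
    then check whether the sample mean lies closer to the MLE prediction."""
    data = [random.gauss(MLE_PRED, observer_noise) for _ in range(n_observers)]
    mean = sum(data) / len(data)
    return abs(mean - MLE_PRED) < abs(mean - MIN_PRED)

n_sim = 2000
power = sum(simulate_experiment() for _ in range(n_sim)) / n_sim
print(power)   # fraction of simulated experiments that distinguish the models
```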
Affiliation(s)
- Peter Scarfe
- Vision and Haptics Laboratory, School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK.
7. Feyten LEA, Crane AL, Ramnarine IW, Brown GE. Predation risk shapes the use of conflicting personal risk and social safety information in guppies. Behav Ecol 2021. DOI: 10.1093/beheco/arab096.
Abstract
When faced with uncertainty, animals can benefit from using multiple sources of information in order to make an optimal decision. However, information sources (e.g., social and personal cues) may conflict, while also varying in acquisition cost and reliability. Here, we assessed behavioral decisions of Trinidadian guppies (Poecilia reticulata), in situ, when presented with conflicting social and personal information about predation risk. We positioned foraging arenas within high- and low-predation streams, where guppies were exposed to a personal cue in the form of conspecific alarm cues (a known indicator of risk), a novel cue, or a control. At the same time, a conspecific shoal (a social safety cue) was either present or absent. When social safety was absent, guppies in both populations showed typical avoidance responses towards alarm cues, and high-predation guppies showed their typical avoidance of novel cues (i.e., neophobia). However, the presence of social safety cues was persuasive, overriding the neophobia of high-predation guppies and emboldening low-predation guppies to ignore alarm cues. Our experiment is one of the first to empirically assess the use of safety and risk cues in prey and suggests a threshold level of ambient risk which dictates the use of conflicting social and personal information.
Affiliation(s)
- Adam L Crane
- Department of Biology, Concordia University, West, Montreal, Québec, Canada
- Indar W Ramnarine
- Department of Life Sciences, University of the West Indies, St. Augustine, Trinidad and Tobago
- Grant E Brown
- Department of Biology, Concordia University, West, Montreal, Québec, Canada
8. Pladere T, Luguzis A, Zabels R, Smukulis R, Barkovska V, Krauze L, Konosonoka V, Svede A, Krumina G. When virtual and real worlds coexist: Visualization and visual system affect spatial performance in augmented reality. J Vis 2021; 21:17. PMID: 34388233. PMCID: PMC8363769. DOI: 10.1167/jov.21.8.17.
Abstract
New visualization approaches are being actively developed aiming to mitigate the effect of vergence-accommodation conflict in stereoscopic augmented reality; however, high interindividual variability in spatial performance makes it difficult to predict user gain. To address this issue, we investigated the effects of consistent and inconsistent binocular and focus cues on perceptual matching in the stereoscopic environment of augmented reality using a head-mounted display that was driven in multifocal and single focal plane modes. Participants matched the distance of a real object with images projected at three viewing distances, concordant with the display focal planes when driven in the multifocal mode. As a result, consistency of depth cues facilitated faster perceptual judgments on spatial relations. Moreover, the individuals with mild binocular and accommodative disorders benefited from the visualization of information on the focal planes corresponding to image planes more than individuals with normal vision, which was reflected in performance accuracy. Because symptoms and complaints may be absent when the functionality of the sensorimotor system is reduced, the results indicate the need for a detailed assessment of visual functions in research on spatial performance. This study highlights that the development of a visualization system that reduces visual stress and improves user performance should be a priority for the successful implementation of augmented reality displays.
Affiliation(s)
- Tatjana Pladere
- Department of Optometry and Vision Science, Faculty of Physics, Mathematics and Optometry, University of Latvia, Riga, Latvia
- Artis Luguzis
- Department of Optometry and Vision Science, Faculty of Physics, Mathematics and Optometry, University of Latvia, Riga, Latvia
- Laboratory of Statistical Research and Data Analysis, Faculty of Physics, Mathematics and Optometry, University of Latvia, Riga, Latvia
- Viktorija Barkovska
- Department of Optometry and Vision Science, Faculty of Physics, Mathematics and Optometry, University of Latvia, Riga, Latvia
- Linda Krauze
- Department of Optometry and Vision Science, Faculty of Physics, Mathematics and Optometry, University of Latvia, Riga, Latvia
- Vita Konosonoka
- Department of Optometry and Vision Science, Faculty of Physics, Mathematics and Optometry, University of Latvia, Riga, Latvia
- Aiga Svede
- Department of Optometry and Vision Science, Faculty of Physics, Mathematics and Optometry, University of Latvia, Riga, Latvia
- Gunta Krumina
- Department of Optometry and Vision Science, Faculty of Physics, Mathematics and Optometry, University of Latvia, Riga, Latvia
9. Cabibihan JJ, Alhaddad AY, Gulrez T, Yoon WJ. Influence of Visual and Haptic Feedback on the Detection of Threshold Forces in a Surgical Grasping Task. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3068934.
10. A graph-theoretic approach to identifying acoustic cues for speech sound categorization. Psychon Bull Rev 2021; 27:1104-1125. PMID: 32671571. DOI: 10.3758/s13423-020-01748-1.
Abstract
Human speech contains a wide variety of acoustic cues that listeners must map onto distinct phoneme categories. The large amount of information contained in these cues contributes to listeners' remarkable ability to accurately recognize speech across a variety of contexts. However, these cues vary across talkers, both in terms of how specific cue values map onto different phonemes and in terms of which cues individual talkers use most consistently to signal specific phonological contrasts. This creates a challenge for models that aim to characterize the information used to recognize speech. How do we balance the need to account for variability in speech sounds across a wide range of talkers with the need to avoid overspecifying which acoustic cues describe the mapping from speech sounds onto phonological distinctions? We present an approach using tools from graph theory that addresses this issue by creating networks describing connections between individual talkers and acoustic cues and by identifying subgraphs within these networks. This allows us to reduce the space of possible acoustic cues that signal a given phoneme to a subset that still accounts for variability across talkers, simplifying the model and providing insights into which cues are most relevant for specific phonemes. Classifiers trained on the subset of cue dimensions identified in the subgraphs provide fits to listeners' categorization that are similar to those obtained for classifiers trained on all cue dimensions, demonstrating that the subgraphs capture the cues necessary to categorize speech sounds.
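The network construction described above can be illustrated with a toy bipartite graph between talkers and the acoustic cues that reliably signal a contrast for them. The talkers, cue names, and the crude "cues shared by every talker" criterion below are invented for illustration and are far simpler than the paper's subgraph analysis:

```python
# Toy bipartite graph: talker -> set of acoustic cues that reliably signal
# a hypothetical phoneme contrast for that talker.
talker_cues = {
    "talker_1": {"spectral_peak", "duration", "f2_transition"},
    "talker_2": {"spectral_peak", "f2_transition"},
    "talker_3": {"spectral_peak", "duration", "amplitude"},
}

# One very simple "subgraph": the cue nodes connected to every talker node,
# i.e. a reduced cue set that still generalizes across talkers.
core_cues = set.intersection(*talker_cues.values())
print(sorted(core_cues))   # ['spectral_peak']
```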
11. Falkenberg C, Faul F. Transparent layer constancy is improved by motion, stereo disparity, highly regular background pattern, and successive presentation. J Vis 2020; 19:16. PMID: 31622475. DOI: 10.1167/19.12.16.
Abstract
The visual system uses figural and colorimetric regularities in the retinal image to recognize optical filters and to discern the properties of the transparent overlay from properties of the background. Previous work suggests that the perceived color and transmittance of the transparent layer vary less under illumination changes than would be expected from corresponding changes in the input. Here, we tested how the degree of this approximate transparent layer constancy (TLC) depends on factors that presumably facilitate the decomposition into a filter and a background layer. Using an asymmetric filter matching task, we found that motion, stereo disparity, and a highly regular background pattern each contribute to the vividness of the transparency impression and the degree of TLC. Combining these cues led to a cumulative increase in TLC, suggesting a "strong fusion" cue integration process. We also tested objects with invalid figural conditions for transparency (T-junctions). The tendency to perceive these objects as opaque and to establish a proximal match increased the more conspicuous the violation of this figural condition was. Furthermore, we investigated the gain in TLC due to alternating presentation. Alternating presentation enhanced TLC and color constancy to a comparable degree, and our results suggest that adaptation contributes to this effect.
Affiliation(s)
- Franz Faul
- Institut für Psychologie, Universität Kiel, Germany
12. Galle ME, Klein-Packard J, Schreiber K, McMurray B. What Are You Waiting For? Real-Time Integration of Cues for Fricatives Suggests Encapsulated Auditory Memory. Cogn Sci 2020; 43. PMID: 30648798. DOI: 10.1111/cogs.12700.
Abstract
Speech unfolds over time, and the cues for even a single phoneme are rarely available simultaneously. Consequently, to recognize a single phoneme, listeners must integrate material over several hundred milliseconds. Prior work contrasts two accounts: (a) a memory buffer account in which listeners accumulate auditory information in memory and only access higher level representations (i.e., lexical representations) when sufficient information has arrived; and (b) an immediate integration scheme in which lexical representations can be partially activated on the basis of early cues and then updated when more information arises. These studies have uniformly shown evidence for immediate integration for a variety of phonetic distinctions. We attempted to extend this to fricatives, a class of speech sounds which requires not only temporal integration of asynchronous cues (the frication, followed by the formant transitions 150-350 ms later), but also integration across different frequency bands and compensation for contextual factors like coarticulation. Eye movements in the visual world paradigm showed clear evidence for a memory buffer. Results were replicated in five experiments, ruling out methodological factors and tying the release of the buffer to the onset of the vowel. These findings support a general auditory account for speech by suggesting that the acoustic nature of particular speech sounds may have large effects on how they are processed. It also has major implications for theories of auditory and speech perception by raising the possibility of an encapsulated memory buffer in early auditory processing.
Affiliation(s)
- Marcus E Galle
- Department of Psychological and Brain Sciences, University of Iowa
- Bob McMurray
- Department of Psychological and Brain Sciences, University of Iowa; Department of Communication Sciences and Disorders, University of Iowa; Department of Linguistics, University of Iowa; Department of Otolaryngology, University of Iowa
13. Broadbent H, Osborne T, Mareschal D, Kirkham N. Are two cues always better than one? The role of multiple intra-sensory cues compared to multi-cross-sensory cues in children's incidental category learning. Cognition 2020; 199:104202. PMID: 32087397. DOI: 10.1016/j.cognition.2020.104202.
Abstract
Simultaneous presentation of multisensory cues has been found to facilitate children's learning to a greater extent than unisensory cues (e.g., Broadbent, White, Mareschal, & Kirkham, 2017). Current research into children's multisensory learning, however, does not address whether these findings are due to having multiple cross-sensory cues that enhance stimuli perception or a matter of having multiple cues, regardless of modality, that are informative to category membership. The current study examined the role of multiple cross-sensory cues (e.g., audio-visual) compared to multiple intra-sensory cues (e.g., two visual cues) on children's incidental category learning. On a computerized incidental category learning task, children aged six to ten years (N = 454) were allocated to either a visual-only (V: unisensory), auditory-only (A: unisensory), audio-visual (AV: multisensory), Visual-Visual (VV: multi-cue) or Auditory-Auditory (AA: multi-cue) condition. In children over eight years of age, the availability of two informative cues, regardless of whether they had been presented across two different modalities or within the same modality, was found to be more beneficial to incidental learning than with unisensory cues. In six-year-olds, however, the presence of multiple auditory cues (AA) did not facilitate learning to the same extent as multiple visual cues (VV) or when cues were presented across two different modalities (AV). The findings suggest that multiple sensory cues presented across or within modalities may have differential effects on children's incidental learning across middle childhood, depending on the sensory domain in which they are presented. Implications for the use of multi-cross-sensory and multiple-intra-sensory cues for children's learning across this age range are discussed.
Affiliation(s)
- H Broadbent
- Royal Holloway, University of London, United Kingdom of Great Britain and Northern Ireland; Centre for Brain and Cognitive Development, Birkbeck University of London, United Kingdom of Great Britain and Northern Ireland.
- T Osborne
- Centre for Brain and Cognitive Development, Birkbeck University of London, United Kingdom of Great Britain and Northern Ireland
- D Mareschal
- Centre for Brain and Cognitive Development, Birkbeck University of London, United Kingdom of Great Britain and Northern Ireland
- N Kirkham
- Centre for Brain and Cognitive Development, Birkbeck University of London, United Kingdom of Great Britain and Northern Ireland
14. Agopyan H, Griffet J, Poirier T, Bredin J. Modification of knee flexion during walking with use of a real-time personalized avatar. Heliyon 2019; 5:e02797. PMID: 31844726. PMCID: PMC6895732. DOI: 10.1016/j.heliyon.2019.e02797.
Abstract
Visual feedback is used in different research areas, including clinical science and neuroscience. In this study, we investigated the influence of the visualization of a real-time personalized avatar on gait parameters, focusing on knee flexion during the swing phase. We also studied the impact of modifying the avatar's knee amplitude on the knee kinematics of healthy subjects. For this purpose, we used an immersive reality treadmill equipment and developed a 3D avatar, with instantly modifiable parameters for knee flexion and extension (acceleration or deceleration). Fourteen healthy young adults, equipped with motion capture markers, were asked to walk at a self-selected pace on the treadmill. A real-time 3D image of their lower limbs was modeled and projected on the screen ahead of them, as if in a walking motion from left to right. The subjects were instructed to continue walking. When we initiated an increase in the knee flexion of the avatar, we observed a similar increase in the subjects' knee flexion. No significant results were observed when the modification involved a decrease in knee flexion. The results and their significance are discussed using theories encompassing empathy, sympathy and sensory re-calibration. The prospect of using this type of modified avatar for stroke rehabilitation is discussed.
Affiliation(s)
- H Agopyan
- Université côte d'azur, LAMHESS, Nice, France
- J Griffet
- Chirurgie Orthopédique Pédiatrique, Hôpital Couple Enfant, Centre Hospitalier Universitaire de Grenoble, BP 217, 38043 Grenoble cedex 9, France
- J Bredin
- Université côte d'azur, LAMHESS, Nice, France; Centre de Santé Institut Rossetti-PEP06, Unité Clinique d'Analyse du Mouvement, 400, bld de la Madeleine, 06000 Nice, France
15. Bejjanki VR, Randrup ER, Aslin RN. Young children combine sensory cues with learned information in a statistically efficient manner: But task complexity matters. Dev Sci 2019; 23:e12912. PMID: 31608526. DOI: 10.1111/desc.12912.
Abstract
Human adults are adept at mitigating the influence of sensory uncertainty on task performance by integrating sensory cues with learned prior information, in a Bayes-optimal fashion. Previous research has shown that young children and infants are sensitive to environmental regularities, and that the ability to learn and use such regularities is involved in the development of several cognitive abilities. However, it has also been reported that children younger than 8 do not combine simultaneously available sensory cues in a Bayes-optimal fashion. Thus, it remains unclear whether, and by what age, children can combine sensory cues with learned regularities in an adult manner. Here, we examine the performance of 6- to 7-year-old children when tasked with localizing a 'hidden' target by combining uncertain sensory information with prior information learned over repeated exposure to the task. We demonstrate that 6- to 7-year-olds learn task-relevant statistics at a rate on par with adults, and like adults, are capable of integrating learned regularities with sensory information in a statistically efficient manner. We also show that variables such as task complexity can influence young children's behavior to a greater extent than that of adults, leading their behavior to look sub-optimal. Our findings have important implications for how we should interpret failures in young children's ability to carry out sophisticated computations. These 'failures' need not be attributed to deficits in the fundamental computational capacity available to children early in development, but rather to ancillary immaturities in general cognitive abilities that mask the operation of these computations in specific situations.
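The computation the children are credited with above is the standard Gaussian cue-plus-prior combination: the posterior estimate averages the sensory estimate and the learned prior mean, each weighted by its reliability. A sketch with invented numbers:

```python
def combine_cue_with_prior(sensory_est, sensory_sd, prior_mean, prior_sd):
    """Bayes-optimal (Gaussian) combination: weights proportional to 1/variance."""
    w_sens = 1.0 / sensory_sd ** 2
    w_prior = 1.0 / prior_sd ** 2
    mean = (w_sens * sensory_est + w_prior * prior_mean) / (w_sens + w_prior)
    sd = (w_sens + w_prior) ** -0.5
    return mean, sd

# Hypothetical trial: blurry sensory cue at x=10, learned prior centered at x=4
est, sd = combine_cue_with_prior(10.0, 2.0, 4.0, 1.0)
print(round(est, 2), round(sd, 2))   # 5.2 0.89, estimate pulled toward the prior
```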
Affiliation(s)
- Vikranth R Bejjanki: Department of Psychology, Hamilton College, Clinton, NY, USA; Program in Neuroscience, Hamilton College, Clinton, NY, USA
- Emily R Randrup: Department of Psychology, Hamilton College, Clinton, NY, USA
16. Gloriani AH, Schütz AC. Humans Trust Central Vision More Than Peripheral Vision Even in the Dark. Curr Biol 2019; 29:1206-1210.e4. [PMID: 30905606] [PMCID: PMC6453110] [DOI: 10.1016/j.cub.2019.02.023]
Abstract
Two types of photoreceptors in the human retina support vision across a wide range of luminances: cones are active under bright daylight illumination (photopic viewing) and rods under dim illumination at night (scotopic viewing). These photoreceptors are distributed inhomogeneously across the retina [1]: cone-receptor density peaks at the center of the visual field (i.e., the fovea) and declines toward the periphery, allowing for high-acuity vision at the fovea in daylight. Rod receptors are absent from the fovea, leading to a functional foveal scotoma in night vision. In order to make optimal perceptual decisions, the visual system requires knowledge about its own properties and the relative reliability of signals arriving from different parts of the visual field [2]. Since cone and rod signals converge on the same pathways [3], and their cortical processing is similar except for the foveal scotoma [4], it is unclear if humans can take into account the differences between scotopic and photopic vision when making perceptual decisions. Here, we show that the scotopic foveal scotoma is filled in with information from the immediate surround and that humans trust this inferred information more than veridical information from the periphery of the visual field. We observed a similar preference under daylight illumination, indicating that humans have a default preference for information from the fovea even if this information is not veridical, like in night vision. This suggests that filling-in precedes the estimation of confidence, thereby shielding awareness from the foveal scotoma with respect to its contents and its properties.
- Veridical information from the fovea is preferred under photopic viewing
- Information missing in the scotopic foveal scotoma is filled in from the surround
- Inferred information from the fovea is preferred under scotopic viewing
- Content and properties of the foveal scotopic scotoma are hidden from awareness
Affiliation(s)
- Alejandro H Gloriani: Department of Psychology, University of Marburg, Gutenbergstr. 18, 35032 Marburg, Germany
- Alexander C Schütz: Department of Psychology, University of Marburg, Gutenbergstr. 18, 35032 Marburg, Germany
17. Ryu D, Oh S. The effect of good continuation on the contact order judgment of causal events. J Vis 2018; 18:5. [PMID: 30347092] [DOI: 10.1167/18.11.5]
Abstract
When a ball on a pool table moves and strikes another ball, observers form a causal impression: the first ball is seen to cause the second ball's motion, a phenomenon known as the launching effect. Previous research has shown that the causal impression becomes stronger when the two balls have a similar direction of movement. Here, we tested whether this good continuation influenced perception of the contact time between the causal object and the effect object. A variant of Michotte's visual collision event was used as a stimulus, consisting of two competing cause objects and one effect object. In the display, the two cause objects on the left begin to move and contact the effect object in the center, causing it to move. In Experiments 1 to 4, the contact order of the cause objects and the motion direction of the effect object were systematically varied. The observers were asked to judge which of the cause objects had a more causal relationship and made contact first. The results showed that the observers were more likely to judge a cause object as having a more causal relationship with the effect object when there was good continuation, and they often erroneously judged the cause object as having first contacted the effect object; this effect was maintained with up to approximately 100 ms of delay after contact. These results suggest that good continuation is an important cue that postdictively determines perception of the contact time of a cause object in a short time window.
Affiliation(s)
- Daehyun Ryu: Department of Psychology, Seoul National University, Seoul, South Korea
- Songjoo Oh: Department of Psychology, Seoul National University, Seoul, South Korea
18. Mikula L, Gaveau V, Pisella L, Khan AZ, Blohm G. Learned rather than online relative weighting of visual-proprioceptive sensory cues. J Neurophysiol 2018; 119:1981-1992. [PMID: 29465322] [DOI: 10.1152/jn.00338.2017]
Abstract
When reaching to an object, information about the target location as well as the initial hand position is required to program the motor plan for the arm. The initial hand position can be determined by proprioceptive information as well as visual information, if available. Bayes-optimal integration posits that we utilize all information available, with greater weighting on the sense that is more reliable, thus generally weighting visual information more than the usually less reliable proprioceptive information. The criterion by which information is weighted has not been explicitly investigated; it has been assumed that the weights are based on task- and effector-dependent sensory reliability requiring an explicit neuronal representation of variability. However, the weights could also be determined implicitly through learned modality-specific integration weights and not on effector-dependent reliability. While the former hypothesis predicts different proprioceptive weights for left and right hands, e.g., due to different reliabilities of dominant vs. nondominant hand proprioception, we would expect the same integration weights if the latter hypothesis was true. We found that the proprioceptive weights for the left and right hands were extremely consistent regardless of differences in sensory variability for the two hands as measured in two separate complementary tasks. Thus we propose that proprioceptive weights during reaching are learned across both hands, with high interindividual range but independent of each hand's specific proprioceptive variability. NEW & NOTEWORTHY How visual and proprioceptive information about the hand are integrated to plan a reaching movement is still debated. The goal of this study was to clarify how the weights assigned to vision and proprioception during multisensory integration are determined. We found evidence that the integration weights are modality specific rather than based on the sensory reliabilities of the effectors.
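The two hypotheses contrasted in this abstract make different predictions about the weight given to proprioception. Under the reliability-based account, that weight follows from each hand's sensory variances; under the learned-weight account it is a fixed, modality-specific constant. A sketch of the reliability-based prediction, with hypothetical variance values:

```python
def proprioceptive_weight(var_vision, var_proprio):
    """Reliability-based weight on proprioception when estimating hand
    position from vision and proprioception (inverse-variance weighting)."""
    return (1 / var_proprio) / (1 / var_proprio + 1 / var_vision)

# Under this account, hands with different proprioceptive variance should
# receive different weights; a learned modality-specific weight would not vary.
w_left = proprioceptive_weight(var_vision=1.0, var_proprio=4.0)   # less reliable hand
w_right = proprioceptive_weight(var_vision=1.0, var_proprio=2.0)  # more reliable hand
```

The study's finding of near-identical weights for the two hands despite measurably different variances is what argues against this effector-specific computation.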
Affiliation(s)
- Laura Mikula: Centre de Recherche en Neurosciences de Lyon, ImpAct Team, INSERM U1028, CNRS UMR 5292, Lyon 1 University, Bron Cedex, France; School of Optometry, University of Montreal, Montreal, Quebec, Canada
- Valérie Gaveau: Centre de Recherche en Neurosciences de Lyon, ImpAct Team, INSERM U1028, CNRS UMR 5292, Lyon 1 University, Bron Cedex, France
- Laure Pisella: Centre de Recherche en Neurosciences de Lyon, ImpAct Team, INSERM U1028, CNRS UMR 5292, Lyon 1 University, Bron Cedex, France
- Aarlenne Z Khan: School of Optometry, University of Montreal, Montreal, Quebec, Canada
- Gunnar Blohm: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
19. Billino J, Drewing K. Age Effects on Visuo-Haptic Length Discrimination: Evidence for Optimal Integration of Senses in Senior Adults. Multisens Res 2018; 31:273-300. [PMID: 31264626] [DOI: 10.1163/22134808-00002601]
Abstract
Demographic changes in most developed societies have fostered research on functional aging. While cognitive changes have been characterized in detail, understanding of perceptual aging lags behind. We investigated age effects on the mechanisms by which multiple sources of sensory information are merged into a common percept. We studied visuo-haptic integration in a length discrimination task. A total of 24 young (20-25 years) and 27 senior (69-77 years) adults compared standard stimuli to appropriate sets of comparison stimuli. Standard stimuli were explored under visual, haptic, or visuo-haptic conditions. The task procedure allowed introducing an intersensory conflict by anamorphic lenses. Comparison stimuli were exclusively explored haptically. We derived psychometric functions for each condition, determining points of subjective equality and discrimination thresholds. We notably evaluated visuo-haptic perception by different models of multisensory processing, i.e., the Maximum-Likelihood-Estimate model of optimal cue integration, a suboptimal integration model, and a cue switching model. Our results support robust visuo-haptic integration across the adult lifespan. We found suboptimal weighted averaging of sensory sources in young adults, whereas senior adults exploited differential sensory reliabilities more efficiently to optimize thresholds. Indeed, evaluation of the MLE model indicates that young adults underweighted visual cues by more than 30%; in contrast, visual weights of senior adults deviated only by about 3% from predictions. We suggest that close to optimal multisensory integration might contribute to successful compensation for age-related sensory losses and provides a critical resource. Differentiation between multisensory integration during healthy aging and age-related pathological challenges on the sensory systems awaits further exploration.
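The Maximum-Likelihood-Estimate model against which both age groups are evaluated predicts the bimodal discrimination threshold directly from the two unimodal thresholds. A minimal sketch of that prediction (the JND values are illustrative, not the study's measurements):

```python
import math

def mle_combined_threshold(jnd_visual, jnd_haptic):
    """MLE prediction for the visuo-haptic discrimination threshold from
    the unimodal JNDs: sigma_VH^2 = sigma_V^2 * sigma_H^2 / (sigma_V^2 + sigma_H^2)."""
    v2, h2 = jnd_visual ** 2, jnd_haptic ** 2
    return math.sqrt(v2 * h2 / (v2 + h2))

# The predicted combined threshold is always below the better single sense;
# the maximum gain (a factor of sqrt(2)) occurs when the two JNDs are equal.
combined = mle_combined_threshold(3.0, 4.0)  # below min(3.0, 4.0)
```

Observed bimodal thresholds at or near this prediction are what the paper counts as optimal integration; thresholds above it indicate suboptimal weighting.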
Affiliation(s)
- Jutta Billino: Department of Psychology, Justus-Liebig-Universität, Otto-Behaghel-Str. 10F, 35394 Giessen, Germany
- Knut Drewing: Department of Psychology, Justus-Liebig-Universität, Otto-Behaghel-Str. 10F, 35394 Giessen, Germany
20. Buxó-Lugo A, Toscano JC, Watson DG. Effects of Participant Engagement on Prosodic Prominence. Discourse Processes 2018; 55:305-323. [PMID: 31097846] [DOI: 10.1080/0163853x.2016.1240742]
Abstract
It is generally assumed that prosodic cues that provide linguistic information, like discourse status, are driven primarily by the information structure of the conversation. This article investigates whether speakers have the capacity to adjust subtle acoustic-phonetic properties of the prosodic signal when they find themselves in contexts in which accurate communication is important. Thus, we examine whether the communicative context, in addition to discourse structure, modulates prosodic choices when speakers produce acoustic prominence. We manipulated the discourse status of target words in the context of a highly communicative task (i.e., working with a partner to solve puzzles in the computer game Minecraft) and in the context of a less communicative task more typical of psycholinguistic experiments (i.e., picture description). Speakers in the more communicative task produced prosodic cues to discourse structure that were more discriminable than those in the less communicative task. In a second experiment, we found that the presence or absence of a conversational partner drove some, but not all, of these effects. Together, these results suggest that speakers can modulate the prosodic signal in response to the communicative and social context.
Affiliation(s)
- Andrés Buxó-Lugo: Department of Psychology and Beckman Institute, University of Illinois at Urbana-Champaign, Champaign, Illinois, USA
- Joseph C Toscano: Department of Psychology, Villanova University, Villanova, Pennsylvania, USA
- Duane G Watson: Department of Psychology and Beckman Institute, University of Illinois at Urbana-Champaign, Champaign, Illinois, USA
21. Fulvio JM, Rokers B. Use of cues in virtual reality depends on visual feedback. Sci Rep 2017; 7:16009. [PMID: 29167491] [PMCID: PMC5700175] [DOI: 10.1038/s41598-017-16161-3]
Abstract
3D motion perception is of central importance to daily life. However, when tested in laboratory settings, sensitivity to 3D motion signals is found to be poor, leading to the view that heuristics and prior assumptions are critical for 3D motion perception. Here we explore an alternative: sensitivity to 3D motion signals is context-dependent and must be learned based on explicit visual feedback in novel environments. The need for action-contingent visual feedback is well-established in the developmental literature. For example, young kittens that are passively moved through an environment, but unable to move through it themselves, fail to develop accurate depth perception. We find that these principles also obtain in adult human perception. Observers that do not experience visual consequences of their actions fail to develop accurate 3D motion perception in a virtual reality environment, even after prolonged exposure. By contrast, observers that experience the consequences of their actions improve performance based on available sensory cues to 3D motion. Specifically, we find that observers learn to exploit the small motion parallax cues provided by head jitter. Our findings advance understanding of human 3D motion processing and form a foundation for future study of perception in virtual and natural 3D environments.
Affiliation(s)
- Jacqueline M Fulvio: Department of Psychology, McPherson Eye Research Institute, University of Wisconsin-Madison, Madison, USA
- Bas Rokers: Department of Psychology, McPherson Eye Research Institute, University of Wisconsin-Madison, Madison, USA
22.
Abstract
Where textures are defined by repetitive small spatial structures, exploration covering a greater extent will lead to signal repetition. We investigated how sensory estimates derived from these signals are integrated. In Experiment 1, participants stroked with the index finger one to eight times across two virtual gratings. Half of the participants discriminated according to ridge amplitude, the other half according to ridge spatial period. In both tasks, just noticeable differences (JNDs) decreased with an increasing number of strokes. Those gains from additional exploration were more than three times smaller than predicted for optimal observers who have access to equally reliable, and therefore equally weighted, estimates for the entire exploration. We assume that the sequential nature of the exploration leads to memory decay of sensory estimates. Thus, participants compare an overall estimate of the first stimulus, which is affected by memory decay, to stroke-specific estimates during the exploration of the second stimulus. This was tested in Experiments 2 and 3. The spatial period of one stroke across either the first or second of two sequentially presented gratings was slightly discrepant from periods in all other strokes. This allowed calculating weights of stroke-specific estimates in the overall percept. As predicted, weights were approximately equal for all strokes in the first stimulus, while weights decreased during the exploration of the second stimulus. A quantitative Kalman filter model of our assumptions was consistent with the data. Hence, our results support an optimal integration model for sequential information given that memory decay affects comparison processes.
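The Kalman-filter account described in this abstract can be illustrated with a one-dimensional filter in which each stroke delivers one noisy estimate and the stored estimate's uncertainty is inflated between strokes (memory decay). The function and noise values below are an illustrative sketch, not the authors' model parameters:

```python
def kalman_strokes(measurements, meas_var, decay_var):
    """1-D Kalman filter over sequential stroke estimates of a constant
    texture property; decay_var models memory decay between strokes."""
    est, est_var = measurements[0], meas_var
    for z in measurements[1:]:
        est_var += decay_var                  # uncertainty grows between strokes
        k = est_var / (est_var + meas_var)    # Kalman gain
        est = est + k * (z - est)             # pull estimate toward new stroke
        est_var = (1 - k) * est_var
    return est, est_var

# With decay_var = 0 this reduces to equal weighting of all strokes;
# with decay_var > 0, later strokes receive higher weight, as the
# abstract reports for the second stimulus in a trial.
```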
23. Bankieris KR, Bejjanki VR, Aslin RN. Sensory cue-combination in the context of newly learned categories. Sci Rep 2017; 7:10890. [PMID: 28883455] [PMCID: PMC5589839] [DOI: 10.1038/s41598-017-11341-7]
Abstract
A large body of prior research has evaluated how humans combine multiple sources of information pertaining to stimuli drawn from continuous dimensions, such as distance or size. These prior studies have repeatedly demonstrated that in these circumstances humans integrate cues in a near-optimal fashion, weighting cues according to their reliability. However, most of our interactions with sensory information are in the context of categories such as objects and phonemes, thereby requiring a solution to the cue combination problem by mapping sensory estimates from continuous dimensions onto task-relevant categories. Previous studies have examined cue combination with natural categories (e.g., phonemes), providing qualitative evidence that human observers utilize information about the distributional properties of task-relevant categories, in addition to sensory information, in such categorical cue combination tasks. In the present study, we created and taught human participants novel audiovisual categories, thus allowing us to quantitatively evaluate participants’ integration of sensory and categorical information. Comparing participant behavior to the predictions of a statistically optimal observer that ideally combines all available sources of information, we provide the first evidence, to our knowledge, that human observers combine sensory and category information in a statistically optimal manner.
Affiliation(s)
- Kaitlyn R Bankieris: Department of Brain and Cognitive Sciences, University of Rochester, Rochester, USA
- Richard N Aslin: Department of Brain and Cognitive Sciences, University of Rochester, Rochester, USA
24. Effect of Context on the Contribution of Individual Harmonics to Residue Pitch. J Assoc Res Otolaryngol 2017; 18:803-813. [PMID: 28755308] [PMCID: PMC5688044] [DOI: 10.1007/s10162-017-0636-6]
Abstract
There is evidence that the contribution of a given harmonic in a complex tone to residue pitch is influenced by the accuracy with which the frequency of that harmonic is encoded. The present study investigated whether listeners adjust the weights assigned to individual harmonics based on acquired knowledge of the reliability of the frequency estimates of those harmonics. In a two-interval forced-choice task, seven listeners indicated which of two 12-harmonic complex tones had the higher overall pitch. In context trials (60% of all trials), the fundamental frequency (F0) was 200 Hz in one interval and 200 + ΔF0 Hz in the other. In different (blocked) conditions, either the 3rd or the 4th harmonic (plus the 7th, 9th, and 12th harmonics) were replaced by narrowband noises that were identical in the two intervals. Feedback was provided. In randomly interspersed test trials (40% of all trials), the fundamental frequency was 200 + ΔF0/2 Hz in both intervals; in the second interval, either the third or the fourth harmonic was shifted slightly up or down in frequency with equal probability. There were no narrowband noises. Feedback was not provided. The results showed that substitution of a harmonic by noise in context trials reduced the contribution of that harmonic to pitch judgements in the test trials by a small but significant amount. This is consistent with the notion that listeners give smaller weight to a harmonic or frequency region when they have learned that this frequency region does not provide reliable information for a given task.
25. Igarashi Y, Omori K, Arai T, Aizawa Y. Illusory visual-depth reversal can modulate sensations of contact surface. Exp Brain Res 2017; 235:3013-3022. [PMID: 28721518] [DOI: 10.1007/s00221-017-5034-0]
Abstract
To perceive the external world stably, humans must integrate and manage continuous streams of information from various sensory modalities, in addition to drawing on past experiences and knowledge. In this study, we introduce a novel visuo-tactile illusion elicited by a visual-depth-reversal stimulus. The stimulus (a model of a building) was constructed so as to produce the same retinal image as an opaque cuboid, although it actually consisted of only three PVC boards forming a three-dimensional corner with the hollow inside facing the observer. Participants holding the model in their palm, therefore, observed, with both eyes or one eye, a building model that could be interpreted as either a concave or a convex cuboid. That is, tactile information from the contact surface contradicted the visual interpretation of a convex cuboid. Questionnaire and experimental results, however, showed that the building model was stably viewed as a standing cuboid, particularly under monocular observation. Participants also reported feeling a stable touch of the shrinking base of the apparently standing building model, thus ignoring the veridical contact surface. Given that the visual-tactile information was unchanged with or without the illusion and that the experimental task was tactile estimation, it is remarkable that participants failed to perceive actual touch based on the object's appearance. Results indicate the complexity and flexibility of visual-tactile integration processes. We also discuss the possibility that object knowledge influences visual-tactile integration.
Affiliation(s)
- Yuka Igarashi: Department of Human Science, Faculty of Human Sciences, Kanagawa University, 3-27-1, Rokkakubashi, Kanagawa, Yokohama, Kanagawa, 221-8686, Japan
- Keiko Omori: College of Humanities and Sciences, Nihon University, Tokyo, Japan
- Tetsuya Arai: Department of Human Science, Faculty of Human Sciences, Kanagawa University, 3-27-1, Rokkakubashi, Kanagawa, Yokohama, Kanagawa, 221-8686, Japan; Faculty of Human Sciences, Bunkyo University, Koshigaya, Japan
- Yasunori Aizawa: College of Humanities and Sciences, Nihon University, Tokyo, Japan
26. Regier T, Xu Y. The Sapir-Whorf hypothesis and inference under uncertainty. Wiley Interdiscip Rev Cogn Sci 2017; 8. [DOI: 10.1002/wcs.1440]
Affiliation(s)
- Terry Regier: Department of Linguistics, Cognitive Science Program, University of California, Berkeley, CA, USA
- Yang Xu: Department of Linguistics, Cognitive Science Program, University of California, Berkeley, CA, USA
27. Xu Y, Regier T, Newcombe NS. An adaptive cue combination model of human spatial reorientation. Cognition 2017; 163:56-66. [PMID: 28285237] [DOI: 10.1016/j.cognition.2017.02.016]
Abstract
Previous research has proposed an adaptive cue combination view of the development of human spatial reorientation (Newcombe & Huttenlocher, 2006), whereby information from multiple sources is combined in a weighted fashion in localizing a target, as opposed to being modular and encapsulated (Hermer & Spelke, 1996). However, no prior work has formalized this proposal and tested it against existing empirical data. We propose a computational model of human spatial reorientation that is motivated by probabilistic approaches to optimal perceptual cue integration (e.g. Ernst & Banks, 2002) and to spatial location coding (Huttenlocher, Hedges, & Duncan, 1991). We show that this model accounts for data from a variety of human reorientation experiments, providing support for the adaptive combination view of reorientation.
Affiliation(s)
- Yang Xu: Department of Linguistics, Cognitive Science Program, University of California, Berkeley, CA 94720-2650, USA
- Terry Regier: Department of Linguistics, Cognitive Science Program, University of California, Berkeley, CA 94720-2650, USA
- Nora S Newcombe: Department of Psychology, 318 Weiss Hall, Temple University, Philadelphia, PA 19122, USA
28. Yurovsky D, Frank MC. Beyond naïve cue combination: salience and social cues in early word learning. Dev Sci 2017; 20. [PMID: 26575408] [PMCID: PMC4870162] [DOI: 10.1111/desc.12349]
Abstract
Children learn their earliest words through social interaction, but it is unknown how much they rely on social information. Some theories argue that word learning is fundamentally social from its outset, with even the youngest infants understanding intentions and using them to infer a social partner's target of reference. In contrast, other theories argue that early word learning is largely a perceptual process in which young children map words onto salient objects. One way of unifying these accounts is to model word learning as weighted cue combination, in which children attend to many potential cues to reference, but only gradually learn the correct weight to assign each cue. We tested four predictions of this kind of naïve cue combination account, using an eye-tracking paradigm that combines social word teaching and two-alternative forced-choice testing. None of the predictions were supported. We thus propose an alternative unifying account: children are sensitive to social information early, but their ability to gather and deploy this information is constrained by domain-general cognitive processes. Developmental changes in children's use of social cues emerge not from learning the predictive power of social cues, but from the gradual development of attention, memory, and speed of information processing.
29. van Ee R, Van de Cruys S, Schlangen LJ, Vlaskamp BN. Circadian-Time Sickness: Time-of-Day Cue-Conflicts Directly Affect Health. Trends Neurosci 2016; 39:738-749. [DOI: 10.1016/j.tins.2016.09.004]
30.
Abstract
Whispered vowels, produced with no vocal fold vibration, lack the periodic temporal fine structure which in voiced vowels underlies the perceptual attribute of pitch (a salient auditory cue to speaker sex). Voiced vowels possess no temporal fine structure at very short durations (below two glottal cycles). The prediction was that speaker-sex discrimination performance for whispered and voiced vowels would be similar for very short durations but, as stimulus duration increases, voiced vowel performance would improve relative to whispered vowel performance as pitch information becomes available. This pattern of results was shown for women's but not for men's voices. A whispered vowel needs to have a duration three times longer than a voiced vowel before listeners can reliably tell whether it's spoken by a man or woman (∼30 ms vs. ∼10 ms). Listeners were half as sensitive to information about speaker-sex when it is carried by whispered compared with voiced vowels.
31. The Sapir-Whorf Hypothesis and Probabilistic Inference: Evidence from the Domain of Color. PLoS One 2016; 11:e0158725. [PMID: 27434643] [PMCID: PMC4951127] [DOI: 10.1371/journal.pone.0158725]
Abstract
The Sapir-Whorf hypothesis holds that our thoughts are shaped by our native language, and that speakers of different languages therefore think differently. This hypothesis is controversial in part because it appears to deny the possibility of a universal groundwork for human cognition, and in part because some findings taken to support it have not reliably replicated. We argue that considering this hypothesis through the lens of probabilistic inference has the potential to resolve both issues, at least with respect to certain prominent findings in the domain of color cognition. We explore a probabilistic model that is grounded in a presumed universal perceptual color space and in language-specific categories over that space. The model predicts that categories will most clearly affect color memory when perceptual information is uncertain. In line with earlier studies, we show that this model accounts for language-consistent biases in color reconstruction from memory in English speakers, modulated by uncertainty. We also show, to our knowledge for the first time, that such a model accounts for influential existing data on cross-language differences in color discrimination from memory, both within and across categories. We suggest that these ideas may help to clarify the debate over the Sapir-Whorf hypothesis.
32. Montagne C, Zhou Y. Visual capture of a stereo sound: Interactions between cue reliability, sound localization variability, and cross-modal bias. J Acoust Soc Am 2016; 140:471. [PMID: 27475171] [DOI: 10.1121/1.4955314]
Abstract
Multisensory interactions involve coordination and sometimes competition between multiple senses. Vision usually dominates audition in spatial judgments when light and sound stimuli are presented from two different physical locations. This study investigated the influence of vision on the perceived location of a phantom sound source placed in a stereo sound field using a pair of loudspeakers emitting identical signals that were delayed or attenuated relative to each other. Results show that although a similar horizontal range (±45°) was reported for timing-modulated and level-modulated signals, listeners' localization performance showed greater variability for the timing signals. When visual stimuli were presented simultaneously with the auditory stimuli, listeners showed stronger visual bias for timing-modulated signals than level-modulated and single-speaker control signals. Trial-to-trial errors remained relatively stable over time, suggesting that sound localization uncertainty has an immediate and long-lasting effect on the across-modal bias. Binaural signal analyses further reveal that interaural differences of time and intensity-the two primary cues for sound localization in the azimuthal plane-are inherently more ambiguous for signals placed using timing. These results suggest that binaural ambiguity is intrinsically linked with localization variability and the strength of cross-modal bias in sound localization.
Affiliation(s)
- Christopher Montagne
- Laboratory of Auditory Computation & Neurophysiology, Department of Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe, Arizona 85287, USA
- Yi Zhou
- Laboratory of Auditory Computation & Neurophysiology, Department of Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe, Arizona 85287, USA
33. Albrecht T, Mattler U. Individually different weighting of multiple processes underlies effects of metacontrast masking. Conscious Cogn 2016; 42:162-180. [PMID: 27010825] [DOI: 10.1016/j.concog.2016.03.006]
Abstract
Metacontrast masking occurs when a mask follows a target stimulus in close spatial proximity. Target visibility varies with stimulus onset asynchrony (SOA) between target and mask in individually different ways leading to different masking functions with corresponding phenomenological reports. We used individual differences to determine the processes that underlie metacontrast masking. We assessed individual masking functions in a masked target discrimination task using different masking conditions and applied factor-analytical techniques on measures of sensitivity. Results yielded two latent variables that (1) contribute to performance with short and long SOA, respectively, (2) relate to specific stimulus features, and (3) differentially correlate with specific subjective percepts. We propose that each latent variable reflects a specific process. Two additional processes may contribute to performance with short and long SOAs, respectively. Discrimination performance in metacontrast masking results from individually different weightings of two to four processes, each of which contributes to specific subjective percepts.
Affiliation(s)
- Thorsten Albrecht
- Georg-Elias-Müller-Institute of Psychology, Georg-August University Göttingen, Germany.
- Uwe Mattler
- Georg-Elias-Müller-Institute of Psychology, Georg-August University Göttingen, Germany
34. Martin AE. Language Processing as Cue Integration: Grounding the Psychology of Language in Perception and Neurophysiology. Front Psychol 2016; 7:120. [PMID: 26909051] [PMCID: PMC4754405] [DOI: 10.3389/fpsyg.2016.00120]
Abstract
I argue that cue integration, a psychophysiological mechanism from vision and multisensory perception, offers a computational linking hypothesis between psycholinguistic theory and neurobiological models of language. I propose that this mechanism, which incorporates probabilistic estimates of a cue's reliability, might function in language processing from the perception of a phoneme to the comprehension of a phrase structure. I briefly consider the implications of the cue integration hypothesis for an integrated theory of language that includes acquisition, production, dialogue and bilingualism, while grounding the hypothesis in canonical neural computation.
Affiliation(s)
- Andrea E. Martin
- Department of Psychology, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, UK
35. Kuperberg GR. Separate streams or probabilistic inference? What the N400 can tell us about the comprehension of events. Lang Cogn Neurosci 2016; 31:602-616. [PMID: 27570786] [PMCID: PMC4996121] [DOI: 10.1080/23273798.2015.1130233]
Abstract
Since the early 2000s, several ERP studies have challenged the assumption that we always use syntactic contextual information to influence semantic processing of incoming words, as reflected by the N400 component. One approach for explaining these findings is to posit distinct semantic and syntactic processing mechanisms, each with distinct time courses. While this approach can explain specific datasets, it cannot account for the wider body of findings. I propose an alternative explanation: a dynamic generative framework in which our goal is to infer the underlying event that best explains the set of inputs encountered at any given time. Within this framework, combinations of semantic and syntactic cues with varying reliabilities are used as evidence to weight probabilistic hypotheses about this event. I further argue that the computational principles of this framework can be extended to understand how we infer situation models during discourse comprehension, and intended messages during spoken communication.
Affiliation(s)
- Gina R Kuperberg
- Department of Psychology and Center for Cognitive Science, Tufts University
36.

37. Environmental stability modulates the role of path integration in human navigation. Cognition 2015; 142:96-109. [DOI: 10.1016/j.cognition.2015.05.008]
38. Zhao M, Warren WH. How You Get There From Here. Psychol Sci 2015; 26:915-24. [DOI: 10.1177/0956797615574952]
Abstract
How do people combine their sense of direction with their use of visual landmarks during navigation? Cue-integration theory predicts that such cues will be optimally integrated to reduce variability, whereas cue-competition theory predicts that one cue will dominate the response direction. We tested these theories by measuring both accuracy and variability in a homing task while manipulating information about path integration and visual landmarks. We found that the two cues were near-optimally integrated to reduce variability, even when landmarks were shifted up to 90°. Yet the homing direction was dominated by a single cue, which switched from landmarks to path integration when landmark shifts were greater than 90°. These findings suggest that cue integration and cue competition govern different aspects of the homing response: Cues are integrated to reduce response variability but compete to determine the response direction. The results are remarkably similar to data on animal navigation, which implies that visual landmarks reset the orientation, but not the precision, of the path-integration system.
Affiliation(s)
- Mintao Zhao
- Department of Cognitive, Linguistic, & Psychological Sciences, Brown University
- William H. Warren
- Department of Cognitive, Linguistic, & Psychological Sciences, Brown University
39. Kleinschmidt DF, Jaeger TF. Robust speech perception: recognize the familiar, generalize to the similar, and adapt to the novel. Psychol Rev 2015; 122:148-203. [PMID: 25844873] [PMCID: PMC4744792] [DOI: 10.1037/a0038695]
Abstract
Successful speech perception requires that listeners map the acoustic signal to linguistic categories. These mappings are not only probabilistic, but change depending on the situation. For example, one talker's /p/ might be physically indistinguishable from another talker's /b/ (cf. lack of invariance). We characterize the computational problem posed by such a subjectively nonstationary world and propose that the speech perception system overcomes this challenge by (a) recognizing previously encountered situations, (b) generalizing to other situations based on previous similar experience, and (c) adapting to novel situations. We formalize this proposal in the ideal adapter framework: (a) to (c) can be understood as inference under uncertainty about the appropriate generative model for the current talker, thereby facilitating robust speech perception despite the lack of invariance. We focus on 2 critical aspects of the ideal adapter. First, in situations that clearly deviate from previous experience, listeners need to adapt. We develop a distributional (belief-updating) learning model of incremental adaptation. The model provides a good fit against known and novel phonetic adaptation data, including perceptual recalibration and selective adaptation. Second, robust speech recognition requires that listeners learn to represent the structured component of cross-situation variability in the speech signal. We discuss how these 2 aspects of the ideal adapter provide a unifying explanation for adaptation, talker-specificity, and generalization across talkers and groups of talkers (e.g., accents and dialects). The ideal adapter provides a guiding framework for future investigations into speech perception and adaptation, and more broadly language comprehension.
Affiliation(s)
- T Florian Jaeger
- Departments of Brain and Cognitive Sciences, Computer Science, and Linguistics, University of Rochester
40. Cristino F, Davitt L, Hayward WG, Leek EC. Stereo disparity facilitates view generalization during shape recognition for solid multipart objects. Q J Exp Psychol (Hove) 2015; 68:2419-36. [PMID: 25679983] [DOI: 10.1080/17470218.2015.1017512]
Abstract
Current theories of object recognition in human vision make different predictions about whether the recognition of complex, multipart objects should be influenced by shape information about surface depth orientation and curvature derived from stereo disparity. We examined this issue in five experiments using a recognition memory paradigm in which observers (N = 134) memorized and then discriminated sets of 3D novel objects at trained and untrained viewpoints under either mono or stereo viewing conditions. In order to explore the conditions under which stereo-defined shape information contributes to object recognition we systematically varied the difficulty of view generalization by increasing the angular disparity between trained and untrained views. In one series of experiments, objects were presented from either previously trained views or untrained views rotated (15°, 30°, or 60°) along the same plane. In separate experiments we examined whether view generalization effects interacted with the vertical or horizontal plane of object rotation across 40° viewpoint changes. The results showed robust viewpoint-dependent performance costs: Observers were more efficient in recognizing learned objects from trained than from untrained views, and recognition was worse for extrapolated than for interpolated untrained views. We also found that performance was enhanced by stereo viewing but only at larger angular disparities between trained and untrained views. These findings show that object recognition is not based solely on 2D image information but that it can be facilitated by shape information derived from stereo disparity.
Affiliation(s)
- Lina Davitt
- School of Psychology, Bangor University, Bangor, UK
- William G Hayward
- School of Psychology, University of Auckland, Auckland, New Zealand
- E Charles Leek
- Wolfson Centre for Clinical and Cognitive Neuroscience, School of Psychology, Bangor University, Bangor, UK
41. Cazettes F, Fischer BJ, Pena JL. Spatial cue reliability drives frequency tuning in the barn owl's midbrain. eLife 2014; 3:e04854. [PMID: 25531067] [PMCID: PMC4291741] [DOI: 10.7554/eLife.04854]
Abstract
The robust representation of the environment from unreliable sensory cues is vital for the efficient function of the brain. However, how neural processing captures the most reliable cues is unknown. The interaural time difference (ITD) is the primary cue to localize sound in horizontal space. ITD is encoded in the firing rate of neurons that detect interaural phase difference (IPD). Due to the filtering effect of the head, IPD for a given location varies depending on the environmental context. We found that, in barn owls, at each location there is a frequency range where the head filtering yields the most reliable IPDs across contexts. Remarkably, the frequency tuning of space-specific neurons in the owl's midbrain varies with their preferred sound location, matching the range that carries the most reliable IPD. Thus, frequency tuning in the owl's space-specific neurons reflects a higher-order feature of the code that captures cue reliability.
Affiliation(s)
- Fanny Cazettes
- Department of Neuroscience, Albert Einstein College of Medicine, New York, United States
- Brian J Fischer
- Department of Mathematics, Seattle University, Seattle, United States
- Jose L Pena
- Department of Neuroscience, Albert Einstein College of Medicine, New York, United States
42. Myers TD. Achieving external validity in home advantage research: generalizing crowd noise effects. Front Psychol 2014; 5:532. [PMID: 24917839] [PMCID: PMC4041073] [DOI: 10.3389/fpsyg.2014.00532]
Abstract
Different factors have been postulated to explain the home advantage phenomenon in sport. One plausible explanation is the influence of a partisan home crowd on sports officials' decisions. Different types of studies have tested the crowd influence hypothesis, including purposefully designed experiments. However, while experimental studies investigating crowd influences have high levels of internal validity, they suffer from a lack of external validity: decision-making in a laboratory setting bears little resemblance to decision-making in live sports settings. This focused review initially considers threats to external validity in applied and theoretical experimental research, and discusses how such threats can be addressed using representative design, focusing on a recently published study that arguably provides the first experimental evidence of the impact of live crowd noise on officials in sport. The findings of this controlled experiment, conducted in a real tournament setting, offer a level of confirmation of the findings of laboratory studies in the area. Finally, directions for future research and the future conduct of crowd noise studies are discussed.
Affiliation(s)
- Tony D Myers
- Physical Education and Sports Studies, Newman University Birmingham, UK
43. Sejnowski TJ, Poizner H, Lynch G, Gepshtein S, Greenspan RJ. Prospective Optimization. Proc IEEE 2014; 102. [PMID: 25328167] [PMCID: PMC4201124] [DOI: 10.1109/JPROC.2014.2314297]
Abstract
Human performance approaches that of an ideal observer and optimal actor in some perceptual and motor tasks. These optimal abilities depend on the capacity of the cerebral cortex to store an immense amount of information and to flexibly make rapid decisions. However, behavior only approaches these limits after a long period of learning while the cerebral cortex interacts with the basal ganglia, an ancient part of the vertebrate brain that is responsible for learning sequences of actions directed toward achieving goals. Progress has been made in understanding the algorithms used by the brain during reinforcement learning, which is an online approximation of dynamic programming. Humans also make plans that depend on past experience by simulating different scenarios, which is called prospective optimization. The same brain structures in the cortex and basal ganglia that are active online during optimal behavior are also active offline during prospective optimization. The emergence of general principles and algorithms for goal-directed behavior has consequences for the development of autonomous devices in engineering applications.
Affiliation(s)
- Terrence J Sejnowski
- Howard Hughes Medical Institute, Salk Institute for Biological Sciences, La Jolla, CA 92037 USA, and the Division of Biological Studies, University of California at San Diego, La Jolla, CA 92093 USA
- Howard Poizner
- Institute for Neural Computation, University of California at San Diego, La Jolla, CA 92093-0523 USA
- Gary Lynch
- Department of Psychiatry and Human Behavior, University of California at Irvine, Irvine, CA 92697-4292 USA
- Sergei Gepshtein
- Systems Neurobiology Laboratories, Salk Institute, University of California at San Diego, La Jolla, CA 92037 USA
- Ralph J Greenspan
- Kavli Institute for Brain and Mind, University of California at San Diego, La Jolla, CA 92093-0126 USA
Collapse
|
44
|
Lyons IM, Huttenlocher J, Ratliff KR. The Influence of Cue Reliability and Cue Representation on Spatial Reorientation in Young Children. JOURNAL OF COGNITION AND DEVELOPMENT 2014. [DOI: 10.1080/15248372.2012.736110] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/29/2023]
45. Cellini C, Kaim L, Drewing K. Visual and haptic integration in the estimation of softness of deformable objects. Iperception 2013; 4:516-31. [PMID: 25165510] [PMCID: PMC4129386] [DOI: 10.1068/i0598]
Abstract
Softness perception intrinsically relies on haptic information. However, through everyday experiences we learn correspondences between felt softness and the visual effects of exploratory movements that are executed to feel softness. Here, we studied how visual and haptic information is integrated to assess the softness of deformable objects. Participants discriminated between the softness of two softer or two harder objects using only-visual, only-haptic or both visual and haptic information. We assessed the reliabilities of the softness judgments using the method of constant stimuli. In visuo-haptic trials, discrepancies between the two senses' information allowed us to measure the contribution of the individual senses to the judgments. Visual information (finger movement and object deformation) was simulated using computer graphics; input in visual trials was taken from previous visuo-haptic trials. Participants were able to infer softness from vision alone, and vision considerably contributed to bisensory judgments (∼35%). The visual contribution was higher than predicted from models of optimal integration (senses are weighted according to their reliabilities). Bisensory judgments were less reliable than predicted from optimal integration. We conclude that the visuo-haptic integration of softness information is biased toward vision, rather than being optimal, and might even be guided by a fixed weighting scheme.
Affiliation(s)
- Cristiano Cellini
- Department of General Psychology, Justus-Liebig-University of Giessen, Otto-Behaghel-Strasse 10F, 35394 Giessen, Germany
- Lukas Kaim
- Department of General Psychology, Justus-Liebig-University of Giessen, Otto-Behaghel-Strasse 10F, 35394 Giessen, Germany
- Knut Drewing
- Department of General Psychology, Justus-Liebig-University of Giessen, Otto-Behaghel-Strasse 10F, 35394 Giessen, Germany
46. Diard J, Bessière P, Berthoz A. Spatial Memory of Paths Using Circular Probability Distributions: Theoretical Properties, Navigation Strategies and Orientation Cue Combination. Spat Cogn Comput 2013. [DOI: 10.1080/13875868.2012.756490]
47. Monaghan P, White L, Merkx MM. Disambiguating durational cues for speech segmentation. J Acoust Soc Am 2013; 134:EL45-EL51. [PMID: 23862905] [DOI: 10.1121/1.4809775]
Abstract
Vowels are lengthened in lexically stressed syllables and also in word-final syllables. Both stress and final-syllable lengthening can assist in word segmentation from continuous speech, but in languages like English, with a preponderance of stress-initial words, lengthening cues may conflict for indicating word boundaries. An analysis of a large corpus of English speech demonstrated that speakers provide distributional information sufficient to potentially allow listeners to determine whether vowel lengthening is associated with lexical stress or word finality without relying on a congruence of multiple suprasegmental cues to make the distinction.
Affiliation(s)
- Padraic Monaghan
- Centre for Research in Human Development and Learning, Department of Psychology, Lancaster University, Lancaster LA1 4YF, United Kingdom.
48. Rognini G, Sengül A, Aspell JE, Salomon R, Bleuler H, Blanke O. Visuo-tactile integration and body ownership during self-generated action. Eur J Neurosci 2013; 37:1120-9. [PMID: 23351116] [DOI: 10.1111/ejn.12128]
Abstract
Although there is increasing knowledge about how visual and tactile cues from the hands are integrated, little is known about how self-generated hand movements affect such multisensory integration. Visuo-tactile integration often occurs under highly dynamic conditions requiring sensorimotor updating. Here, we quantified visuo-tactile integration by measuring cross-modal congruency effects (CCEs) in different bimanual hand movement conditions with the use of a robotic platform. We found that classical CCEs also occurred during bimanual self-generated hand movements, and that such movements lowered the magnitude of visuo-tactile CCEs as compared to static conditions. Visuo-tactile integration, body ownership and the sense of agency were decreased by adding a temporal visuo-motor delay between hand movements and visual feedback. These data show that visual stimuli interfere less with the perception of tactile stimuli during movement than during static conditions, especially when decoupled from predictive motor information. The results suggest that current models of visuo-tactile integration need to be extended to account for multisensory integration in dynamic conditions.
Affiliation(s)
- G Rognini
- Center for Neuroprosthetics, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
49. Cue-integration and context effects in speech: evidence against speaking-rate normalization. Atten Percept Psychophys 2012; 74:1284-301. [PMID: 22532385] [DOI: 10.3758/s13414-012-0306-z]
Abstract
Listeners are able to accurately recognize speech despite variation in acoustic cues across contexts, such as different speaking rates. Previous work has suggested that listeners use rate information (indicated by vowel length; VL) to modify their use of context-dependent acoustic cues, like voice-onset time (VOT), a primary cue to voicing. We present several experiments and simulations that offer an alternative explanation: that listeners treat VL as a phonetic cue rather than as an indicator of speaking rate, and that they rely on general cue-integration principles to combine information from VOT and VL. We demonstrate that listeners use the two cues independently, that VL is used in both naturally produced and synthetic speech, and that the effects of stimulus naturalness can be explained by a cue-integration model. Together, these results suggest that listeners do not interpret VOT relative to rate information provided by VL and that the effects of speaking rate can be explained by more general cue-integration principles.
50. Drewing K. After experience with the task humans actively optimize shape discrimination in touch by utilizing effects of exploratory movement direction. Acta Psychol (Amst) 2012; 141:295-303. [PMID: 23079190] [DOI: 10.1016/j.actpsy.2012.09.011]
Abstract
The active control of exploratory movements is an integral part of active touch. We investigated and manipulated the relationship between the haptic discrimination performance for small bumps and the direction of exploratory movements relative to the body. Shape discrimination performance varied with the direction of stimulus exploration. Experimental manipulations successfully changed the normative relation between exploratory direction and discrimination performance. If participants were rewarded for "good perceptual performance" and had the choice, they displayed clear strategic preferences for exploratory directions that yield optimal performance, but only after having extensive experience with the changed perceptual conditions. Overall, the findings suggest that participants can actively adapt their exploratory movements in order to optimize haptic discrimination performance.
Affiliation(s)
- Knut Drewing
- Institute for Psychology, Giessen-University, Germany.