1
Scheller M, Nardini M. Correctly establishing evidence for cue combination via gains in sensory precision: Why the choice of comparator matters. Behav Res Methods 2024;56:2842-2858. PMID: 37730934. PMCID: PMC11133123. DOI: 10.3758/s13428-023-02227-w. Accepted 08/27/2023.
Abstract
Studying how sensory signals from different sources (sensory cues) are integrated within or across multiple senses allows us to better understand the perceptual computations that lie at the foundation of adaptive behaviour. As such, determining the presence of precision gains - the classic hallmark of cue combination - is important for characterising perceptual systems, their development and functioning in clinical conditions. However, empirically measuring precision gains to distinguish cue combination from alternative perceptual strategies requires careful methodological considerations. Here, we note that the majority of existing studies that tested for cue combination either omitted this important contrast, or used an analysis approach that, unknowingly, strongly inflated false positives. Using simulations, we demonstrate that this approach enhances the chances of finding significant cue combination effects in up to 100% of cases, even when cues are not combined. We establish how this error arises when the wrong cue comparator is chosen and recommend an alternative analysis that is easy to implement but has only been adopted by relatively few studies. By comparing combined-cue perceptual precision with the best single-cue precision, determined for each observer individually rather than at the group level, researchers can enhance the credibility of their reported effects. We also note that testing for deviations from optimal predictions alone is not sufficient to ascertain whether cues are combined. Taken together, to correctly test for perceptual precision gains, we advocate for a careful comparator selection and task design to ensure that cue combination is tested with maximum power, while reducing the inflation of false positives.
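The comparator problem this abstract describes can be made concrete with a short simulation. The sketch below is illustrative only, not the authors' actual code, and all parameter values are arbitrary assumptions: it simulates observers who never combine cues and simply rely on whichever of their two cues is personally more precise. Comparing combined-condition precision against one cue fixed at the group level then labels many observers as "improved", while comparing against each observer's own best single cue does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_trials = 20, 200

# Each simulated observer has their own single-cue noise levels (SDs);
# which cue is the more precise one varies across observers.
sd_a = rng.uniform(1.0, 3.0, n_obs)
sd_b = rng.uniform(1.0, 3.0, n_obs)

# Ground truth here is NO combination: every observer simply relies on
# whichever of their own two cues is more precise.
sd_best = np.minimum(sd_a, sd_b)

def empirical_sd(true_sds):
    """Estimate each observer's SD from a finite sample of response errors."""
    return np.array([rng.normal(0.0, s, n_trials).std(ddof=1) for s in true_sds])

est_a = empirical_sd(sd_a)            # single-cue condition, cue A
est_combined = empirical_sd(sd_best)  # "combined" condition = personal best cue
est_best = empirical_sd(sd_best)      # individually determined best single cue

# Group-level comparator: combined precision vs one fixed cue (cue A).
spurious_gain = float(np.mean(est_combined < est_a))
# Individual comparator: combined precision vs each observer's own best cue.
honest_gain = float(np.mean(est_combined < est_best))

print(f"apparent gain vs group-level cue A:   {spurious_gain:.2f} of observers")
print(f"apparent gain vs individual best cue: {honest_gain:.2f} of observers")
```

Under true combination, the reliability-weighted prediction `sqrt(sd_a**2 * sd_b**2 / (sd_a**2 + sd_b**2))` lies below the best single cue, so beating the individually determined best single cue is the diagnostic contrast the paper recommends.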
Affiliation(s)
- Meike Scheller
- Department of Psychology, Durham University, Durham, UK
- Marko Nardini
- Department of Psychology, Durham University, Durham, UK
2
Negen J, Slater H, Nardini M. Sensory augmentation for a rapid motor task in a multisensory environment. Restor Neurol Neurosci 2024;42:113-120. PMID: 37302045. DOI: 10.3233/rnn-221279.
Abstract
Background: Sensory substitution and augmentation systems (SSASy) seek to either replace or enhance existing sensory skills by providing a new route to access information about the world. Tests of such systems have largely been limited to untimed, unisensory tasks. Objective: To test the use of a SSASy for rapid, ballistic motor actions in a multisensory environment. Methods: Participants played a stripped-down version of air hockey in virtual reality with motion controls (Oculus Touch). They were trained to use a simple SSASy (a novel audio cue) for the puck's location. They were then tested on their ability to strike an oncoming puck with the SSASy, degraded vision, or both. Results: Participants coordinated vision and the SSASy to strike the target with their hand more consistently than with the best single cue alone, t(13) = 9.16, p < .001, Cohen's d = 2.448. Conclusions: People can adapt flexibly to using a SSASy in tasks that require tightly timed, precise, and rapid body movements. SSASys can augment and coordinate with existing sensorimotor skills rather than being limited to replacement use cases; in particular, there is potential scope for treating moderate vision loss. These findings point to the potential for augmenting human abilities, not only for static perceptual judgments, but also in rapid and demanding perceptual-motor tasks.
Affiliation(s)
- James Negen
- School of Psychology, Liverpool John Moores University, Liverpool, UK
- Marko Nardini
- Psychology Department, Durham University, Durham, UK
3
Aston S, Pattie C, Graham R, Slater H, Beierholm U, Nardini M. Newly learned shape-color associations show signatures of reliability-weighted averaging without forced fusion or a memory color effect. J Vis 2022;22:8. PMID: 36580296. PMCID: PMC9804025. DOI: 10.1167/jov.22.13.8.
Abstract
Reliability-weighted averaging of multiple perceptual estimates (or cues) can improve precision. Research suggests that newly learned statistical associations can be rapidly integrated in this way for efficient decision-making. Yet, it remains unclear if the integration of newly learned statistics into decision-making can directly influence perception, rather than taking place only at the decision stage. In two experiments, we implicitly taught observers novel associations between shape and color. Observers made color matches by adjusting the color of an oval to match a simultaneously presented reference. As the color of the oval changed across trials, so did its shape, according to a novel mapping of axis ratio to color. Observers showed signatures of reliability-weighted averaging: a precision improvement in both experiments, and reweighting of the newly learned shape cue with changes in uncertainty in Experiment 2. To ask whether this was accompanied by perceptual effects, Experiment 1 tested for forced fusion by measuring color discrimination thresholds with and without incongruent novel cues. Experiment 2 tested for a memory color effect, with observers adjusting the color of ovals with different axis ratios until they appeared gray. There was no evidence for forced fusion, and we found the opposite of a memory color effect. Overall, our results suggest that the ability to quickly learn novel cues and integrate them with familiar cues is not immediately (within the short duration of our experiments and in the domain of color and shape) accompanied by common perceptual effects.
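Reliability-weighted averaging has a standard closed form: each cue is weighted by its reliability (inverse variance), and the combined estimate has lower variance than either cue alone. A minimal sketch, with hypothetical values (the cue SDs below are not taken from the study):

```python
import numpy as np

def reliability_weighted_average(estimates, sds):
    """Combine independent cue estimates, weighting each by its
    reliability (inverse variance)."""
    estimates = np.asarray(estimates, dtype=float)
    reliabilities = 1.0 / np.asarray(sds, dtype=float) ** 2
    weights = reliabilities / reliabilities.sum()
    combined = float(np.sum(weights * estimates))
    combined_sd = float(np.sqrt(1.0 / reliabilities.sum()))
    return combined, combined_sd, weights

# Familiar color cue (SD = 2.0) and newly learned shape cue (SD = 4.0).
est, sd, w = reliability_weighted_average([10.0, 16.0], [2.0, 4.0])
print(est, sd, w)  # est = 11.2; combined SD ~= 1.79, below the better cue's 2.0
```

Reweighting follows directly from the same formula: doubling the shape cue's SD from 4.0 to 8.0 shrinks its weight from 0.2 to about 0.06, which is the kind of signature measured in Experiment 2.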
Affiliation(s)
- Stacey Aston
- Department of Psychology, Durham University, Durham, UK
- Cat Pattie
- Biosciences Institute, Newcastle University, Newcastle, UK
- Rachael Graham
- Department of Psychology, Durham University, Durham, UK
- Heather Slater
- Department of Psychology, Durham University, Durham, UK
- Marko Nardini
- Department of Psychology, Durham University, Durham, UK
4
Scarfe P. Experimentally disambiguating models of sensory cue integration. J Vis 2022;22:5. PMID: 35019955. PMCID: PMC8762719. DOI: 10.1167/jov.22.1.5.
Abstract
Sensory cue integration is one of the primary areas in which a normative mathematical framework has been used to define the “optimal” way in which to make decisions based upon ambiguous sensory information and compare these predictions to behavior. The conclusion from such studies is that sensory cues are integrated in a statistically optimal fashion. However, numerous alternative computational frameworks exist by which sensory cues could be integrated, many of which could be described as “optimal” based on different criteria. Existing studies rarely assess the evidence relative to different candidate models, resulting in an inability to conclude that sensory cues are integrated according to the experimenter's preferred framework. The aims of the present paper are to summarize and highlight the implicit assumptions rarely acknowledged in testing models of sensory cue integration, as well as to introduce an unbiased and principled method by which to determine, for a given experimental design, the probability with which a population of observers behaving in accordance with one model of sensory integration can be distinguished from the predictions of a set of alternative models.
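One way to read this proposal is as simulation-based model recovery: simulate observers behaving according to each candidate model, then ask how often the experimental design attributes the data to the correct model. The toy sketch below is my own illustration, not Scarfe's actual method; the two candidate models, trial counts, and noise levels are all assumptions. It distinguishes reliability-weighted integration from random cue switching by comparing an observer's empirical combined-cue SD with each model's predicted SD.

```python
import numpy as np

rng = np.random.default_rng(1)

def predicted_sd(model, sd1, sd2):
    """Combined-cue SD predicted by each candidate model."""
    if model == "integration":  # reliability-weighted fusion
        return np.sqrt(sd1**2 * sd2**2 / (sd1**2 + sd2**2))
    if model == "switching":    # pick one cue at random on each trial
        return np.sqrt(0.5 * sd1**2 + 0.5 * sd2**2)
    raise ValueError(model)

def p_correct_attribution(true_model, sd1=2.0, sd2=2.5,
                          n_trials=150, n_sims=500):
    """How often does this design attribute a simulated observer
    to the model that actually generated their data?"""
    preds = {m: predicted_sd(m, sd1, sd2) for m in ("integration", "switching")}
    hits = 0
    for _ in range(n_sims):
        # Empirical SD of one simulated observer's combined-cue errors.
        obs_sd = rng.normal(0.0, preds[true_model], n_trials).std(ddof=1)
        best = min(preds, key=lambda m: abs(obs_sd - preds[m]))
        hits += (best == true_model)
    return hits / n_sims

p_int = p_correct_attribution("integration")
p_switch = p_correct_attribution("switching")
print(p_int, p_switch)
```

With fewer trials per observer, or with cue noise levels that make the two predictions nearly equal, these probabilities fall toward chance, which is exactly the design-sensitivity question the paper argues should be computed before running an experiment.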
Affiliation(s)
- Peter Scarfe
- Vision and Haptics Laboratory, School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
5
Negen J, Bird LA, Nardini M. An adaptive cue selection model of allocentric spatial reorientation. J Exp Psychol Hum Percept Perform 2021;47:1409-1429. PMID: 34766823. PMCID: PMC8582329. DOI: 10.1037/xhp0000950.
Abstract
After becoming disoriented, an organism must use the local environment to reorient and recover vectors to important locations. A new theory, adaptive combination, suggests that the information from different spatial cues is combined with Bayesian efficiency during reorientation. To test this further, we modified the standard reorientation paradigm to be more amenable to Bayesian cue combination analyses while still requiring reorientation in an allocentric (i.e., world-based, not egocentric) frame. Twelve adults and 20 children aged 5 to 7 years were asked to recall locations in a virtual environment after a disorientation. Results were not consistent with adaptive combination. Instead, they are consistent with the use of the most useful (nearest) single landmark in isolation. We term this adaptive selection. Experiment 2 suggests that adults also use the adaptive selection method when they are not disoriented but are still required to use a local allocentric frame. This suggests that the process of recalling a location in the allocentric frame is typically guided by the single most useful landmark rather than a Bayesian combination of landmarks. These results illustrate that there can be important limits to Bayesian theories of cognition, particularly for complex tasks such as allocentric recall. Whether studying the development of children’s spatial cognition, creating artificial intelligence with human-like capacities, or designing civic spaces, we can benefit from a strong understanding of how humans process the space around them. Here we tested a prominent theory that brings together statistical theory and psychological theory (Bayesian models of perception and memory) but found that it could not satisfactorily explain our data.
Our findings suggest that when tracking the spatial relations between objects from different viewpoints, rather than efficiently combining all the available landmarks, people often fall back to the much simpler method of tracking the spatial relation to the nearest landmark.
Affiliation(s)
- James Negen
- School of Psychology, Liverpool John Moores University
6
Abstract
Our experience of the world seems to unfold seamlessly in a unitary 3D space. For this to be possible, the brain has to merge many disparate cognitive representations and sensory inputs. How does it do so? I discuss work on two key combination problems: coordinating multiple frames of reference (e.g. egocentric and allocentric), and coordinating multiple sensory signals (e.g. visual and proprioceptive). I focus on two populations whose spatial processing we can observe at a crucial stage of being configured and optimised: children, whose spatial abilities are still developing significantly, and naïve adults learning new spatial skills, such as sensing distance using auditory cues. The work uses a model-based approach to compare participants' behaviour with the predictions of alternative information processing models. This lets us see when and how, during development and with experience, the perceptual-cognitive computations underpinning our experiences in space change. I discuss progress on understanding the limits of effective spatial computation for perception and action, and how lessons from the developing spatial cognitive system can inform approaches to augmenting human abilities with new sensory signals provided by technology.
Affiliation(s)
- Marko Nardini
- Department of Psychology, Durham University, Science Site, Durham, DH1 3LE, UK
7
Netzer O, Heimler B, Shur A, Behor T, Amedi A. Backward spatial perception can be augmented through a novel visual-to-auditory sensory substitution algorithm. Sci Rep 2021;11:11944. PMID: 34099756. PMCID: PMC8184900. DOI: 10.1038/s41598-021-88595-9. Received 07/22/2020; accepted 02/08/2021.
Abstract
Can humans extend and augment their natural perceptions during adulthood? Here, we address this fascinating question by investigating the extent to which it is possible to successfully augment visual spatial perception to include the backward spatial field (a region where humans are naturally blind) via other sensory modalities (i.e., audition). We thus developed a sensory-substitution algorithm, the “Topo-Speech”, which conveys the identity of objects through language and their exact locations via vocal-sound manipulations, namely two key features of visual spatial perception. Using two different groups of blindfolded sighted participants, we tested the efficacy of this algorithm to convey the location of objects in the forward or backward spatial fields following ~10 min of training. Results showed that blindfolded sighted adults successfully used the Topo-Speech to locate objects on a 3 × 3 grid either positioned in front of them (forward condition) or behind their back (backward condition). Crucially, performance in the two conditions was entirely comparable. This suggests that novel spatial sensory information conveyed via our existing sensory systems can be successfully encoded to extend/augment human perception. The implications of these results are discussed in relation to spatial perception, sensory augmentation and sensory rehabilitation.
Affiliation(s)
- Ophir Netzer
- The Cognitive Science Program, The Hebrew University of Jerusalem, Jerusalem, Israel
- Benedetta Heimler
- The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Herzeliya, Israel
- Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
- Amir Shur
- The Cognitive Science Program, The Hebrew University of Jerusalem, Jerusalem, Israel
- Tomer Behor
- The Cognitive Science Program, The Hebrew University of Jerusalem, Jerusalem, Israel
- Amir Amedi
- The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Herzeliya, Israel
- Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
8
Abstract
Adaptive behavior in a complex, dynamic, and multisensory world poses some of the most fundamental computational challenges for the brain, notably inference, decision-making, learning, binding, and attention. We first discuss how the brain integrates sensory signals from the same source to support perceptual inference and decision-making by weighting them according to their momentary sensory uncertainties. We then show how observers solve the binding or causal inference problem-deciding whether signals come from common causes and should hence be integrated or else be treated independently. Next, we describe the multifarious interplay between multisensory processing and attention. We argue that attentional mechanisms are crucial to compute approximate solutions to the binding problem in naturalistic environments when complex time-varying signals arise from myriad causes. Finally, we review how the brain dynamically adapts multisensory processing to a changing world across multiple timescales.
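The binding, or causal inference, step described here has a standard Gaussian closed form in Bayesian causal inference models of multisensory perception: compare the likelihood of the two signals under a single shared source against their likelihood under two independent sources. The sketch below follows that generic formulation; all numeric values are illustrative assumptions, not parameters from the review.

```python
import numpy as np

def posterior_common_cause(x1, x2, sd1, sd2, sd_prior, p_common=0.5):
    """Posterior probability that two sensory signals x1, x2 arose from a
    common cause, under a Gaussian generative model with a zero-mean
    prior over source locations."""
    v1, v2, vp = sd1**2, sd2**2, sd_prior**2
    # Likelihood of (x1, x2) given one shared source (source integrated out).
    var_c = v1 * v2 + v1 * vp + v2 * vp
    like_common = np.exp(-0.5 * ((x1 - x2)**2 * vp + x1**2 * v2 + x2**2 * v1)
                         / var_c) / (2 * np.pi * np.sqrt(var_c))
    # Likelihood of (x1, x2) given two independent sources.
    like_indep = np.exp(-0.5 * (x1**2 / (v1 + vp) + x2**2 / (v2 + vp))) \
                 / (2 * np.pi * np.sqrt((v1 + vp) * (v2 + vp)))
    post = p_common * like_common
    return post / (post + (1 - p_common) * like_indep)

near = posterior_common_cause(1.0, 1.2, 1.0, 1.0, 10.0)  # signals agree
far = posterior_common_cause(1.0, 8.0, 1.0, 1.0, 10.0)   # signals conflict
print(near, far)
```

Nearby signals yield a high common-cause posterior (favouring integration), while widely discrepant signals yield a low one (favouring independent treatment), which is the decision rule the abstract summarises.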
Affiliation(s)
- Uta Noppeney
- Donders Institute for Brain, Cognition and Behavior, Radboud University, 6525 AJ Nijmegen, The Netherlands
9
Stewart EEM, Hübner C, Schütz AC. Stronger saccadic suppression of displacement and blanking effect in children. J Vis 2020;20:13. PMID: 33052408. PMCID: PMC7571331. DOI: 10.1167/jov.20.10.13. Received 05/01/2020; accepted 09/07/2020.
Abstract
Humans do not notice small displacements to objects that occur during saccades, an effect termed saccadic suppression of displacement (SSD), and this effect is reduced when a blank is introduced between the pre- and postsaccadic stimulus (Bridgeman, Hendry, & Stark, 1975; Deubel, Schneider, & Bridgeman, 1996). While these effects have been studied extensively in adults, it is unclear how these phenomena are characterized in children. A potentially related mechanism, saccadic suppression of contrast sensitivity, a prerequisite for achieving a stable percept, is stronger in children (Bruno, Brambati, Perani, & Morrone, 2006). However, the evidence for how children suppress or integrate transsaccadic stimulus displacements is mixed. While children can integrate basic visual feature information from an early age, they cannot integrate multisensory information (Gori, Viva, Sandini, & Burr, 2008; Nardini, Jones, Bedford, & Braddick, 2008), suggesting a failure in the ability to integrate more complex sensory information. We tested children 7 to 12 years old and adults 19 to 23 years old on their ability to perceive intrasaccadic stimulus displacements, with and without a postsaccadic blank. Results showed that children had stronger SSD than adults and a larger blanking effect. Children also had larger undershoots and more variability in their initial saccade endpoints, indicating greater intrinsic uncertainty, and they were faster in executing corrective saccades to account for these errors. Together, these results suggest that children may have a greater internal expectation or prediction of saccade error than adults; thus, the stronger SSD in children may be due to higher intrinsic uncertainty in target localization or saccade execution.
Affiliation(s)
- Emma E M Stewart
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, Marburg, Germany
- Carolin Hübner
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, Marburg, Germany
- Alexander C Schütz
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, Marburg, Germany
- Center for Mind, Brain and Behaviour, Philipps-Universität Marburg, Marburg, Germany
- https://www.uni-marburg.de/en/fb04/team-schuetz/team/alexander-schutz
10
Heimler B, Amedi A. Are critical periods reversible in the adult brain? Insights on cortical specializations based on sensory deprivation studies. Neurosci Biobehav Rev 2020;116:494-507. DOI: 10.1016/j.neubiorev.2020.06.034. Received 06/07/2019; revised 06/07/2020; accepted 06/25/2020.
11
Kvansakul J, Hamilton L, Ayton LN, McCarthy C, Petoe MA. Sensory augmentation to aid training with retinal prostheses. J Neural Eng 2020;17:045001. PMID: 32554868. DOI: 10.1088/1741-2552/ab9e1d.
Abstract
OBJECTIVE: Retinal prosthesis recipients require rehabilitative training to learn the non-intuitive nature of prosthetic 'phosphene vision'. This study investigated whether the addition of auditory cues, using The vOICe sensory substitution device (SSD), could improve functional performance with simulated phosphene vision.
APPROACH: Forty normally sighted subjects completed two visual tasks under three conditions. The phosphene condition converted the image to simulated phosphenes displayed on a virtual reality headset. The SSD condition provided auditory information via stereo headphones, translating the image into sound: horizontal information was encoded as stereo timing differences between the ears, vertical information as pitch, and pixel intensity as audio intensity. The third condition combined phosphenes and SSD. Tasks comprised light localisation from the Basic Assessment of Light and Motion (BaLM) and the Tumbling-E from the Freiburg Acuity and Contrast Test (FrACT). To examine learning effects, twenty of the forty subjects received SSD training prior to assessment.
MAIN RESULTS: Combining phosphenes with auditory SSD provided better light localisation accuracy than either phosphenes or SSD alone, suggesting a compound benefit of integrating modalities. Although response times for SSD-only were significantly longer than for all other conditions, combined-condition response times were as fast as phosphene-only, highlighting that audio-visual integration provided both response-time and accuracy benefits. Prior SSD training improved localisation accuracy and speed in the SSD-only (as expected) and combined conditions compared to untrained SSD-only. Integration of the two modalities did not improve spatial resolution task performance, with resolution limited to that of the higher-resolution modality (SSD).
SIGNIFICANCE: Combining phosphene (visual) and SSD (auditory) modalities was effective even without SSD training and improved light localisation accuracy and response times. Spatial resolution performance was dominated by the auditory SSD. The results suggest there may be a benefit to including auditory cues when training vision prosthesis recipients.
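The image-to-sound mapping described in the Approach section can be sketched as a per-pixel function. The code below is a simplified illustration of this style of encoding, not The vOICe's actual algorithm; the function name, grid size, and parameter ranges are all assumptions.

```python
def pixel_to_audio(col, row, intensity, n_cols=32, n_rows=32,
                   max_itd_ms=0.6, f_low=200.0, f_high=8000.0):
    """Map one pixel to (interaural time difference in ms, pitch in Hz,
    amplitude in [0, 1]). All ranges here are illustrative assumptions."""
    # Horizontal position -> stereo timing difference (negative = left ear leads).
    itd_ms = (col / (n_cols - 1) - 0.5) * 2.0 * max_itd_ms
    # Vertical position -> pitch on a log-frequency axis (top row = highest).
    frac = 1.0 - row / (n_rows - 1)
    freq_hz = f_low * (f_high / f_low) ** frac
    # Pixel brightness -> audio amplitude.
    amplitude = max(0.0, min(1.0, intensity / 255.0))
    return itd_ms, freq_hz, amplitude

print(pixel_to_audio(0, 0, 255))   # far-left, top-row, full-brightness pixel
```

A full encoder would sum or sequence these per-pixel tones into a soundscape; the point of the sketch is simply that each spatial dimension gets its own independent audio parameter, which is what lets listeners localise light sources by ear.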
Affiliation(s)
- Jessica Kvansakul
- Bionics Institute, East Melbourne, VIC, Australia
- Department of Medical Bionics, University of Melbourne, Parkville, VIC, Australia
12
Negen J, Chere B, Bird LA, Taylor E, Roome HE, Keenaghan S, Thaler L, Nardini M. Sensory cue combination in children under 10 years of age. Cognition 2019;193:104014. DOI: 10.1016/j.cognition.2019.104014. Received 12/20/2018; revised 06/20/2019; accepted 06/22/2019.
13
Thaler L, Zhang X, Antoniou M, Kish DC, Cowie D. The flexible action system: Click-based echolocation may replace certain visual functionality for adaptive walking. J Exp Psychol Hum Percept Perform 2019;46:21-35. PMID: 31556685. PMCID: PMC6936248. DOI: 10.1037/xhp0000697.
Abstract
People use sensory, and in particular visual, information to guide actions such as walking around obstacles, grasping, or reaching. However, it is presently unclear how malleable the sensorimotor system is. The present study investigated this by measuring how click-based echolocation may be used to avoid obstacles while walking. We tested 7 blind echolocation experts, 14 sighted echolocation beginners, and 10 blind echolocation beginners. For comparison, we also tested 10 sighted participants, who used vision. To maximize the relevance of our research for people with vision impairments, we also included a condition where the long cane was used and considered obstacles at different elevations. Motion capture and sound data were acquired simultaneously. We found that echolocation experts walked just as fast as sighted participants using vision, and faster than either sighted or blind echolocation beginners. Walking paths of echolocation experts indicated early and smooth adjustments, similar to those shown by sighted people using vision and different from the later and more abrupt adjustments of beginners. Further, for all participants, the use of echolocation significantly decreased collision frequency with obstacles at head level, but not at ground level. Further analyses showed that participants who made clicks with higher spectral frequency content walked faster, and that for experts higher clicking rates were associated with faster walking. The results highlight that people can use novel sensory information (here, echolocation) to guide actions, demonstrating the action system’s ability to adapt to changes in sensory input. They also highlight that regular use of echolocation enhances sensory-motor coordination for walking in blind people. Vision loss has negative consequences for people’s mobility. The current report demonstrates that echolocation might replace certain visual functionality for adaptive walking.
Importantly, the report also highlights that echolocation and long cane are complementary mobility techniques. The findings have direct relevance for professionals involved in mobility instruction and for people who are blind.
Affiliation(s)
- Xinyu Zhang
- School of Information and Electronics, Beijing Institute of Technology
- Michail Antoniou
- Department of Electronic Electrical and Systems Engineering, School of Engineering, University of Birmingham