1. Sound-induced flash illusion is modulated by the depth of auditory stimuli: Evidence from younger and older adults. Atten Percept Psychophys 2022; 84:2040-2050. DOI: 10.3758/s13414-022-02537-9
2. Peripersonal Space from a multisensory perspective: the distinct effect of the visual and tactile components of Visuo-Tactile stimuli. Exp Brain Res 2022; 240:1205-1217. PMID: 35178603; PMCID: PMC9015983; DOI: 10.1007/s00221-022-06324-8
Abstract
Peripersonal Space (PPS) is defined as the space close to the body where all interactions between the individual and the environment take place. Behavioural experiments on PPS exploit multisensory integration, using Multisensory Visuo-Tactile stimuli (MVT) whose visual and tactile components target the same body part (e.g. the face, the hand, the foot). However, the effects of visual and tactile stimuli targeting different body parts on PPS representation are unknown, and the relationship with the RTs to Tactile-Only stimuli is unclear. In this study, we addressed two research questions: (1) whether MVT-RTs are independent of Tactile-Only RTs and whether the latter are influenced by time-dependency effects, and (2) whether PPS estimations derived from MVT-RTs depend on the location of the Visual or Tactile component of the MVT. We studied 40 right-handed participants, manipulating the body location (right hand, cheek or foot) and the distance of administration. Visual and Tactile components targeted the same or different body parts, and the Visual component was delivered at five distances. RTs to Tactile-Only trials showed a non-monotonic trend that depended on the delay of stimulus administration. Moreover, RTs to Multisensory Visuo-Tactile trials depended on the distance and on the location of the Visual component of the stimulus. In conclusion, our results show that Tactile-Only RTs should be subtracted from Visuo-Tactile RTs, and that the Visual and Tactile components of Visuo-Tactile stimuli need not target the same body part. These results have important methodological implications for the study of PPS representations.
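In paradigms like this one, the extent of PPS is commonly estimated by fitting a sigmoid to (tactile-baseline-corrected) RTs as a function of stimulus distance, with the central point of the sigmoid taken as the boundary estimate. A minimal sketch in Python; the data values, parameter names, and starting guesses below are illustrative assumptions, not the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(d, ymin, ymax, d_c, slope):
    """RT benefit as a sigmoidal function of distance d; the central
    point d_c is often taken as an estimate of the PPS boundary."""
    return ymin + (ymax - ymin) / (1.0 + np.exp(-slope * (d - d_c)))

# Hypothetical tactile-baseline-corrected RTs (ms) at five distances (cm):
# strong multisensory facilitation near the body, fading with distance.
distances = np.array([10.0, 30.0, 50.0, 70.0, 90.0])
rts = np.array([-40.0, -38.0, -15.0, -4.0, -2.0])

params, _ = curve_fit(sigmoid, distances, rts,
                      p0=[-40.0, 0.0, 50.0, 0.1], maxfev=10000)
pps_boundary = params[2]  # fitted central point, in cm
```

The fitted central point shifts under manipulations such as tool use, which is how plasticity of the PPS representation is typically quantified in this literature.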
3. Plewan T, Rinkenauer G. Visual search in virtual 3D space: the relation of multiple targets and distractors. Psychol Res 2021; 85:2151-2162. PMID: 33388993; PMCID: PMC8357743; DOI: 10.1007/s00426-020-01392-3
Abstract
Visual search and attentional alignment in 3D space are potentially modulated by information in unattended depth planes. The number of relevant and irrelevant items, as well as their spatial relations, may contribute to such effects. At the behavioral level, it may matter whether multiple distractors are presented in front of or behind target items. However, several studies have revealed that attention cannot be restricted to a single depth plane. To further investigate this issue, two experiments were conducted. In the first experiment, participants searched for (multiple) targets in one depth plane, while non-target items (distractors) were simultaneously presented in the same or another depth plane. In the second experiment, an additional spatial cue with varying validity was presented to highlight the target position. Search durations were generally shorter when the search array contained two additional targets and markedly longer when three distractors were displayed. The latter effect was most pronounced when a single target and three distractors coincided in the same depth plane, and it persisted even when the target position was validly cued. The study reveals that the depth relation of target and distractor stimuli was more important than the absolute distance between these objects. Furthermore, the present findings suggest that within an attended depth plane, irrelevant information elicits strong interference. In sum, this study provides further evidence that the allocation of attention is a flexible process that may be modulated by a variety of perceptual and cognitive factors.
Affiliation(s)
- Thorsten Plewan: Department of Ergonomics, Leibniz Research Centre for Working Environment and Human Factors Dortmund, Ardeystr. 67, 44139 Dortmund, Germany; Psychology School, Hochschule Fresenius - University of Applied Sciences Düsseldorf, Düsseldorf, Germany
- Gerhard Rinkenauer: Department of Ergonomics, Leibniz Research Centre for Working Environment and Human Factors Dortmund, Ardeystr. 67, 44139 Dortmund, Germany
4. Eberhardt LV, Pittino F, Huckauf A. Close - but not distant - conditioned flanker emotion affects crowding. J Vis 2021; 21(8):22. PMID: 34424274; PMCID: PMC8383907; DOI: 10.1167/jov.21.8.22
Abstract
Crowding is affected by conditioned stimulus emotion. This effect is clearly observed for conditioned flankers but only marginally pronounced for conditioned targets. Studies on the processing of emotional stimuli suggest that the magnitude of the emotional effect depends on presentation depth, in that effects of emotion increase with decreasing distance to the observer. Based on these findings, we investigated crowding with stimuli of conditioned negative and neutral emotion across real depth; that is, stimuli were presented closer than, at, or farther away than fixation depth. Conditioned flanker emotion affected crowding when flankers were presented closer than or at fixation depth, the latter being the depth at which the target was presented. Farther away than fixation depth, flanker emotion did not alter crowding (Experiment 1a). Conditioned target emotion, however, showed only weak effects on crowding: there was no clear effect of target emotion when either flankers (Experiment 1b) or targets (Experiment 2) were varied in depth, replicating findings from two-dimensional settings. Taken together, the results suggest that flankers' emotional associations can become important for crowding, although this depends on the specific processing characteristics of stimulus emotion in depth. Conditioned target emotion scarcely affected crowding.
Affiliation(s)
- Ferdinand Pittino: General Psychology, Ulm University, Albert-Einstein-Allee 47, Ulm, Germany
- Anke Huckauf: General Psychology, Ulm University, Albert-Einstein-Allee 47, Ulm, Germany
5. Allocation of attention in 3D space is adaptively modulated by relative position of target and distractor stimuli. Atten Percept Psychophys 2020; 82:1063-1073. PMID: 31773511; DOI: 10.3758/s13414-019-01878-2
Abstract
Allocation of attention across different depth planes is a prerequisite for visual selection in a three-dimensional environment. Previous research showed that participants successfully used stereoscopic depth information to focus their attention. This, however, does not mean that salient information from other depth planes is completely neglected. The present study investigated whether competing visual information is differentially processed when displayed in a single depth plane or across two different depth planes. Moreover, it was of interest whether potential effects were further modulated by the items' relative spatial position (near or far). In three experiments, participants performed a variant of the additional-singleton paradigm. Targets were defined by stereoscopic depth information and as such appeared either in a near or a far depth plane. Distractor stimuli were displayed in the same or in the opposite depth plane. The results consistently showed that visual selection was slower when target and distractor coincided within the same depth plane. There was no general advantage for targets presented in near or far depth planes. However, differential effects of target depth plane and the target-distractor relation were observed. Selection of near targets was more affected by distractors within the same depth plane, while far targets were identified more slowly when the amount of information in closer depth planes increased. While attentional resources could not be exclusively centered on a distinct depth plane, the allocation of attention might be organized along an egocentric gradient through space and vary with the organization of the visual surroundings.
6. Auditory stimuli degrade visual performance in virtual reality. Sci Rep 2020; 10:12363. PMID: 32703981; PMCID: PMC7378072; DOI: 10.1038/s41598-020-69135-3
Abstract
We report an auditory-induced degradation of visual performance in a virtual reality (VR) setting, where the viewing conditions differ significantly from those of previous studies. With the presentation of a temporally congruent but spatially incongruent sound, we can significantly degrade visual performance at the detection and recognition levels. We further show that this effect is robust to different types and locations of both auditory and visual stimuli. We also analyze participants' behavior with an eye tracker to study the underlying cause of the degradation effect. We find that the performance degradation occurs even in the absence of saccades towards the sound source, during normal gaze behavior. This suggests that the effect is not caused by oculomotor phenomena, but rather by neural interactions or attentional shifts.
7. Berger M, Agha NS, Gail A. Wireless recording from unrestrained monkeys reveals motor goal encoding beyond immediate reach in frontoparietal cortex. eLife 2020; 9:e51322. PMID: 32364495; PMCID: PMC7228770; DOI: 10.7554/elife.51322
Abstract
Systems neuroscience of motor cognition regarding the space beyond immediate reach mandates free, yet experimentally controlled, movements. We present an experimental environment (Reach Cage) and a versatile visuo-haptic interaction system (MaCaQuE) for investigating goal-directed whole-body movements of unrestrained monkeys. Two rhesus monkeys conducted instructed walk-and-reach movements towards targets flexibly positioned in the cage. We tracked 3D multi-joint arm and head movements using markerless motion capture. Movements show small trial-to-trial variability despite being unrestrained. We wirelessly recorded 192 broad-band neural signals from three cortical sensorimotor areas simultaneously. Single-unit activity is selective for different reach and walk-and-reach movements. Walk-and-reach targets could be decoded from premotor and parietal, but not motor, cortical activity during movement planning. The Reach Cage allows systems-level sensorimotor neuroscience studies with full-body movements in a configurable 3D spatial setting with unrestrained monkeys. We conclude that the primate frontoparietal network encodes reach goals beyond immediate reach during movement planning.
Affiliation(s)
- Michael Berger: Cognitive Neuroscience Laboratory, German Primate Center – Leibniz-Institute for Primate Research, Goettingen, Germany; Faculty of Biology and Psychology, University of Goettingen, Goettingen, Germany
- Naubahar Shahryar Agha: Cognitive Neuroscience Laboratory, German Primate Center – Leibniz-Institute for Primate Research, Goettingen, Germany
- Alexander Gail: Cognitive Neuroscience Laboratory, German Primate Center – Leibniz-Institute for Primate Research, Goettingen, Germany; Faculty of Biology and Psychology, University of Goettingen, Goettingen, Germany; Leibniz-ScienceCampus Primate Cognition, Goettingen, Germany; Bernstein Center for Computational Neuroscience, Goettingen, Germany
8. Spatially incongruent sounds affect visual localization in virtual environments. Atten Percept Psychophys 2020; 82:2067-2075. PMID: 31900858; DOI: 10.3758/s13414-019-01929-8
Abstract
Distance underestimation along the depth plane is widely found in virtual environments. However, past findings have shown that changes in the visual aspects of virtual reality settings do not lead to more accurate depth estimates. We therefore examined whether nonvisual stimuli, namely sounds, could serve as cues that affect observers' depth perception. Accordingly, we conducted two distance discrimination tasks to examine whether observers' depth localization is affected by a spatially incongruent sound. In Experiment 1, a spatially incongruent sound made a visual target appear farther away than a visual target presented with no sound, but only when a far-distance range (i.e., longer than 12 m) was introduced. Experiment 2 further indicated that the sound shifted visual localization only when the audiovisual spatial disparity did not exceed 4°. Taken together, our findings suggest that the depth localization of a visual object in virtual reality can be altered by a spatially incongruent sound, and that such a sound could be adopted as a cue to reduce depth compression in VR.
9. Hutmacher F. Why Is There So Much More Research on Vision Than on Any Other Sensory Modality? Front Psychol 2019; 10:2246. PMID: 31636589; PMCID: PMC6787282; DOI: 10.3389/fpsyg.2019.02246
Abstract
Why is there so much more research on vision than on any other sensory modality? There is a seemingly easy answer to this question: because vision is our most important and most complex sense. Although there are arguments in favor of this explanation, it can be challenged in two ways: by showing that the arguments regarding the importance and complexity of vision are debatable, and by demonstrating that other aspects need to be taken into account. Here, I argue that the explanation is debatable, as there are various ways of defining “importance” and “complexity”, and as there is no clear consensus that vision is indeed the most important and most complex of our senses. Hence, I propose two additional explanations. According to the methodological-structural explanation, there is more research on vision because available, present-day technology is better suited for studying vision than for studying other modalities – an advantage that is most likely the result of an initial bias toward vision, which reinforces itself. Possible reasons for such an initial bias are discussed. The cultural explanation emphasizes that the dominance of the visual is not an unchangeable constant but rather the result of the way our societies are designed, and thus heavily influenced by human decision-making. As it turns out, there is no universal hierarchy of the senses, but great historical and cross-cultural variation. Realizing that the dominance of the visual is socially and culturally reinforced, and not simply a law of nature, gives us the opportunity to step back and think about the kind of sensory environments we want to create and about the kinds of theories that need to be developed in research.
Affiliation(s)
- Fabian Hutmacher: Department of Psychology, University of Regensburg, Regensburg, Germany
10. Noel JP, Serino A, Wallace MT. Increased Neural Strength and Reliability to Audiovisual Stimuli at the Boundary of Peripersonal Space. J Cogn Neurosci 2019; 31:1155-1172. DOI: 10.1162/jocn_a_01334
Abstract
The actionable space surrounding the body, referred to as peripersonal space (PPS), has been the subject of significant interest of late within the broader framework of embodied cognition. Neurophysiological and neuroimaging studies have shown the representation of PPS to be built from visuotactile and audiotactile neurons within a frontoparietal network and whose activity is modulated by the presence of stimuli in proximity to the body. In contrast to single-unit and fMRI studies, an area of inquiry that has received little attention is the EEG characterization associated with PPS processing. Furthermore, although PPS is encoded by multisensory neurons, to date there has been no EEG study systematically examining neural responses to unisensory and multisensory stimuli as these are presented outside, near, and within the boundary of PPS. Similarly, it remains poorly understood whether multisensory integration is generally more likely at certain spatial locations (e.g., near the body) or whether the cross-modal tactile facilitation that occurs within PPS is simply due to a reduction in the distance between sensory stimuli when close to the body, in line with the spatial principle of multisensory integration. In the current study, to examine the neural dynamics of multisensory processing within and beyond the PPS boundary, we presented auditory, visual, and audiovisual stimuli at various distances relative to participants' reaching limit—an approximation of PPS—while recording continuous high-density EEG. We asked whether multisensory (vs. unisensory) processing varies as a function of stimulus–observer distance. Results demonstrate a significant increase of global field power (i.e., overall strength of response across the entire electrode montage) for stimuli presented at the PPS boundary—an increase that is largest under multisensory (i.e., audiovisual) conditions. Source localization of the major contributors to this global field power difference suggests neural generators in the intraparietal sulcus and insular cortex, hubs for visuotactile and audiotactile PPS processing. Furthermore, when neural dynamics are examined in more detail, changes in the reliability of evoked potentials in centroparietal electrodes are predictive, on a subject-by-subject basis, of the later changes in estimated current strength at the intraparietal sulcus linked to stimulus proximity to the PPS boundary. Together, these results provide a previously unrealized view into the neural dynamics and temporal code associated with the encoding of nontactile multisensory stimuli around the PPS boundary.
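Global field power, the summary measure used in this study, is simply the spatial standard deviation of the scalp potential across all electrodes at each time point. A minimal NumPy sketch; the array shapes and toy data are assumptions for illustration:

```python
import numpy as np

def global_field_power(eeg):
    """GFP: standard deviation across electrodes at each time point,
    for data shaped (n_electrodes, n_times)."""
    avg_ref = eeg - eeg.mean(axis=0, keepdims=True)  # re-reference to the average
    return np.sqrt((avg_ref ** 2).mean(axis=0))      # spatial RMS per time point

# Toy data: 64 electrodes, 500 time samples
rng = np.random.default_rng(0)
eeg = rng.normal(size=(64, 500))
gfp = global_field_power(eeg)  # one value per time sample
```

Larger GFP at a given latency indicates a stronger overall evoked response; a distance comparison like the one described above would contrast these per-condition GFP curves.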
Affiliation(s)
- Andrea Serino: University of Lausanne; Ecole Polytechnique Federale de Lausanne
11. Patané I, Cardinali L, Salemme R, Pavani F, Farnè A, Brozzoli C. Action Planning Modulates Peripersonal Space. J Cogn Neurosci 2019; 31:1141-1154. DOI: 10.1162/jocn_a_01349
Abstract
Peripersonal space is a multisensory representation relying on the processing of tactile and visual stimuli presented on and close to different body parts. The most studied peripersonal space representation is perihand space (PHS), a highly plastic representation modulated following tool use and by the rapid approach of visual objects. Given these properties, PHS may serve different sensorimotor functions, including guidance of voluntary actions such as object grasping. Strong support for this hypothesis would derive from evidence that PHS plastic changes occur before the upcoming movement rather than after its initiation, yet to date, such evidence is scant. Here, we tested whether action-dependent modulation of PHS, behaviorally assessed via visuotactile perception, may occur before an overt movement as early as the action planning phase. To do so, we probed tactile and visuotactile perception at different time points before and during the grasping action. Results showed that visuotactile perception was more strongly affected during the planning phase (250 msec after vision of the target) than during a similarly static but earlier phase (50 msec after vision of the target). Visuotactile interaction was also enhanced at the onset of hand movement, and it further increased during subsequent phases of hand movement. Such a visuotactile interaction featured interference effects during all phases from action planning onward as well as a facilitation effect at the movement onset. These findings reveal that planning to grab an object strengthens the multisensory interaction of visual information from the target and somatosensory information from the hand. Such early updating of the visuotactile interaction reflects multisensory processes supporting motor planning of actions.
Affiliation(s)
- Ivan Patané: INSERM U1028, CNRS U5292, Lyon, France; University of Bologna; University of Lyon 1; Hospices Civils de Lyon
- Romeo Salemme: INSERM U1028, CNRS U5292, Lyon, France; University of Lyon 1; Hospices Civils de Lyon
- Alessandro Farnè: INSERM U1028, CNRS U5292, Lyon, France; University of Lyon 1; Hospices Civils de Lyon; University of Trento
- Claudio Brozzoli: INSERM U1028, CNRS U5292, Lyon, France; University of Lyon 1; Hospices Civils de Lyon; Karolinska Institutet
12. Van der Stoep N, Van der Stigchel S, Van Engelen RC, Biesbroek JM, Nijboer TCW. Impairments in Multisensory Integration after Stroke. J Cogn Neurosci 2019; 31:885-899. PMID: 30883294; DOI: 10.1162/jocn_a_01389
Abstract
The integration of information from multiple senses leads to a plethora of behavioral benefits, most predominantly to faster and better detection, localization, and identification of events in the environment. Although previous studies of multisensory integration (MSI) in humans have provided insights into the neural underpinnings of MSI, studies of MSI at a behavioral level in individuals with brain damage are scarce. Here, a well-known psychophysical paradigm (the redundant target paradigm) was employed to quantify MSI in a group of stroke patients. The relation between MSI and lesion location was analyzed using lesion subtraction analysis. Twenty-one patients with ischemic infarctions and 14 healthy control participants responded to auditory, visual, and audiovisual targets in the left and right visual hemifield. Responses to audiovisual targets were faster than to unisensory targets. This could be due to MSI or statistical facilitation. Comparing the audiovisual RTs to the winner of a race between unisensory signals allowed us to determine whether participants could integrate auditory and visual information. The results indicated that (1) 33% of the patients showed an impairment in MSI; (2) patients with MSI impairment had left hemisphere and brainstem/cerebellar lesions; and (3) the left caudate, left pallidum, left putamen, left thalamus, left insula, left postcentral and precentral gyrus, left central opercular cortex, left amygdala, and left OFC were more often damaged in patients with MSI impairments. These results are the first to demonstrate the impact of brain damage on MSI in stroke patients using a well-established psychophysical paradigm.
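The "race" comparison mentioned above is commonly formalized with Miller's race model inequality: the cumulative RT distribution for redundant (audiovisual) targets is compared against the sum of the two unisensory distributions, and exceeding that bound is taken as evidence of integration rather than statistical facilitation. A sketch of that computation; the function and variable names, and the toy RTs, are illustrative assumptions:

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative RT distribution evaluated at each time in t_grid."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / rts.size

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Positive values: audiovisual responses are faster than any race
    between independent unisensory channels can explain (Miller's bound)."""
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound

# Toy RTs (ms): audiovisual responses faster than either unisensory condition
t_grid = np.array([250.0, 400.0])
violation = race_model_violation([300, 310], [305, 315], [200, 210], t_grid)
```

Under an operationalization along these lines, a patient whose audiovisual CDF never exceeds the bound would show no benefit beyond statistical facilitation, i.e. would be classified as MSI-impaired.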
Affiliation(s)
- Tanja C W Nijboer: Helmholtz Institute, Utrecht University; Brain Center Rudolph Magnus, University Medical Center, Utrecht University; Center for Brain Rehabilitation Medicine, Utrecht Medical Center, Utrecht University
13. Bufacchi RJ, Iannetti GD. An Action Field Theory of Peripersonal Space. Trends Cogn Sci 2018; 22:1076-1090. PMID: 30337061; PMCID: PMC6237614; DOI: 10.1016/j.tics.2018.09.004
Abstract
Predominant conceptual frameworks often describe peripersonal space (PPS) as a single, distance-based, in-or-out zone within which stimuli elicit enhanced neural and behavioural responses. Here we argue that this intuitive framework is contradicted by neurophysiological and behavioural data. First, PPS-related measures are not binary, but graded with proximity. Second, they are strongly influenced by factors other than proximity, such as walking, tool use, stimulus valence, and social cues. Third, many different PPS-related responses exist, and each can be used to describe a different space. Here, we reconceptualise PPS as a set of graded fields describing behavioural relevance of actions aiming to create or avoid contact between objects and the body. This reconceptualisation incorporates PPS into mainstream theories of action selection and behaviour.
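The contrast the authors draw, between an in-or-out zone and graded fields modulated by non-proximity factors, can be made concrete with a toy model. The functional forms, parameters, and gain factors below are illustrative assumptions only, not the authors' formalism:

```python
import numpy as np

def binary_pps(distance_cm, boundary_cm=40.0):
    """The intuitive in-or-out view: full response inside a boundary, none outside."""
    return np.where(np.asarray(distance_cm) <= boundary_cm, 1.0, 0.0)

def graded_field(distance_cm, scale_cm=25.0, valence_gain=1.0, walking_gain=1.0):
    """Field view: contact-action relevance decays smoothly with distance
    and is rescaled by non-spatial factors (valence, locomotion, ...)."""
    return valence_gain * walking_gain * np.exp(-np.asarray(distance_cm) / scale_cm)

d = np.array([10.0, 40.0, 80.0])          # probe distances (cm)
step = binary_pps(d)                      # abrupt: in, in, out
field = graded_field(d)                   # graded, monotonically decreasing
threat = graded_field(d, valence_gain=1.5)  # a high-valence stimulus raises the whole field
```

The point of the toy model is that the second function never crosses a hard boundary: every PPS-related measure becomes a smooth function of proximity, shifted as a whole by behavioural context.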
Affiliation(s)
- Rory J Bufacchi: Department of Neuroscience, Physiology and Pharmacology, University College London, London, UK; Centre for Mathematics and Physics in the Life Sciences and Experimental Biology (CoMPLEX), University College London, London, UK
- Gian Domenico Iannetti: Department of Neuroscience, Physiology and Pharmacology, University College London, London, UK; Centre for Mathematics and Physics in the Life Sciences and Experimental Biology (CoMPLEX), University College London, London, UK; Neuroscience and Behaviour Laboratory, Istituto Italiano di Tecnologia, Rome, Italy
14. Blini E, Desoche C, Salemme R, Kabil A, Hadj-Bouziane F, Farnè A. Mind the Depth: Visual Perception of Shapes Is Better in Peripersonal Space. Psychol Sci 2018; 29:1868-1877. PMID: 30285541; PMCID: PMC6238160; DOI: 10.1177/0956797618795679
Abstract
Closer objects are invariably perceived as bigger than farther ones and are therefore easier to detect and discriminate. This is so deeply grounded in our daily experience that no question has been raised as to whether the advantage for near objects depends on other features (e.g., depth itself). In a series of five experiments (N = 114), we exploited immersive virtual environments and visual illusions (i.e., Ponzo) to probe humans’ perceptual abilities in depth and, specifically, in the space closely surrounding our body, termed peripersonal space. We reversed the natural distance scaling of size in favor of the farther object, which thus appeared bigger, to demonstrate a persistent shape-discrimination advantage for close objects. Psychophysical modeling further suggested a sigmoidal trend for this benefit, mirroring that found for multisensory estimates of peripersonal space. We argue that depth is a fundamental, yet overlooked, dimension of human perception and that future studies in vision and perception should be depth aware.
Affiliation(s)
- Elvio Blini: Integrative Multisensory Perception Action & Cognition Team (ImpAct), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France; University of Lyon 1
- Clément Desoche: Hospices Civils de Lyon, Neuro-Immersion Platform, Lyon, France
- Romeo Salemme: Integrative Multisensory Perception Action & Cognition Team (ImpAct), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France; Hospices Civils de Lyon, Neuro-Immersion Platform, Lyon, France
- Alexandre Kabil: Hospices Civils de Lyon, Neuro-Immersion Platform, Lyon, France
- Fadila Hadj-Bouziane: Integrative Multisensory Perception Action & Cognition Team (ImpAct), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France; University of Lyon 1
- Alessandro Farnè: Integrative Multisensory Perception Action & Cognition Team (ImpAct), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France; University of Lyon 1; Hospices Civils de Lyon, Neuro-Immersion Platform, Lyon, France
15.
Abstract
The construction of a coherent representation of our body and the mapping of the space immediately surrounding it are of the highest ecological importance. This space has at least three specificities: it is a space where actions are planned in order to interact with our environment; it is a space that contributes to the experience of self and self-boundaries, through tactile processing and multisensory interactions; last, it is a space that contributes to the experience of body integrity against external events. In recent decades, numerous studies have investigated peripersonal space (PPS), defined as the space directly surrounding us with which we can interact (for reviews, see Cléry et al., 2015b; de Vignemont and Iannetti, 2015; di Pellegrino and Làdavas, 2015). These studies have contributed to the understanding of how this space is constructed, encoded and modulated. The majority of these studies focused on subparts of PPS (the hand, the face or the trunk), and very few investigated the interactions between PPS subparts. In the present review, we summarize the latest advances in this research and discuss the new perspectives that are set forth for future investigations on this topic. We describe the most recent methods used to estimate PPS boundaries by means of dynamic stimuli. We then highlight how impact prediction and approaching stimuli modulate this space through social, emotional and action-related components involving principally a parieto-frontal network. Next, we review evidence that there is not a unique representation of PPS but at least three sub-sections (hand, face and trunk PPS). Last, we discuss how these subspaces interact, and we question whether and how bodily self-consciousness (BSC) is functionally and behaviorally linked to PPS.
Collapse
Affiliation(s)
- Justine Cléry
- UMR5229, Institut des Sciences Cognitives Marc Jeannerod, CNRS-Université Claude Bernard Lyon I, Bron, France
- Suliann Ben Hamed
- UMR5229, Institut des Sciences Cognitives Marc Jeannerod, CNRS-Université Claude Bernard Lyon I, Bron, France
|
16
|
Yamasaki D, Miyoshi K, Altmann CF, Ashida H. Front-Presented Looming Sound Selectively Alters the Perceived Size of a Visual Looming Object. Perception 2018; 47:751-771. [PMID: 29783921 DOI: 10.1177/0301006618777708] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Despite accumulating evidence for the spatial rule governing cross-modal interaction, whereby interaction depends on the spatial consistency of the stimuli, it is still unclear whether 3D spatial consistency (i.e., the front/rear location of stimuli relative to the body) also regulates audiovisual interaction. We investigated how sounds with increasing or decreasing intensity (looming or receding sounds), presented from the front or rear space of the body, affect the perceived size of a dynamic visual object. Participants performed a size-matching task (Experiments 1 and 2) and a size-adjustment task (Experiment 3) on visual stimuli with increasing or decreasing diameter, while being exposed to a front- or rear-presented sound with increasing or decreasing intensity. Across these experiments, only the front-presented looming sound caused the size of the spatially consistent looming visual stimulus to be overestimated; it did not affect the spatially inconsistent or the receding visual stimulus. The receding sound had no significant effect on vision. Our results reveal that a looming sound alters dynamic visual size perception depending on the consistency of the approaching quality and the front-rear spatial location of the audiovisual stimuli, suggesting that the human brain processes audiovisual inputs differently based on their 3D spatial consistency. This selective interaction between looming signals should contribute to faster detection of approaching threats. Our findings extend the spatial rule governing audiovisual interaction into 3D space.
Affiliation(s)
- Christian F Altmann
- Human Brain Research Center, Graduate School of Medicine, Kyoto University, Japan
|
17
|
Audio-visual sensory deprivation degrades visuo-tactile peri-personal space. Conscious Cogn 2018; 61:61-75. [DOI: 10.1016/j.concog.2018.04.001] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2017] [Revised: 03/15/2018] [Accepted: 04/02/2018] [Indexed: 11/24/2022]
|
18
|
Audiovisual integration in depth: multisensory binding and gain as a function of distance. Exp Brain Res 2018; 236:1939-1951. [PMID: 29700577 PMCID: PMC6010498 DOI: 10.1007/s00221-018-5274-7] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2017] [Accepted: 02/19/2018] [Indexed: 11/01/2022]
Abstract
The integration of information across sensory modalities depends on the spatiotemporal characteristics of the paired stimuli. Despite the large variation in the distances at which events occur in our environment, relatively little is known about how stimulus-observer distance affects multisensory integration. Prior work has suggested that exteroceptive stimuli are integrated over larger temporal intervals in near relative to far space, and that larger multisensory facilitations are evident in far relative to near space. Here, we examined the interrelationship between these previously established distance-related features of multisensory processing. Participants performed an audiovisual simultaneity judgment and a redundant-target task in near and far space, while audiovisual stimuli were presented at a range of temporal delays (i.e., stimulus onset asynchronies). In line with previous findings, temporal acuity was poorer in near relative to far space. Furthermore, reaction times to asynchronously presented audiovisual targets suggested a temporal window for fast detection: a range of stimulus asynchronies that was also larger in near than in far space. However, multisensory response enhancement was observed only over a restricted range of relatively small (i.e., 150 ms) asynchronies and did not differ significantly between near and far space. Furthermore, for synchronous presentations, these distance-related (i.e., near vs. far) modulations in temporal acuity and multisensory gain correlated negatively at the individual-subject level. The findings thus support the conclusion that multisensory temporal binding and gain are asymmetrically modulated as a function of distance from the observer, and show that this relationship is specific to temporally synchronous audiovisual stimulus presentations.
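The "multisensory response enhancement" and redundant-target logic described in this abstract can be made concrete with a small sketch. The snippet below is illustrative only (the data, variable names, and distribution parameters are hypothetical, not taken from the paper): it computes the percentage speed-up of bimodal reaction times over the faster unisensory condition, and checks the bimodal RT distribution against Miller's race-model bound, the standard test for whether redundant-target gains exceed what parallel unisensory processing alone could produce.

```python
# Illustrative sketch, not the paper's analysis: multisensory response
# enhancement (MRE) and a race-model check on simulated reaction times.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reaction times (ms) for unisensory and audiovisual trials.
rt_auditory = rng.normal(320, 40, 500)
rt_visual = rng.normal(300, 35, 500)
rt_audiovisual = rng.normal(265, 30, 500)

# MRE: percentage speed-up of the bimodal mean RT relative to the
# faster of the two unisensory conditions.
fastest_uni = min(rt_auditory.mean(), rt_visual.mean())
mre = 100 * (fastest_uni - rt_audiovisual.mean()) / fastest_uni

# Miller's race-model inequality: at every time t the bimodal CDF must
# not exceed the sum of the unisensory CDFs if a parallel race of
# independent unisensory processes is sufficient to explain the gain.
t = np.linspace(150, 500, 200)
cdf = lambda rt: np.searchsorted(np.sort(rt), t) / rt.size
violation = cdf(rt_audiovisual) - np.minimum(cdf(rt_auditory) + cdf(rt_visual), 1.0)

print(f"multisensory enhancement: {mre:.1f}%")
print(f"max race-model violation: {violation.max():.3f}")
```

A positive violation anywhere along t indicates integration beyond statistical facilitation; in the paper's design this comparison would be repeated per distance (near vs. far) and per stimulus onset asynchrony.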
|
19
|
Spence C, Lee J, Van der Stoep N. Responding to sounds from unseen locations: crossmodal attentional orienting in response to sounds presented from the rear. Eur J Neurosci 2017; 51:1137-1150. [PMID: 28973789 DOI: 10.1111/ejn.13733] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2017] [Revised: 09/27/2017] [Accepted: 09/27/2017] [Indexed: 11/28/2022]
Abstract
To date, most of the research on spatial attention has focused on probing people's responses to stimuli presented in frontal space. That is, few researchers have attempted to assess what happens in the space that is currently unseen (essentially rear space). In a sense, then, 'out of sight' is, very much, 'out of mind'. In this review, we highlight what is presently known about the perception and processing of sensory stimuli (focusing on sounds) whose source is not currently visible. We briefly summarize known differences in the localizability of sounds presented from different locations in 3D space, and discuss the consequences for the crossmodal attentional and multisensory perceptual interactions taking place in various regions of space. The latest research now clearly shows that the kinds of crossmodal interactions that take place in rear space are very often different in kind from those that have been documented in frontal space. Developing a better understanding of how people respond to unseen sound sources in naturalistic environments by integrating findings emerging from multiple fields of research will likely lead to the design of better warning signals in the future. This review highlights the need for neuroscientists interested in spatial attention to spend more time researching what happens (in terms of the covert and overt crossmodal orienting of attention) in rear space.
Affiliation(s)
- Charles Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, OX1 3UD, UK
- Jae Lee
- Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, OX1 3UD, UK
- Nathan Van der Stoep
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
|
20
|
Bufacchi RJ. Approaching threatening stimuli cause an expansion of defensive peripersonal space. J Neurophysiol 2017; 118:1927-1930. [PMID: 28539400 DOI: 10.1152/jn.00316.2017] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2017] [Revised: 05/24/2017] [Accepted: 05/24/2017] [Indexed: 11/22/2022] Open
Abstract
When sudden environmental stimuli signaling threat occur in the portion of space surrounding the body (defensive peripersonal space), defensive responses are enhanced. Recently, Bisio et al. (Bisio A, Garbarini F, Biggio M, Fossataro C, Ruggeri P, Bove M. J Neurosci 37: 2415-2424, 2017) showed that a marker of defensive peripersonal space, the defensive hand-blink reflex, is modulated by the motion of the eliciting threatening stimulus. These results can be parsimoniously explained by the continuous monitoring of environmental threats, resulting in an expansion of defensive peripersonal space when threatening stimuli approach.
Affiliation(s)
- R J Bufacchi
- Department of Neuroscience, Physiology and Pharmacology, University College London (UCL), London, United Kingdom
- Centre for Mathematics and Physics in the Life Sciences and EXperimental Biology (CoMPLEX), University College London, London, United Kingdom
|