1. Cary E, Lahdesmaki I, Badde S. Audiovisual simultaneity windows reflect temporal sensory uncertainty. Psychon Bull Rev 2024. PMID: 38388825. DOI: 10.3758/s13423-024-02478-4.
Abstract
The ability to judge the temporal alignment of visual and auditory information is a prerequisite for multisensory integration and segregation. However, each temporal measurement is subject to error. Thus, when judging whether a visual and auditory stimulus were presented simultaneously, observers must rely on a subjective decision boundary to distinguish between measurement error and truly misaligned audiovisual signals. Here, we tested whether these decision boundaries are relaxed with increasing temporal sensory uncertainty, i.e., whether participants make the same type of adjustment an ideal observer would make. Participants judged the simultaneity of audiovisual stimulus pairs with varying temporal offset, while being immersed in different virtual environments. To obtain estimates of participants' temporal sensory uncertainty and simultaneity criteria in each environment, an independent-channels model was fitted to their simultaneity judgments. In two experiments, participants' simultaneity decision boundaries were predicted by their temporal uncertainty, which varied unsystematically with the environment. Hence, observers used a flexibly updated estimate of their own audiovisual temporal uncertainty to establish subjective criteria of simultaneity. This finding implies that, under typical circumstances, audiovisual simultaneity windows reflect an observer's cross-modal temporal uncertainty.
Affiliation(s)
- Emma Cary
- Department of Psychology, Tufts University, Medford, MA 02155, USA
- Ilona Lahdesmaki
- Department of Psychology, Tufts University, Medford, MA 02155, USA
- Stephanie Badde
- Department of Psychology, Tufts University, Medford, MA 02155, USA
2. Badde S, Landy MS, Adams WJ. Multisensory causal inference is feature-specific, not object-based. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220345. PMID: 37545302. PMCID: PMC10404918. DOI: 10.1098/rstb.2022.0345.
Abstract
Multisensory integration depends on causal inference about the sensory signals. We tested whether implicit causal-inference judgements pertain to entire objects or focus on task-relevant object features. Participants in our study judged virtual visual, haptic and visual-haptic surfaces with respect to two features, slant and roughness, against an internal standard in a two-alternative forced-choice task. Modelling of participants' responses revealed that the degree to which their perceptual judgements were based on integrated visual-haptic information varied unsystematically across features. For example, a perceived mismatch between visual and haptic roughness would not deter the observer from integrating visual and haptic slant. These results indicate that participants based their perceptual judgements on a feature-specific selection of information, suggesting that multisensory causal inference proceeds not at the object level but at the level of single object features. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Stephanie Badde
- Department of Psychology, Tufts University, 490 Boston Avenue, Medford, MA 02155, USA
- Michael S. Landy
- Department of Psychology and Center for Neural Science, New York University, 6 Washington Place, New York, NY 10003, USA
- Wendy J. Adams
- Department of Psychology, University of Southampton, 44 Highfield Campus, Southampton SO17 1BJ, UK
3. Montoya S, Badde S. Only visible flicker helps flutter: Tactile-visual integration breaks in the absence of visual awareness. Cognition 2023; 238:105528. PMID: 37354787. DOI: 10.1016/j.cognition.2023.105528.
Abstract
Combining information from multiple senses enhances our perception of the world. Whether we need to be aware of all stimuli to benefit from multisensory integration, however, is still under investigation. Here, we tested whether tactile frequency perception benefits from the presence of congruent visual flicker even if the flicker is so rapid that it is perceptually fused into a steady light and therefore invisible. Our participants completed a tactile frequency discrimination task given either unisensory tactile or congruent tactile-visual stimulation. Tactile and tactile-visual test frequencies ranged from far below to far above participants' flicker fusion threshold (determined separately). For frequencies distinctly below their flicker fusion threshold, participants performed significantly better given tactile-visual stimulation than when presented with only tactile stimuli. Yet, for frequencies above their flicker fusion threshold, participants' tactile frequency perception did not benefit from the presence of congruent but likely fused and thus invisible visual flicker. The results matched the predictions of an ideal-observer model in which tactile-visual integration is conditional on awareness of both stimuli. In contrast, it was impossible to reproduce the observed results with a model that assumed tactile-visual integration proceeds irrespective of stimulus awareness. In sum, we revealed that the benefits of congruent visual stimulation for tactile flutter frequency perception depend on the visibility of the visual flicker, suggesting that multisensory integration requires awareness.
Affiliation(s)
- Sofia Montoya
- Department of Psychology, Tufts University, 490 Boston Avenue, Medford, MA 02155, USA
- Stephanie Badde
- Department of Psychology, Tufts University, 490 Boston Avenue, Medford, MA 02155, USA
4. Badde S, Heed T. The hands' default location guides tactile spatial selectivity. Proc Natl Acad Sci U S A 2023; 120:e2209680120. PMID: 37014855. PMCID: PMC10104573. DOI: 10.1073/pnas.2209680120.
Abstract
Our skin is a two-dimensional sheet that can be folded into a multitude of configurations due to the mobility of our body parts. Parts of the human tactile system might account for this flexibility by being tuned to locations in the world rather than on the skin. Using adaptation, we scrutinized the spatial selectivity of two tactile perceptual mechanisms for which the visual equivalents have been reported to be selective in world coordinates: tactile motion and the duration of tactile events. Participants' hand position, uncrossed or crossed, as well as the stimulated hand varied independently across adaptation and test phases. This design distinguished among somatotopic selectivity for locations on the skin and spatiotopic selectivity for locations in the environment, but also tested spatial selectivity that fits neither of these classical reference frames and is based on the default position of the hands. For both features, adaptation consistently affected subsequent tactile perception at the adapted hand, reflecting skin-bound spatial selectivity. Yet, tactile motion and temporal adaptation also transferred across hands, but only if the hands were crossed during the adaptation phase, that is, when one hand was placed at the other hand's typical location. Thus, selectivity for locations in the world was based on default rather than online sensory information about the location of the hands. These results challenge the prevalent dichotomy of somatotopic and spatiotopic selectivity and suggest that prior information about the hands' default position, right hand on the right side, is embedded deep in the tactile sensory system.
Affiliation(s)
- Stephanie Badde
- Department of Psychology, Tufts University, Medford, MA 02155, USA
- Tobias Heed
- Cognitive Psychology, Department of Psychology, University of Salzburg, 5020 Salzburg, Austria
- Centre of Cognitive Neuroscience, University of Salzburg, 5020 Salzburg, Austria
5. Prinz S, Murray JM, Strack C, Nattenmüller J, Pomykala KL, Schlemmer HP, Badde S, Kleesiek J. Novel measures for the diagnosis of hepatic steatosis using contrast-enhanced computer tomography images. Eur J Radiol 2023; 160:110708. PMID: 36724687. DOI: 10.1016/j.ejrad.2023.110708.
Abstract
PURPOSE: Hepatic steatosis is often diagnosed non-invasively. Various measures and accompanying diagnostic thresholds based on contrast-enhanced CT and virtual non-contrast images have been proposed. We compare these established criteria to novel and fully automated measures.
METHOD: CT data sets of 197 patients were analyzed. Regions of interest (ROIs) were manually drawn for the liver, spleen, portal vein, and aorta to calculate four established measures of liver fat. Two novel measures capturing the deviation between the empirical distributions of HU measurements across all voxels within the liver and spleen were calculated. These measures were computed with both manual ROIs and fully automated organ segmentations. Agreement between the different measures was evaluated using correlational analysis, as was their ability to discriminate between fatty and healthy liver.
RESULTS: Established and novel measures of fatty liver showed a high level of agreement. Novel methods were statistically indistinguishable from the established ones when taking established diagnostic thresholds or physicians' diagnoses as ground truth, and this high performance level persisted for automatically selected ROIs.
CONCLUSION: Automatically generated organ segmentations led to results comparable to manual ROIs, suggesting that the implementation of automated methods can prove to be a valuable tool for incidental diagnosis. Differences in the distribution of HU measurements across voxels between liver and spleen can serve as surrogate markers for liver-fat content. Novel measures do not exhibit a measurable disadvantage over established methods based on simpler measures such as across-voxel averages in a population with a low incidence of fatty liver.
Affiliation(s)
- Sebastian Prinz
- Division of Radiology, German Cancer Research Center (DKFZ), 69120 Heidelberg, Germany; Medical Faculty Heidelberg, Heidelberg University, 69120 Heidelberg, Germany
- Jacob M Murray
- Division of Radiology, German Cancer Research Center (DKFZ), 69120 Heidelberg, Germany; Medical Faculty Heidelberg, Heidelberg University, 69120 Heidelberg, Germany; Institute for AI in Medicine (IKIM), University Medicine Essen, 45131 Essen, Germany
- Christian Strack
- Division of Radiology, German Cancer Research Center (DKFZ), 69120 Heidelberg, Germany; Medical Faculty Heidelberg, Heidelberg University, 69120 Heidelberg, Germany
- Johanna Nattenmüller
- Department of Diagnostic and Interventional Radiology, Heidelberg University Hospital, 69120 Heidelberg, Germany; Department of Diagnostic and Interventional Radiology, Medical Center-University of Freiburg, Faculty of Medicine, University of Freiburg, 79106 Freiburg, Germany
- Kelsey L Pomykala
- Institute for AI in Medicine (IKIM), University Medicine Essen, 45131 Essen, Germany
- Heinz-Peter Schlemmer
- Division of Radiology, German Cancer Research Center (DKFZ), 69120 Heidelberg, Germany
- Stephanie Badde
- Department of Psychology, Tufts University, Medford, MA 02155, USA
- Jens Kleesiek
- Institute for AI in Medicine (IKIM), University Medicine Essen, 45131 Essen, Germany; German Cancer Consortium (DKTK), Partner Sites Heidelberg and Essen, 69120 Heidelberg, Germany; Cancer Research Center Cologne Essen, West German Cancer Center Essen, 45122 Essen, Germany
6. Montoya S, Badde S. Flicker helps flutter: visual-tactile integration benefits tactile frequency perception even in the absence of visual awareness. J Vis 2022. DOI: 10.1167/jov.22.14.3682.
7. Hong F, Xu J, Kalia M, Badde S, Landy M. Audiovisual integration across space and time. J Vis 2022. DOI: 10.1167/jov.22.14.4438.
Affiliation(s)
- Jiaming Xu
- Department of Psychology, New York University
- Megha Kalia
- Department of Psychology, New York University
- Michael Landy
- Department of Psychology, New York University
- Center for Neural Science, New York University
8. Hong F, Badde S, Landy MS. Repeated exposure to either consistently spatiotemporally congruent or consistently incongruent audiovisual stimuli modulates the audiovisual common-cause prior. Sci Rep 2022; 12:15532. PMID: 36109544. PMCID: PMC9478143. DOI: 10.1038/s41598-022-19041-7.
Abstract
To estimate an environmental property such as object location from multiple sensory signals, the brain must infer their causal relationship. Only information originating from the same source should be integrated. This inference relies on the characteristics of the measurements, the information the sensory modalities provide on a given trial, as well as on a cross-modal common-cause prior: accumulated knowledge about the probability that cross-modal measurements originate from the same source. We examined the plasticity of this cross-modal common-cause prior. In a learning phase, participants were exposed to a series of audiovisual stimuli that were either consistently spatiotemporally congruent or consistently incongruent; participants' audiovisual spatial integration was measured before and after this exposure. We fitted several Bayesian causal-inference models to the data; the models differed in the plasticity of the common-source prior. Model comparison revealed that, for the majority of the participants, the common-cause prior changed during the learning phase. Our findings reveal that short periods of exposure to audiovisual stimuli with a consistent causal relationship can modify the common-cause prior. In accordance with previous studies, both exposure conditions could either strengthen or weaken the common-cause prior at the participant level. Simulations imply that the direction of the prior update might be mediated by the degree of sensory noise, i.e., the variability of the measurements of the same signal across trials, during the learning phase.
9. Hong F, Badde S, Landy MS. Causal inference regulates audiovisual spatial recalibration via its influence on audiovisual perception. PLoS Comput Biol 2021; 17:e1008877. PMID: 34780469. PMCID: PMC8629398. DOI: 10.1371/journal.pcbi.1008877.
Abstract
To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the audiovisual recalibration phase. We compared participants’ behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability—less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues, and inference about how likely the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. 
We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.

Audiovisual recalibration of spatial perception occurs when we receive audiovisual stimuli with a systematic spatial discrepancy. The brain must determine to what extent each modality should be recalibrated. In this study, we scrutinized the mechanisms the brain employs to do so. To this end, we conducted a classical audiovisual recalibration experiment in which participants were adapted to spatially discrepant audiovisual stimuli. The visual component of the bimodal stimulus was either less, equally, or more reliable than the auditory component. We measured the amount of recalibration by computing the difference between participants' unimodal localization responses before and after the audiovisual recalibration. Across participants, the influence of visual reliability on auditory recalibration varied fundamentally. We compared three models of recalibration. Only a causal-inference model of recalibration captured the diverse influences of cue reliability on recalibration found in our study; this model is also able to replicate contradictory results found in previous studies. In this model, recalibration depends on the discrepancy between a sensory measurement and the perceptual estimate for the same sensory modality. Cue reliability, perceptual biases, and the degree to which participants infer that the two cues come from a common source govern audiovisual perception and therefore audiovisual recalibration.
Affiliation(s)
- Fangfang Hong
- Department of Psychology, New York University, New York City, New York, United States of America
- Stephanie Badde
- Department of Psychology, Tufts University, Medford, Massachusetts, United States of America
- Michael S. Landy
- Department of Psychology, New York University, New York City, New York, United States of America
- Center for Neural Science, New York University, New York City, New York, United States of America
10. Hong F, Badde S, Landy M. Exposure to congruent or incongruent audiovisual stimuli modulates observers' prior about a common cause for vision and audition. J Vis 2021. DOI: 10.1167/jov.21.9.2761.
Affiliation(s)
- Michael Landy
- Department of Psychology, New York University
- Center for Neural Science, New York University
11. Schellekens W, Thio M, Badde S, Winawer J, Ramsey N, Petridou N. A touch of hierarchy: population receptive fields reveal fingertip integration in Brodmann areas in human primary somatosensory cortex. Brain Struct Funct 2021; 226:2099-2112. PMID: 34091731. PMCID: PMC8354965. DOI: 10.1007/s00429-021-02309-5.
Abstract
Several neuroimaging studies have shown the somatotopy of body part representations in primary somatosensory cortex (S1), but the functional hierarchy of distinct subregions in human S1 has not been adequately addressed. The current study investigates the functional hierarchy of cyto-architectonically distinct regions, Brodmann areas BA3, BA1, and BA2, in human S1. During functional MRI experiments, we presented participants with vibrotactile stimulation of the fingertips at three different vibration frequencies. Using population receptive field (pRF) modeling of the fMRI BOLD activity, we identified the hand region in S1 and the somatotopy of the fingertips. For each voxel, the pRF center indicates the finger that most effectively drives the BOLD signal, and the pRF size measures the spatial somatic pooling of fingertips. We find a systematic relationship of pRF sizes from lower-order areas to higher-order areas. Specifically, we found that pRF sizes are smallest in BA3, increase slightly towards BA1, and are largest in BA2, paralleling the increase in visual receptive field size as one ascends the visual hierarchy. Additionally, we find that the time-to-peak of the hemodynamic response in BA3 is roughly 0.5 s earlier compared to BA1 and BA2, further supporting the notion of a functional hierarchy of subregions in S1. These results were obtained during stimulation of different mechanoreceptors, suggesting that different afferent fibers leading up to S1 feed into the same cortical hierarchy.
Affiliation(s)
- W Schellekens
- Department of Radiology, Center for Image Sciences, UMC Utrecht, Q101.132, P.O. Box 85500, 3508 GA Utrecht, The Netherlands
- M Thio
- Department of Radiology, Center for Image Sciences, UMC Utrecht, Q101.132, P.O. Box 85500, 3508 GA Utrecht, The Netherlands
- S Badde
- Department of Psychology and Center for Neural Science, NYU, New York, USA
- J Winawer
- Department of Psychology and Center for Neural Science, NYU, New York, USA
- N Ramsey
- Department of Neurology and Neurosurgery, UMC Utrecht, Utrecht, The Netherlands
- N Petridou
- Department of Radiology, Center for Image Sciences, UMC Utrecht, Q101.132, P.O. Box 85500, 3508 GA Utrecht, The Netherlands
12.
Affiliation(s)
- Michael Landy
- Department of Psychology, New York University
- Center for Neural Science, New York University
13. Badde S, Ley P, Rajendran SS, Shareef I, Kekunnaya R, Röder B. Sensory experience during early sensitive periods shapes cross-modal temporal biases. eLife 2020; 9:e61238. PMID: 32840213. PMCID: PMC7476755. DOI: 10.7554/eLife.61238.
Abstract
Typical human perception features stable biases such as perceiving visual events as later than synchronous auditory events. The origin of such perceptual biases is unknown. To investigate the role of early sensory experience, we tested whether a congenital, transient loss of pattern vision, caused by bilateral dense cataracts, has sustained effects on audio-visual and tactile-visual temporal biases and resolution. Participants judged the temporal order of successively presented, spatially separated events within and across modalities. Individuals with reversed congenital cataracts showed a bias towards perceiving visual stimuli as occurring earlier than auditory (Expt. 1) and tactile (Expt. 2) stimuli. This finding stood in stark contrast to normally sighted controls and sight-recovery individuals who had developed cataracts later in childhood: both groups exhibited the typical bias of perceiving vision as delayed compared to audition. These findings provide strong evidence that cross-modal temporal biases depend on sensory experience during an early sensitive period.
Affiliation(s)
- Stephanie Badde
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany; Department of Psychology and Center for Neural Science, New York University, New York, United States
- Pia Ley
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Siddhart S Rajendran
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany; Child Sight Institute, Jasti V Ramanamma Children's Eye Care Center, LV Prasad Eye Institute, Hyderabad, India
- Idris Shareef
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany; Child Sight Institute, Jasti V Ramanamma Children's Eye Care Center, LV Prasad Eye Institute, Hyderabad, India
- Ramesh Kekunnaya
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany; Child Sight Institute, Jasti V Ramanamma Children's Eye Care Center, LV Prasad Eye Institute, Hyderabad, India
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
14. Badde S, Myers CF, Yuval-Greenberg S, Carrasco M. Oculomotor freezing reflects tactile temporal expectation and aids tactile perception. Nat Commun 2020; 11:3341. PMID: 32620746. PMCID: PMC7335189. DOI: 10.1038/s41467-020-17160-1.
Abstract
The oculomotor system keeps the eyes steady in expectation of visual events. Here, recording microsaccades while people performed a tactile frequency discrimination task enabled us to test whether the oculomotor system shows an analogous preparatory response for unrelated tactile events. We manipulated the temporal predictability of tactile targets using tactile cues, which preceded the target by either constant (high predictability) or variable (low predictability) time intervals. We find that microsaccades are inhibited prior to tactile targets and more so for constant than variable intervals, revealing a tight crossmodal link between tactile temporal expectation and oculomotor action. These findings portray oculomotor freezing as a marker of crossmodal temporal expectation. Moreover, microsaccades occurring around the tactile target presentation are associated with reduced task performance, suggesting that oculomotor freezing mitigates potential detrimental, concomitant effects of microsaccades and revealing a crossmodal coupling between tactile perception and oculomotor action.
Affiliation(s)
- Stephanie Badde
- Department of Psychology, New York University, 6 Washington Place, New York, NY 10003, USA
- Center for Neural Science, New York University, 6 Washington Place, New York, NY 10003, USA
- Caroline F Myers
- Department of Psychology, New York University, 6 Washington Place, New York, NY 10003, USA
- Shlomit Yuval-Greenberg
- School of Psychological Sciences, Tel-Aviv University, Ramat Aviv, 6997801 Tel Aviv-Yafo, Israel
- Sagol School of Neuroscience, Tel-Aviv University, Ramat Aviv, 6997801 Tel Aviv-Yafo, Israel
- Marisa Carrasco
- Department of Psychology, New York University, 6 Washington Place, New York, NY 10003, USA
- Center for Neural Science, New York University, 6 Washington Place, New York, NY 10003, USA
15. Badde S, Navarro KT, Landy MS. Modality-specific attention attenuates visual-tactile integration and recalibration effects by reducing prior expectations of a common source for vision and touch. Cognition 2020; 197:104170. PMID: 32036027. DOI: 10.1016/j.cognition.2019.104170.
Abstract
At any moment in time, streams of information reach the brain through the different senses. Given this wealth of noisy information, it is essential that we select information of relevance, a function fulfilled by attention, and infer its causal structure to eventually take advantage of redundancies across the senses. Yet, the role of selective attention during causal inference in cross-modal perception is unknown. We tested experimentally whether the distribution of attention across vision and touch enhances cross-modal spatial integration (visual-tactile ventriloquism effect, Expt. 1) and recalibration (visual-tactile ventriloquism aftereffect, Expt. 2) compared to modality-specific attention, and then used causal-inference modeling to isolate the mechanisms behind the attentional modulation. In both experiments, we found stronger effects of vision on touch under distributed than under modality-specific attention. Model comparison confirmed that participants used Bayes-optimal causal inference to localize visual and tactile stimuli presented as part of a visual-tactile stimulus pair, whereas simultaneously collected unity judgments, indicating whether the visual-tactile pair was perceived as spatially aligned, relied on a sub-optimal heuristic. The best-fitting model revealed that attention modulated sensory and cognitive components of causal inference. First, distributed attention led to an increase of sensory noise compared to selective attention toward one modality. Second, attending to both modalities strengthened the stimulus-independent expectation that the two signals belong together, the prior probability of a common source for vision and touch. Yet, only the increase in the expectation of vision and touch sharing a common source was able to explain the observed enhancement of visual-tactile integration and recalibration effects with distributed attention. In contrast, the change in sensory noise explained only a fraction of the observed enhancements, as its consequences vary with the overall level of noise and stimulus congruency. Increased sensory noise leads to enhanced integration effects for visual-tactile pairs with a large spatial discrepancy, but reduced integration effects for stimuli with a small or no cross-modal discrepancy. In sum, our study indicates a weak a priori association between visual and tactile spatial signals that can be strengthened by distributing attention across both modalities.
Affiliation(s)
- Stephanie Badde
- Department of Psychology and Center of Neural Science, New York University, 6 Washington Place, New York, NY, 10003, USA.
- Karen T Navarro
- Department of Psychology, University of Minnesota, 75 E River Rd., Minneapolis, MN, 55455, USA
- Michael S Landy
- Department of Psychology and Center of Neural Science, New York University, 6 Washington Place, New York, NY, 10003, USA

16
Hense M, Badde S, Köhne S, Dziobek I, Röder B. Visual and Proprioceptive Influences on Tactile Spatial Processing in Adults with Autism Spectrum Disorders. Autism Res 2019; 12:1745-1757. [PMID: 31507084 DOI: 10.1002/aur.2202] [Received: 04/30/2019] [Revised: 06/25/2019] [Accepted: 08/14/2019] [Indexed: 12/19/2022]
Abstract
Children with autism spectrum disorders (ASDs) often exhibit altered representations of the external world. Consistently, when localizing touch, children with ASDs were less influenced than their peers by changes of the stimulated limb's location in external space [Wada et al., Scientific Reports 2015, 4(1), 5985]. However, given the protracted development of an external-spatial dominance in tactile processing in typically developing children, this difference might reflect a developmental delay rather than a set suppression of external space in ASDs. Here, adults with ASDs and matched control participants completed (a) the tactile temporal order judgment (TOJ) task previously used to test external-spatial representation of touch in children with ASDs and (b) a tactile-visual cross-modal congruency (CC) task which assesses benefits of task-irrelevant visual stimuli on tactile localization in external space. In both experiments, participants localized tactile stimuli to the fingers of each hand, while holding their hands either crossed or uncrossed. Performance differences between hand postures reflect the influence of external-spatial codes. In both groups, tactile TOJ-performance markedly decreased when participants crossed their hands, and CC-effects were especially large if the visual stimulus was presented at the same side of external space as the task-relevant touch. The absence of group differences was statistically confirmed using Bayesian statistical modeling: adults with ASDs weighted external-spatial codes comparably to typically developed adults during tactile and visual-tactile spatio-temporal tasks. Thus, atypicalities in the spatial coding of touch for children with ASDs appear to reflect a developmental delay rather than a stable characteristic of ASD.
LAY SUMMARY: A touched limb's location can be described twofold, with respect to the body (right hand) or the external world (right side). Children and adolescents with autism spectrum disorder (ASD) reportedly rely less than their peers on the external world. Here, adults with and without ASDs completed two tactile localization tasks. Both groups relied to the same degree on external world locations. This opens the possibility that the tendency to relate touch to the external world is typical in individuals with ASDs but emerges with a delay.
Affiliation(s)
- Marlene Hense
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Stephanie Badde
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany; Department of Psychology, New York University, New York, New York
- Svenja Köhne
- Berlin School of Mind and Brain, Department of Psychology, Humboldt University Berlin, Berlin, Germany
- Isabel Dziobek
- Berlin School of Mind and Brain, Department of Psychology, Humboldt University Berlin, Berlin, Germany
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany

17
Abstract
Where we perceive a touch putatively depends on topographic maps that code the touch's location on the skin [1] as well as its position in external space [2-5]. However, neither somatotopic nor external-spatial representations can account for atypical tactile percepts in some neurological patients and amputees; referral of touch to an absent or anaesthetized hand after stimulation of a foot [6, 7] or the contralateral hand [8-10] challenges the role of topographic representations when attributing touch to the limbs. Here, we show that even healthy adults systematically misattribute touch to other limbs. Participants received two tactile stimuli, each to a different limb (hand or foot), and reported which of all four limbs had been stimulated first. Hands and feet were either uncrossed or crossed to dissociate body-based and external-spatial representations [11-14]. Remarkably, participants regularly attributed the first touch to a limb that had received neither of the two stimuli. The erroneously reported, non-stimulated limb typically matched the correct limb with respect to limb type or body side. Touch was misattributed to non-stimulated limbs of the other limb type and body side only if they were placed at the correct limb's canonical (default) side of space. The touch's actual location in external space was irrelevant. These errors replicated across several contexts, and modeling linked them to incoming sensory evidence rather than to decision strategies. The results highlight the importance of the touched body part's identity and canonical location but challenge the role of external-spatial tactile representations when attributing touch to a limb.
Affiliation(s)
- Stephanie Badde
- Department of Psychology and Center of Neural Sciences, New York University, 6 Washington Place, New York, NY 10003, USA; Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany.
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany
- Tobias Heed
- Biopsychology & Cognitive Neuroscience, Bielefeld University, Universitätsstrasse 25, 33615 Bielefeld, Germany; Center of Excellence Cognitive Interaction Technology (CITEC), Bielefeld University, Inspiration 1, 33619 Bielefeld, Germany

18
Abstract
The ability to react quickly to a threat is a key skill for survival. When perceived with awareness, threat-related emotional information, such as an angry or fearful face, not only has perceptual advantages but also guides rapid actions such as eye movements. Emotional information that is suppressed from awareness still confers perceptual and attentional benefits. However, it is unknown whether suppressed emotional information can directly guide actions, or whether emotional information has to enter awareness to do so. We suppressed emotional faces from awareness using continuous flash suppression and tracked eye gaze position. Under successful suppression, as indicated by objective and subjective measures, gaze moved towards fearful faces, but away from angry faces. Our findings reveal that: (1) threat-related emotional stimuli can guide eye movements in the absence of visual awareness; (2) threat-related emotional face information guides distinct oculomotor actions depending on the type of threat conveyed by the emotional expression.
Affiliation(s)
- Petra Vetter
- Department of Psychology, Center for Neural Science, New York University, New York, United States; Department of Psychology, Royal Holloway, University of London, Egham, United Kingdom
- Stephanie Badde
- Department of Psychology, Center for Neural Science, New York University, New York, United States
- Elizabeth A Phelps
- Department of Psychology, Center for Neural Science, New York University, New York, United States; Department of Psychology, Harvard University, Cambridge, United States
- Marisa Carrasco
- Department of Psychology, Center for Neural Science, New York University, New York, United States

19
Affiliation(s)
- Michael Landy
- Department of Psychology and Center for Neural Science, New York University
- Stephanie Badde
- Department of Psychology and Center for Neural Science, New York University

20
Vetter P, Badde S, Phelps E, Carrasco M. The eyes react to emotional faces in the absence of awareness. J Vis 2018. [DOI: 10.1167/18.10.613] [Indexed: 11/24/2022]
Affiliation(s)
- Petra Vetter
- Dept. of Psychology, Royal Holloway University of London; Dept. of Psychology & Center for Neural Science, New York University
- Stephanie Badde
- Dept. of Psychology & Center for Neural Science, New York University
- Elizabeth Phelps
- Dept. of Psychology & Center for Neural Science, New York University
- Marisa Carrasco
- Dept. of Psychology & Center for Neural Science, New York University

21
Affiliation(s)
- Stephanie Badde
- Department of Psychology and Center for Neural Science, New York University

22
Schubert JTW, Badde S, Röder B, Heed T. Task demands affect spatial reference frame weighting during tactile localization in sighted and congenitally blind adults. PLoS One 2017; 12:e0189067. [PMID: 29228023 PMCID: PMC5724835 DOI: 10.1371/journal.pone.0189067] [Received: 05/17/2017] [Accepted: 11/17/2017] [Indexed: 11/18/2022]
Abstract
Task demands modulate tactile localization in sighted humans, presumably through weight adjustments in the spatial integration of anatomical, skin-based, and external, posture-based information. In contrast, previous studies have suggested that congenitally blind humans, by default, refrain from automatic spatial integration and localize touch using only skin-based information. Here, sighted and congenitally blind participants localized tactile targets on the palm or back of one hand, while ignoring simultaneous tactile distractors at congruent or incongruent locations on the other hand. We probed the interplay of anatomical and external location codes for spatial congruency effects by varying hand posture: the palms either both faced down, or one faced down and one up. In the latter posture, externally congruent target and distractor locations were anatomically incongruent and vice versa. Target locations had to be reported either anatomically (“palm” or “back” of the hand), or externally (“up” or “down” in space). Under anatomical instructions, performance was more accurate for anatomically congruent than incongruent target-distractor pairs. In contrast, under external instructions, performance was more accurate for externally congruent than incongruent pairs. These modulations were evident in sighted and blind individuals. Notably, distractor effects were overall far smaller in blind than in sighted participants, despite comparable target-distractor identification performance. Thus, the absence of developmental vision seems to be associated with an increased ability to focus tactile attention towards a non-spatially defined target. Nevertheless, that blind individuals exhibited effects of hand posture and task instructions in their congruency effects suggests that, like the sighted, they automatically integrate anatomical and external information during tactile localization. Moreover, spatial integration in tactile processing is, thus, flexibly adapted by top-down information—here, task instruction—even in the absence of developmental vision.
Affiliation(s)
- Jonathan T. W. Schubert
- Biological Psychology and Neuropsychology, Faculty of Psychology and Human Movement Science, University of Hamburg, Hamburg, Germany
- Stephanie Badde
- Biological Psychology and Neuropsychology, Faculty of Psychology and Human Movement Science, University of Hamburg, Hamburg, Germany
- Department of Psychology, New York University, New York, United States of America
- Brigitte Röder
- Biological Psychology and Neuropsychology, Faculty of Psychology and Human Movement Science, University of Hamburg, Hamburg, Germany
- Tobias Heed
- Biological Psychology and Neuropsychology, Faculty of Psychology and Human Movement Science, University of Hamburg, Hamburg, Germany
- Biopsychology & Cognitive Neuroscience, Faculty of Psychology & Sports Science, Bielefeld University, Bielefeld, Germany
- Center of Excellence in Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany

23
Badde S, Oh H, Landy M. Effect of prior knowledge on visual localization of tactile stimulation. J Vis 2016. [DOI: 10.1167/16.12.1190] [Indexed: 11/24/2022]
24
Landy M, Yang A, Badde S. Integration of somatosensory and proprioceptive sensation in the localization of touch in visual space. J Vis 2016. [DOI: 10.1167/16.12.1191] [Indexed: 11/24/2022]
25
Biller A, Pflugmann I, Badde S, Diem R, Wildemann B, Nagel AM, Jordan J, Benkhedah N, Kleesiek J. Sodium MRI in Multiple Sclerosis is Compatible with Intracellular Sodium Accumulation and Inflammation-Induced Hyper-Cellularity of Acute Brain Lesions. Sci Rep 2016; 6:31269. [PMID: 27507776 PMCID: PMC4978993 DOI: 10.1038/srep31269] [Received: 12/01/2015] [Accepted: 07/18/2016] [Indexed: 12/25/2022]
Abstract
The cascade of inflammatory pathogenetic mechanisms in multiple sclerosis (MS) has no specific conventional MRI correlates. Clinicians therefore stipulate improved imaging specificity to define the pathological substrates of MS in vivo, including mapping of intracellular sodium accumulation. Based upon preclinical findings and results of previous sodium MRI studies in MS patients, we hypothesized that the fluid-attenuated sodium signal differs between acute and chronic lesions. We acquired brain sodium and proton MRI data from N = 29 MS patients; lesion type was defined by the presence or absence of contrast enhancement. N = 302 MS brain lesions were detected, and generalized linear mixed models were applied to predict lesion type based on sodium signals, thereby controlling for varying numbers of lesions among patients and confounding variables such as age and medication. Hierarchical model comparisons revealed that both sodium signals, average tissue (χ2(1) = 27.89, p < 0.001) and fluid-attenuated (χ2(1) = 5.76, p = 0.016), improved lesion-type classification. Sodium MRI signals were significantly elevated in acute compared to chronic lesions, compatible with intracellular sodium accumulation in acute MS lesions. If confirmed in further studies, sodium MRI could serve as a biomarker for the diagnostic assessment of MS, and as a readout parameter in clinical trials promoting attenuation of chronic inflammation.
Affiliation(s)
- Armin Biller
- Multi-Dimensional Medical Imaging Lab, Department of Neuroradiology, University of Heidelberg, 69120 Heidelberg, Germany; Department of Radiology, German Cancer Research Centre (DKFZ), 69120 Heidelberg, Germany
- Isabella Pflugmann
- Multi-Dimensional Medical Imaging Lab, Department of Neuroradiology, University of Heidelberg, 69120 Heidelberg, Germany
- Stephanie Badde
- Department of Biological Psychology and Neuropsychology, University of Hamburg, 20146 Hamburg, Germany; Department of Psychology, New York University, New York, NY 10003, USA
- Ricarda Diem
- Department of Neurology, University of Heidelberg, 69120 Heidelberg, Germany
- Brigitte Wildemann
- Molecular Neuroimmunology Group, Department of Neurology, University of Heidelberg, 69120 Heidelberg, Germany
- Armin M Nagel
- Division of Medical Physics in Radiology, German Cancer Research Centre (DKFZ), 69120 Heidelberg, Germany; Department of Diagnostic and Interventional Radiology, University Medical Centre Ulm, 89070 Ulm, Germany; Institute of Radiology, University Hospital Erlangen, 91054 Erlangen, Germany
- J Jordan
- Division of Medical Physics in Radiology, German Cancer Research Centre (DKFZ), 69120 Heidelberg, Germany; Institute of Radiology, University Hospital Erlangen, 91054 Erlangen, Germany
- Nadia Benkhedah
- Division of Medical Physics in Radiology, German Cancer Research Centre (DKFZ), 69120 Heidelberg, Germany
- Jens Kleesiek
- Multi-Dimensional Medical Imaging Lab, Department of Neuroradiology, University of Heidelberg, 69120 Heidelberg, Germany; Multidimensional Image Processing Group, HCI/IWR, University of Heidelberg, 69120 Heidelberg, Germany

26
Abstract
Touch is bound to the skin – that is, to the boundaries of the body. Yet, the activity of neurons in primary somatosensory cortex just mirrors the spatial distribution of the sensors across the skin. To determine the location of a tactile stimulus on the body, the body's spatial layout must be considered. Moreover, to relate touch to the external world, body posture has to be evaluated. In this review, we argue that posture is incorporated, by default, for any tactile stimulus. However, the relevance of the external location and, thus, its expression in behaviour, depends on various sensory and cognitive factors. Together, these factors imply that an external representation of touch dominates over the skin-based, anatomical one when our focus is on the world rather than on our own body. We conclude that touch localization is a reconstructive process that is adjusted to the context while maintaining all available spatial information.
Affiliation(s)
- Stephanie Badde
- Department of Psychology, New York University, New York, NY, USA
- Tobias Heed
- Faculty of Psychology and Human Movement Science, University of Hamburg, Hamburg, Germany

27
Biller A, Badde S, Nagel A, Neumann JO, Wick W, Hertenstein A, Bendszus M, Sahm F, Benkhedah N, Kleesiek J. Improved Brain Tumor Classification by Sodium MR Imaging: Prediction of IDH Mutation Status and Tumor Progression. AJNR Am J Neuroradiol 2015; 37:66-73. [PMID: 26494691 DOI: 10.3174/ajnr.a4493] [Received: 03/25/2015] [Accepted: 06/09/2015] [Indexed: 11/07/2022]
Abstract
BACKGROUND AND PURPOSE: MR imaging in neuro-oncology is challenging due to inherent ambiguities in proton signal behavior. Sodium-MR imaging may substantially contribute to the characterization of tumors because it reflects the functional status of the sodium-potassium pump and sodium channels. MATERIALS AND METHODS: Sodium-MR imaging data of patients with treatment-naïve glioma WHO grades I-IV (n = 34; mean age, 51.29 ± 17.77 years) were acquired by using a 7T MR system. For acquisition of sodium-MR images, we applied density-adapted 3D radial projection reconstruction pulse sequences. Proton-MR imaging data were acquired by using a 3T whole-body system. RESULTS: We demonstrated that the initial sodium signal of a treatment-naïve brain tumor is a significant predictor of isocitrate dehydrogenase (IDH) mutation status (P < .001). Moreover, independent of this correlation, the Cox proportional hazards model confirmed the sodium signal of treatment-naïve brain tumors as a predictor of progression (P = .003). Compared with the molecular signature of IDH mutation status, information criteria of model comparison revealed that the sodium signal is even superior to IDH in progression prediction. In addition, sodium-MR imaging provides a new approach to noninvasive tumor classification. The sodium signal of contrast-enhancing tumor portions facilitates differentiation among most glioma types (P < .001). CONCLUSIONS: The information provided by sodium-MR imaging may help to classify neoplasias at an early stage, to reduce invasive tissue characterization such as stereotactic biopsy specimens, and overall to promote improved and individualized patient management in neuro-oncology by novel imaging signatures of brain tumors.
Affiliation(s)
- A Biller
- Departments of Neuroradiology (A.B., M.B., J.K.) and Radiology (A.B., J.K.)
- S Badde
- Department of Biological Psychology and Neuropsychology (S.B.), University of Hamburg, Hamburg, Germany
- A Nagel
- Medical Physics in Radiology (A.N., N.B.), German Cancer Research Centre (DKFZ), Heidelberg, Germany
- W Wick
- Neuro-Oncology (W.W., A.H.)
- M Bendszus
- Departments of Neuroradiology (A.B., M.B., J.K.)
- N Benkhedah
- Medical Physics in Radiology (A.N., N.B.), German Cancer Research Centre (DKFZ), Heidelberg, Germany
- J Kleesiek
- Departments of Neuroradiology (A.B., M.B., J.K.) and Radiology (A.B., J.K.); Multidimensional Image Processing Group (J.K.), HCI/IWR, University of Heidelberg, Heidelberg, Germany

28
Abstract
To estimate the location of a tactile stimulus, the brain seems to integrate different types of spatial information such as skin-based, anatomical coordinates and external, spatiotopic coordinates. The aim of the present study was to test whether the use of these coordinates is fixed, or whether they are weighted according to the task context. Participants made judgments about two tactile stimuli with different vibration characteristics, one applied to each hand. First, they always performed temporal order judgments (TOJ) of the tactile stimuli with respect to the stimulated hands that were either crossed or uncrossed. The resulting crossing effect, that is, impaired performance in crossed compared to uncrossed conditions, was used as a measure of reference frame weighting and was compared across conditions. Second, in dual judgment conditions participants subsequently made judgments about the stimulus vibration characteristics, either with respect to spatial location or with respect to temporal order. Responses in the spatial secondary task either accented anatomical (Experiment 1) or external (Experiment 2) coding. A TOJ crossing effect emerged in all conditions, and secondary tasks did not affect primary task performance in the uncrossed posture. Yet, the spatial secondary task resulted in improved crossed-hands performance in the primary task, but only if the secondary judgment stressed the anatomical reference frame (Experiment 1), rather than the external reference frame (Experiment 2). Like the anatomically coded spatial secondary task, the temporal secondary task improved crossed-hands performance in the primary task. The differential influence of the varying secondary tasks implies that integration weights assigned to the anatomical and external reference frames are not fixed. Rather, they are flexibly adjusted to the context, presumably through top-down modulation.
Affiliation(s)
- Stephanie Badde
- Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany.
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany
- Tobias Heed
- Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany

29
Röder B, Heed T, Badde S. Development of the spatial coding of touch: ability vs. automaticity. Dev Sci 2014; 17:944-5. [DOI: 10.1111/desc.12186] [Received: 01/17/2014] [Accepted: 02/13/2014] [Indexed: 11/29/2022]
Affiliation(s)
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Germany
- Tobias Heed
- Biological Psychology and Neuropsychology, University of Hamburg, Germany
- Stephanie Badde
- Biological Psychology and Neuropsychology, University of Hamburg, Germany

30
Abstract
Touch location can be specified in different anatomical and external reference frames. Temporal order judgments (TOJs) in touch are known to be sensitive to conflict between reference frames. To establish which coordinates are involved in localizing touch to a finger, participants performed TOJs on tactile stimuli to 2 out of 4 possible fingers. We induced conflict between hand- and finger-related reference frames, as well as between anatomical and external spatial coding, by selectively crossing 2 fingers. TOJ performance was impaired when both stimuli were applied to crossed fingers, indicating conflict between anatomical and external finger coordinates. In addition, TOJs were impaired when stimuli were mapped to the same hand based on either anatomical or external spatial codes. Accordingly, we observed a benefit rather than impairment with finger crossing when both stimuli were applied to 1 hand. Complementarily, participants systematically mislocalized touch to nonstimulated fingers of the targeted hand. The results indicate that touch localization for the fingers involves integration of several sources of spatial information: the anatomical location of the touched finger, its position in external space, the stimulated hand, and the hand to which the touch is (re)mapped in external space.
31

32
Abstract
We examined adaptation to frequent conflict in a flanker task using event-related potentials (ERPs). A prominent model of cognitive control suggests the fronto-central N2 as an indicator of conflict monitoring. Based on this model, we predicted (1) an increased N2 amplitude for incompatible compared to compatible stimuli and (2) that this difference in N2 amplitude would be less pronounced under conditions of frequent conflict (high cognitive control). In this model, adaptation to frequent conflict is implemented as modulation of early visual processing. Traditionally, variations in processing selectivity in the flanker task have been related to a zoom lens model of visual attention. Therefore, we further predicted (3) effects of conflict frequency on early visual ERP components, and (4) generalization of conflict adaptation due to increased conflict frequency in the flanker task to other visuospatial tasks, intermixed within flanker task trials. Frequent conflict was associated with reduced flanker interference in response times (RTs) and error rate. Consistent with the literature, amplitude of the fronto-central N2 was larger and latency of the central P3 longer for incompatible stimuli. Both effects were smaller when conflict was frequent, supporting the notion of the fronto-central N2 as an indicator of conflict monitoring. Neither amplitude nor latency of the posterior P1, an index of early visual processing, was modulated by conflict frequency. Additionally, conflict frequency in the flanker task did not affect the pattern of RTs in a probe task. In sum, our results suggest that conflict adaptation operates in a task-specific manner and does not necessarily alter early information processing, that is, the spatial focus of visual attention.
Affiliation(s)
- Sascha Purmann
- Departments of Psychology and Neurology, Ruprecht Karls University, Heidelberg, Germany
- Mike Wendt
- Helmut Schmidt University, Hamburg, Germany
- University of Hamburg, Germany