1
Kang KYL, Rosenkranz R, Altinsoy ME, Li SC. Cortical processes of multisensory plausibility modulation of vibrotactile perception in virtual environments in middle-aged and older adults. Sci Rep 2024; 14:13366. [PMID: 38862559] [PMCID: PMC11166973] [DOI: 10.1038/s41598-024-64054-z]
Abstract
Digital technologies, such as virtual or augmented reality, can potentially support neurocognitive functions in aging populations worldwide and complement existing intervention methods. However, aging-related declines in the frontal-parietal network and in dopaminergic modulation, which progress gradually across the later periods of the adult lifespan, may affect the processing of multisensory congruence and expectancy-based contextual plausibility. We assessed hemodynamic brain responses while middle-aged and older adults experienced car-riding virtual-reality scenarios in which the plausibility of vibrotactile stimulation was manipulated by delivering stimulus intensities that were either congruent or incongruent with the digitalized audio-visual context of the respective scenario. As in previous findings in young adults, highly plausible vibrotactile stimulation conforming to contextual expectations elicited higher brain hemodynamic responses in middle-aged and older adults; however, this effect was limited to virtual scenarios with extreme expectancy violations. Moreover, individual differences in plausibility-related frontal activity did not correlate with plausibility-violation costs in the sensorimotor cortex, indicating less systematic frontal context-based sensory filtering at older ages. These findings have practical implications for advancing digital technologies to support aging societies.
Affiliation(s)
- Kathleen Y L Kang
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, Dresden, Germany.
- Faculty of Psychology, Technische Universität Dresden, Zellerscher Weg 17 Room A232/233, 01069, Dresden, Germany.
- School of Psychology and Vision Sciences, University of Leicester, Leicester, UK.
- Robert Rosenkranz
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, Dresden, Germany
- Faculty of Electrical and Computer Engineering, Technische Universität Dresden, Dresden, Germany
- Mehmet Ercan Altinsoy
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, Dresden, Germany
- Faculty of Electrical and Computer Engineering, Technische Universität Dresden, Dresden, Germany
- Shu-Chen Li
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, Dresden, Germany.
- Faculty of Psychology, Technische Universität Dresden, Zellerscher Weg 17 Room A232/233, 01069, Dresden, Germany.
2
Chen L. Synesthetic Correspondence: An Overview. Adv Exp Med Biol 2024; 1437:101-119. [PMID: 38270856] [DOI: 10.1007/978-981-99-7611-9_7]
Abstract
Intramodal and cross-modal perceptual grouping, based on the spatial proximity and temporal closeness between multiple sensory stimuli, serves as an operational principle for building a coherent and meaningful representation of a multisensory event or object. To implement and investigate cross-modal perceptual grouping, researchers have employed well-established paradigms such as spatial/temporal ventriloquism and cross-modal dynamic capture, and have revealed both conditional constraints and functional facilitations among various correspondences of sensory properties, supported by behavioral evidence, computational frameworks, and brain oscillation patterns. Notably, synesthetic correspondence, as a special type of cross-modal correspondence, can shape the efficiency and effect size of cross-modal interaction. For example, factors such as pitch and loudness in the auditory dimension, together with size and brightness in the visual dimension, can modulate the strength of cross-modal temporal capture. This review summarizes the empirical behavioral findings, as well as the psychophysical and neurophysiological evidence, that address cross-modal perceptual grouping and synesthetic correspondence. Finally, it discusses potential applications (such as artificial synesthesia devices), how synesthetic correspondence interfaces with semantics (sensory linguistics), and promising research questions in this field.
Affiliation(s)
- Lihan Chen
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China.
- Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China.
- Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, China.
- National Key Laboratory of General Artificial Intelligence, Peking University, Beijing, China.
- National Engineering Laboratory for Big Data Analysis and Applications, Peking University, Beijing, China.
3
Wen W, Charles L, Haggard P. Metacognition and sense of agency. Cognition 2023; 241:105622. [PMID: 37716313] [DOI: 10.1016/j.cognition.2023.105622]
Abstract
Intelligent agents need to understand how they can, and cannot, change the world in order to make rational decisions about forthcoming actions and to adapt to their current environment. Previous research on the sense of agency, based largely on subjective ratings, failed to dissociate the sensitivity of the sense of agency (i.e., the extent to which an individual's sense of agency tracks actual instrumental control over external events) from judgment criteria (i.e., the extent to which individuals self-attribute agency independent of their actual influence over external events). Furthermore, few studies have examined whether individuals have metacognitive access to the internal processes underlying the sense of agency. We developed a novel two-alternative forced-choice (2AFC) control detection task, in which participants identified which of two visual objects was more strongly controlled by their voluntary movement. The actual level of control over the target object was manipulated by adjusting the proportion of its motion driven by the participant's movement, relative to the proportion driven by a pre-recorded movement of another agent, using a staircase to hold 2AFC control detection accuracy at 70%. Participants identified which of the two visual objects they controlled and also made a binary confidence judgment about their control detection judgment. We calculated a bias-free measure of first-order sensitivity (d′) for control detection at any given level of the participant's own movement. The proportion of pre-recorded movement determined by the staircase could then be used as an index of control detection ability. We identified two distinct processes underlying first-order detection of control: one based on instantaneous sensory predictions for the current movement, and one based on detection of a regular motor-visual relation across a series of movements.
Further, we found large individual differences across 40 participants in metacognitive sensitivity (meta-d′), even though first-order sensitivity of control detection was well controlled. Using structural equation modelling (SEM), we showed that metacognition was negatively correlated with the predictive-process component of control detection. This result is inconsistent with previous hypotheses that detection of control relies on metacognitive monitoring of a predictive circuit. Instead, it suggests that the predictive mechanisms that compute the sense of agency may operate unconsciously.
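As a rough illustration of the signal-detection quantity this abstract invokes, first-order d′ computed from hit and false-alarm rates, here is a minimal Python sketch; the 70% level mirrors the staircase target, but the function is standard signal detection theory, not the authors' code:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, eps=1e-4):
    """First-order sensitivity: z(hit rate) - z(false-alarm rate).
    Rates are clipped away from 0 and 1 to keep the z-transform finite."""
    z = NormalDist().inv_cdf
    hit_rate = min(max(hit_rate, eps), 1 - eps)
    fa_rate = min(max(fa_rate, eps), 1 - eps)
    return z(hit_rate) - z(fa_rate)

# At the staircase's 70%-correct target (hits 0.70, false alarms 0.30):
print(round(d_prime(0.70, 0.30), 2))  # → 1.05
```

Holding first-order accuracy at a fixed level in this way is what lets individual differences in metacognitive sensitivity (meta-d′) be compared across participants.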
Affiliation(s)
- Wen Wen
- Department of Psychology, Rikkyo University, 1-2-26 Kitano, Niiza, Saitama 352-8558, Japan; Department of Precision Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan.
- Lucie Charles
- Institute of Cognitive Neuroscience, University College London, Alexandra House, 17 Queen Square, London WC1N 3AZ, UK
- Patrick Haggard
- Institute of Cognitive Neuroscience, University College London, Alexandra House, 17 Queen Square, London WC1N 3AZ, UK
4
Walker EY, Pohl S, Denison RN, Barack DL, Lee J, Block N, Ma WJ, Meyniel F. Studying the neural representations of uncertainty. Nat Neurosci 2023; 26:1857-1867. [PMID: 37814025] [DOI: 10.1038/s41593-023-01444-y]
Abstract
The study of the brain's representations of uncertainty is a central topic in neuroscience. Unlike most quantities of which the neural representation is studied, uncertainty is a property of an observer's beliefs about the world, which poses specific methodological challenges. We analyze how the literature on the neural representations of uncertainty addresses those challenges and distinguish between 'code-driven' and 'correlational' approaches. Code-driven approaches make assumptions about the neural code for representing world states and the associated uncertainty. By contrast, correlational approaches search for relationships between uncertainty and neural activity without constraints on the neural representation of the world state that this uncertainty accompanies. To compare these two approaches, we apply several criteria for neural representations: sensitivity, specificity, invariance and functionality. Our analysis reveals that the two approaches lead to different but complementary findings, shaping new research questions and guiding future experiments.
Affiliation(s)
- Edgar Y Walker
- Department of Physiology and Biophysics, Computational Neuroscience Center, University of Washington, Seattle, WA, USA
- Stephan Pohl
- Department of Philosophy, New York University, New York, NY, USA
- Rachel N. Denison
- Department of Psychological & Brain Sciences, Boston University, Boston, MA, USA
- David L. Barack
- Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
- Department of Philosophy, University of Pennsylvania, Philadelphia, PA, USA
- Jennifer Lee
- Center for Neural Science, New York University, New York, NY, USA
- Ned Block
- Department of Philosophy, New York University, New York, NY, USA
- Wei Ji Ma
- Center for Neural Science, New York University, New York, NY, USA
- Department of Psychology, New York University, New York, NY, USA
- Florent Meyniel
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin center, Gif-sur-Yvette, France.
5
Bruns P, Röder B. Development and experience-dependence of multisensory spatial processing. Trends Cogn Sci 2023; 27:961-973. [PMID: 37208286] [DOI: 10.1016/j.tics.2023.04.012]
Abstract
Multisensory spatial processes are fundamental for efficient interaction with the world. They include not only the integration of spatial cues across sensory modalities, but also the adjustment or recalibration of spatial representations to changing cue reliabilities, crossmodal correspondences, and causal structures. Yet how multisensory spatial functions emerge during ontogeny is poorly understood. New results suggest that temporal synchrony and enhanced multisensory associative learning capabilities first guide causal inference and initiate early coarse multisensory integration capabilities. These multisensory percepts are crucial for the alignment of spatial maps across sensory systems, and are used to derive more stable biases for adult crossmodal recalibration. The refinement of multisensory spatial integration with increasing age is further promoted by the inclusion of higher-order knowledge.
Affiliation(s)
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany.
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
6
Fetsch CR, Noppeney U. How the brain controls decision making in a multisensory world. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220332. [PMID: 37545306] [PMCID: PMC10404917] [DOI: 10.1098/rstb.2022.0332]
Abstract
Sensory systems evolved to provide the organism with information about the environment to guide adaptive behaviour. Neuroscientists and psychologists have traditionally considered each sense independently, a legacy of Aristotle and a natural consequence of their distinct physical and anatomical bases. However, from the point of view of the organism, perception and sensorimotor behaviour are fundamentally multi-modal; after all, each modality provides complementary information about the same world. Classic studies revealed much about where and how sensory signals are combined to improve performance, but these tended to treat multisensory integration as a static, passive, bottom-up process. It has become increasingly clear how this approach falls short, ignoring the interplay between perception and action, the temporal dynamics of the decision process and the many ways by which the brain can exert top-down control of integration. The goal of this issue is to highlight recent advances on these higher order aspects of multisensory processing, which together constitute a mainstay of our understanding of complex, natural behaviour and its neural basis. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Christopher R. Fetsch
- Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Uta Noppeney
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 EN Nijmegen, Netherlands
7
Jerjian SJ, Harsch DR, Fetsch CR. Self-motion perception and sequential decision-making: where are we heading? Philos Trans R Soc Lond B Biol Sci 2023; 378:20220333. [PMID: 37545301] [PMCID: PMC10404932] [DOI: 10.1098/rstb.2022.0333]
Abstract
To navigate and guide adaptive behaviour in a dynamic environment, animals must accurately estimate their own motion relative to the external world. This is a fundamentally multisensory process involving integration of visual, vestibular and kinesthetic inputs. Ideal observer models, paired with careful neurophysiological investigation, helped to reveal how visual and vestibular signals are combined to support perception of linear self-motion direction, or heading. Recent work has extended these findings by emphasizing the dimension of time, both with regard to stimulus dynamics and the trade-off between speed and accuracy. Both time and certainty-i.e. the degree of confidence in a multisensory decision-are essential to the ecological goals of the system: terminating a decision process is necessary for timely action, and predicting one's accuracy is critical for making multiple decisions in a sequence, as in navigation. Here, we summarize a leading model for multisensory decision-making, then show how the model can be extended to study confidence in heading discrimination. Lastly, we preview ongoing efforts to bridge self-motion perception and navigation per se, including closed-loop virtual reality and active self-motion. The design of unconstrained, ethologically inspired tasks, accompanied by large-scale neural recordings, raises promise for a deeper understanding of spatial perception and decision-making in the behaving animal. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Steven J. Jerjian
- Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Devin R. Harsch
- Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Center for Neuroscience and Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Christopher R. Fetsch
- Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
8
Maynes R, Faulkner R, Callahan G, Mims CE, Ranjan S, Stalzer J, Odegaard B. Metacognitive awareness in the sound-induced flash illusion. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220347. [PMID: 37545312] [PMCID: PMC10404924] [DOI: 10.1098/rstb.2022.0347]
Abstract
Hundreds (if not thousands) of multisensory studies provide evidence that the human brain can integrate temporally and spatially discrepant stimuli from distinct modalities into a singular event. This process of multisensory integration is usually portrayed in the scientific literature as contributing to our integrated, coherent perceptual reality. However, missing from this account is an answer to a simple question: how do confidence judgements compare between multisensory information that is integrated across multiple sources, and multisensory information that comes from a single, congruent source in the environment? In this paper, we use the sound-induced flash illusion to investigate whether confidence judgements are similar across multisensory conditions in which the numbers of auditory and visual events are the same versus different. Results showed that congruent audiovisual stimuli produced higher confidence than incongruent audiovisual stimuli, even when the perceptual report was matched across the two conditions. Integrating these behavioural findings with recent neuroimaging and theoretical work, we discuss the role that prefrontal cortex may play in metacognition, multisensory causal inference and sensory source monitoring in general. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Randolph Maynes
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
- Ryan Faulkner
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
- Grace Callahan
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
- Callie E. Mims
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
- Psychology Department, University of South Alabama, Mobile, AL 36688, USA
- Saurabh Ranjan
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
- Justine Stalzer
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
- Brian Odegaard
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
9
Meijer D, Noppeney U. Metacognition in the audiovisual McGurk illusion: perceptual and causal confidence. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220348. [PMID: 37545307] [PMCID: PMC10404922] [DOI: 10.1098/rstb.2022.0348]
Abstract
Almost all decisions in everyday life rely on multiple sensory inputs that can come from common or independent causes. These situations invoke perceptual uncertainty about environmental properties and the signals' causal structure. Using the audiovisual McGurk illusion, this study investigated how observers formed perceptual and causal confidence judgements in information integration tasks under causal uncertainty. Observers were presented with spoken syllables, their corresponding articulatory lip movements or their congruent and McGurk combinations (e.g. auditory B/P with visual G/K). Observers reported their perceived auditory syllable, the causal structure and confidence for each judgement. Observers were more accurate and confident on congruent than unisensory trials. Their perceptual and causal confidence were tightly related over trials as predicted by the interactive nature of perceptual and causal inference. Further, observers assigned comparable perceptual and causal confidence to veridical 'G/K' percepts on audiovisual congruent trials and their causal and perceptual metamers on McGurk trials (i.e. illusory 'G/K' percepts). Thus, observers metacognitively evaluate the integrated audiovisual percept with limited access to the conflicting unisensory stimulus components on McGurk trials. Collectively, our results suggest that observers form meaningful perceptual and causal confidence judgements about multisensory scenes that are qualitatively consistent with principles of Bayesian causal inference. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- David Meijer
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK
- Acoustics Research Institute, Austrian Academy of Sciences, Wohllebengasse 12-14, 1040, Wien, Austria
- Uta Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Kapittelweg 29, 6525 EN, Nijmegen, The Netherlands
10
West RK, Harrison WJ, Matthews N, Mattingley JB, Sewell DK. Modality independent or modality specific? Common computations underlie confidence judgements in visual and auditory decisions. PLoS Comput Biol 2023; 19:e1011245. [PMID: 37450502] [PMCID: PMC10426961] [DOI: 10.1371/journal.pcbi.1011245]
Abstract
The mechanisms that enable humans to evaluate their confidence across a range of different decisions remain poorly understood. To bridge this gap in understanding, we used computational modelling to investigate the processes that underlie confidence judgements for perceptual decisions and the extent to which these computations are the same in the visual and auditory modalities. Participants completed two versions of a categorisation task with visual or auditory stimuli and made confidence judgements about their category decisions. In each modality, we varied both evidence strength (i.e., the strength of the evidence for a particular category) and sensory uncertainty (i.e., the intensity of the sensory signal). We evaluated several classes of computational models which formalise the mapping of evidence strength and sensory uncertainty to confidence in different ways: 1) unscaled evidence strength models, 2) scaled evidence strength models, and 3) Bayesian models. Our model comparison results showed that across tasks and modalities, participants take evidence strength and sensory uncertainty into account in a way that is consistent with the scaled evidence strength class. Notably, the Bayesian class provided a relatively poor account of the data across modalities, particularly in the more complex categorisation task. Our findings suggest that a common process is used for evaluating confidence in perceptual decisions across domains, but that the parameter settings governing the process are tuned differently in each modality. Overall, our results highlight the impact of sensory uncertainty on confidence and the unity of metacognitive processing across sensory modalities.
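The three model classes compared in this abstract can be caricatured in a few lines. The sketch below is illustrative only, not the authors' parameterization: it assumes a two-category task with a category boundary at 0, Gaussian sensory noise of standard deviation `sigma`, and (for the Bayesian readout) equal-variance categories centred at ±1:

```python
import math

def conf_unscaled(evidence, sigma):
    """Unscaled evidence strength: confidence ignores sensory uncertainty."""
    return abs(evidence)

def conf_scaled(evidence, sigma):
    """Scaled evidence strength: evidence is normalized by uncertainty."""
    return abs(evidence) / sigma

def conf_bayes(evidence, sigma):
    """Bayesian confidence: posterior probability the choice is correct,
    assuming equal-variance Gaussian categories at +1 and -1."""
    return 1.0 / (1.0 + math.exp(-2.0 * abs(evidence) / sigma ** 2))

# Only the scaled and Bayesian readouts change when uncertainty doubles:
for f in (conf_unscaled, conf_scaled, conf_bayes):
    print(f.__name__, round(f(1.0, 0.5), 3), round(f(1.0, 1.0), 3))
```

The key empirical contrast is visible here: the unscaled model predicts confidence is blind to sensory uncertainty, whereas the scaled and Bayesian models predict it should drop as uncertainty grows, in different functional forms.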
Affiliation(s)
- Rebecca K. West
- School of Psychology, University of Queensland, Queensland, Australia
- William J. Harrison
- School of Psychology, University of Queensland, Queensland, Australia
- Queensland Brain Institute, University of Queensland, Queensland, Australia
- Natasha Matthews
- School of Psychology, University of Queensland, Queensland, Australia
- Jason B. Mattingley
- School of Psychology, University of Queensland, Queensland, Australia
- Queensland Brain Institute, University of Queensland, Queensland, Australia
- Canadian Institute for Advanced Research, Toronto, Canada
- David K. Sewell
- School of Psychology, University of Queensland, Queensland, Australia
11
Huo H, Liu X, Tang Z, Dong Y, Zhao D, Chen D, Tang M, Qiao X, Du X, Guo J, Wang J, Fan Y. Interhemispheric multisensory perception and Bayesian causal inference. iScience 2023; 26:106706. [PMID: 37250338] [PMCID: PMC10214730] [DOI: 10.1016/j.isci.2023.106706]
Abstract
In daily life, our brain needs to eliminate irrelevant signals and integrate relevant signals to facilitate natural interactions with the surroundings. Previous studies focused on paradigms without effects of hemispheric dominance and found that human observers process multisensory signals in a manner consistent with Bayesian causal inference (BCI). However, most human activities involve bilateral interaction and thus the processing of interhemispheric sensory signals. It remains unclear whether the BCI framework also fits such activities. Here, we presented a bilateral hand-matching task to probe the causal structure of interhemispheric sensory signals. In this task, participants were asked to match ipsilateral visual or proprioceptive cues with the contralateral hand. Our results suggest that interhemispheric causal inference is best captured by the BCI framework. Interhemispheric perceptual bias may alter the strategy models used to estimate the contralateral multisensory signals. These findings help to clarify how the brain processes uncertain information from interhemispheric sensory signals.
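The BCI computation referred to in this and several neighbouring abstracts has a standard closed form for two Gaussian-corrupted cues (the Körding et al.-style model). Below is a minimal sketch of the common-cause posterior; the variances and the zero-centred spatial prior are illustrative assumptions, not fitted parameters from any of these studies:

```python
import math

def normpdf(x, mu, var):
    """Gaussian density with mean mu and variance var."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def p_common(x_a, x_b, var_a, var_b, var_p, mu_p=0.0, prior_c1=0.5):
    """Posterior probability that two noisy cues share one cause (C = 1),
    given cue variances var_a, var_b and a N(mu_p, var_p) prior over sources."""
    # Likelihood of both cues under a single shared source
    denom = var_a * var_b + var_a * var_p + var_b * var_p
    quad = ((x_a - x_b) ** 2 * var_p
            + (x_a - mu_p) ** 2 * var_b
            + (x_b - mu_p) ** 2 * var_a)
    like_c1 = math.exp(-0.5 * quad / denom) / (2 * math.pi * math.sqrt(denom))
    # Likelihood under two independent sources
    like_c2 = normpdf(x_a, mu_p, var_a + var_p) * normpdf(x_b, mu_p, var_b + var_p)
    joint_c1 = like_c1 * prior_c1
    return joint_c1 / (joint_c1 + like_c2 * (1.0 - prior_c1))

# Coincident cues favor a common cause; widely separated cues do not:
print(round(p_common(0.0, 0.0, 1.0, 1.0, 100.0), 2))   # high
print(round(p_common(0.0, 10.0, 1.0, 1.0, 100.0), 4))  # near zero
```

An observer can then weight the integrated and segregated estimates by this posterior (model averaging), which is the behaviour such studies test against.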
Affiliation(s)
- Hongqiang Huo
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Xiaoyu Liu
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100083, China
- Zhili Tang
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Ying Dong
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Di Zhao
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Duo Chen
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Min Tang
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Xiaofeng Qiao
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Xin Du
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Jieyi Guo
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Jinghui Wang
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Yubo Fan
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- School of Medical Science and Engineering Medicine, Beihang University, Beijing 100083, China
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100083, China
12
Klever L, Beyvers MC, Fiehler K, Mamassian P, Billino J. Cross-modal metacognition: Visual and tactile confidence share a common scale. J Vis 2023; 23:3. [PMID: 37140913] [PMCID: PMC10166118] [DOI: 10.1167/jov.23.5.3]
Abstract
Humans can judge the quality of their perceptual decisions-an ability known as perceptual confidence. Previous work suggested that confidence can be evaluated on an abstract scale that can be sensory modality-independent or even domain-general. However, evidence is still scarce on whether confidence judgments can be directly made across visual and tactile decisions. Here, we investigated in a sample of 56 adults whether visual and tactile confidence share a common scale by measuring visual contrast and vibrotactile discrimination thresholds in a confidence-forced choice paradigm. Confidence judgments were made about the correctness of the perceptual decision between two trials involving either the same or different modalities. To estimate confidence efficiency, we compared discrimination thresholds obtained from all trials to those from trials judged to be relatively more confident. We found evidence for metaperception because higher confidence was associated with better perceptual performance in both modalities. Importantly, participants were able to judge their confidence across modalities without any costs in metaperceptual sensitivity and only minor changes in response times compared to unimodal confidence judgments. In addition, we were able to predict cross-modal confidence well from unimodal judgments. In conclusion, our findings show that perceptual confidence is computed on an abstract scale and that it can assess the quality of our decisions across sensory modalities.
Affiliation(s)
- Lena Klever
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
- Katja Fiehler
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
- Pascal Mamassian
- Laboratoire des Systèmes Perceptifs, Département d'études Cognitives, École Normale Supérieure, PSL University, Paris, France
- Jutta Billino
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
13
Michel M. Confidence in consciousness research. WILEY INTERDISCIPLINARY REVIEWS. COGNITIVE SCIENCE 2023; 14:e1628. [PMID: 36205300 DOI: 10.1002/wcs.1628] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Revised: 09/14/2022] [Accepted: 09/21/2022] [Indexed: 11/06/2022]
Abstract
To study (un)conscious perception and test hypotheses about consciousness, researchers need procedures for determining whether subjects consciously perceive stimuli or not. This article is an introduction to a family of procedures called "confidence-based procedures," which consist in interpreting metacognitive indicators as indicators of consciousness. I assess the validity and accuracy of these procedures, and answer a series of common objections to their use in consciousness research. I conclude that confidence-based procedures are valid for assessing consciousness, and, in most cases, accurate enough for our practical and scientific purposes. This article is categorized under: Psychology > Perception and Psychophysics; Philosophy > Consciousness.
Affiliation(s)
- Matthias Michel
- Center for Mind, Brain and Consciousness, New York University, New York, New York, USA
14
Sun C, Liu X, Jiang Q, Ye X, Zhu X, Li RW. Emerging electrolyte-gated transistors for neuromorphic perception. SCIENCE AND TECHNOLOGY OF ADVANCED MATERIALS 2023; 24:2162325. [PMID: 36684849 PMCID: PMC9848240 DOI: 10.1080/14686996.2022.2162325] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Revised: 12/18/2022] [Accepted: 12/21/2022] [Indexed: 05/31/2023]
Abstract
With the rapid development of intelligent robotics, the Internet of Things, and smart sensor technologies, great enthusiasm has been devoted to developing next-generation intelligent systems that emulate the advanced perception functions of humans. Neuromorphic devices, capable of emulating the learning, memory, analysis, and recognition functions of biological neural systems, offer solutions for intelligently processing sensory information. As one of the most important classes of neuromorphic devices, electrolyte-gated transistors (EGTs) have shown great promise in implementing various vital neural functions and good compatibility with sensors. This review introduces the materials, operating principles, and performance of EGTs, followed by a discussion of recent progress in EGT-based synapse and neuron emulation. The integration of EGTs with sensors to faithfully emulate diverse human perception functions, such as tactile and visual perception, is then discussed, along with the challenges facing further development of EGTs.
Affiliation(s)
- Cui Sun
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Xuerong Liu
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Qian Jiang
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- College of Materials Sciences and Opto-Electronic Technology, University of Chinese Academy of Sciences, Beijing, China
- Xiaoyu Ye
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- College of Materials Sciences and Opto-Electronic Technology, University of Chinese Academy of Sciences, Beijing, China
- Xiaojian Zhu
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- College of Materials Sciences and Opto-Electronic Technology, University of Chinese Academy of Sciences, Beijing, China
- Run-Wei Li
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- College of Materials Sciences and Opto-Electronic Technology, University of Chinese Academy of Sciences, Beijing, China
15
Congruence-based contextual plausibility modulates cortical activity during vibrotactile perception in virtual multisensory environments. Commun Biol 2022; 5:1360. [PMID: 36509971 PMCID: PMC9744907 DOI: 10.1038/s42003-022-04318-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2022] [Accepted: 11/29/2022] [Indexed: 12/14/2022] Open
Abstract
How congruence cues and congruence-based expectations may together shape perception in virtual reality (VR) still needs to be unravelled. We linked the concept of plausibility used in VR research with congruence-based modulation by assessing brain responses while participants underwent vehicle-riding experiences in VR scenarios. Perceptual plausibility was manipulated through sensory congruence, with multisensory stimulations conforming to common expectations of road scenes being plausible. We hypothesized that plausible scenarios would elicit greater cortical responses. The results showed that: (i) vibrotactile stimulations at expected intensities, given the embedded audio-visual information, engaged greater cortical activities in frontal and sensorimotor regions; (ii) weaker plausible stimulations resulted in greater responses in the sensorimotor cortex than stronger but implausible stimulations; (iii) frontal activities under plausible scenarios negatively correlated with plausibility violation costs in the sensorimotor cortex. These results potentially indicate frontal regulation of sensory processing and extend previous evidence of contextual modulation to the tactile sense.
16
Tajadura-Jiménez A, Crucianelli L, Zheng R, Cheng C, Ley-Flores J, Borda-Más M, Bianchi-Berthouze N, Fotopoulou A. Body weight distortions in an auditory-driven body illusion in subclinical and clinical eating disorders. Sci Rep 2022; 12:20031. [PMID: 36414765 PMCID: PMC9681758 DOI: 10.1038/s41598-022-24452-7] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2022] [Accepted: 11/15/2022] [Indexed: 11/23/2022] Open
Abstract
Previous studies suggest a stronger influence of visual signals on body image in individuals with eating disorders (EDs) than in healthy controls; however, the influence of other exteroceptive sensory signals remains unclear. Here we used an illusion relying on auditory (exteroceptive) signals to manipulate body size/weight perceptions and investigated whether the mechanisms integrating sensory signals into body image are altered in subclinical and clinical EDs. Participants' footstep sounds were altered to seem produced by lighter or heavier bodies. Across two experiments, we tested healthy women assigned to three groups based on self-reported Symptomatology of EDs (SED), and women with Anorexia Nervosa (AN), and used self-report, body-visualization, and behavioural (gait) measures. As with visual bodily illusions, we predicted a stronger influence of auditory signals, leading to an enhanced body-weight illusion, in people with High-SED and AN. Unexpectedly, High-SED and AN participants displayed a gait typical of heavier bodies and visualized their bodies as widest/heaviest in the 'light' footsteps condition. In contrast, Low-SED participants showed these patterns in the 'heavy' footsteps condition. Self-reports did not show group differences. The results of this pilot study suggest disturbances in sensory integration mechanisms, rather than purely visually-driven body distortions, in subclinical/clinical EDs, opening opportunities for the development of novel diagnostic/therapeutic tools.
Affiliation(s)
- Ana Tajadura-Jiménez
- DEI Interactive Systems Group, Department of Computer Science and Engineering, Universidad Carlos III de Madrid, Av. de la Universidad 30, 28911 Leganés, Madrid, Spain
- UCL Interaction Centre (UCLIC), University College London, London, UK
- Laura Crucianelli
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Department of Clinical, Educational and Health Psychology, University College London, London, UK
- Rebecca Zheng
- UCL Interaction Centre (UCLIC), University College London, London, UK
- Chloe Cheng
- UCL Interaction Centre (UCLIC), University College London, London, UK
- Judith Ley-Flores
- DEI Interactive Systems Group, Department of Computer Science and Engineering, Universidad Carlos III de Madrid, Av. de la Universidad 30, 28911 Leganés, Madrid, Spain
- Mercedes Borda-Más
- Departamento de Personalidad, Evaluación y Tratamiento Psicológico, Universidad de Sevilla, Seville, Spain
- Nadia Bianchi-Berthouze
- UCL Interaction Centre (UCLIC), University College London, London, UK
- Aikaterini Fotopoulou
- Department of Clinical, Educational and Health Psychology, University College London, London, UK
17
Rahnev D, Balsdon T, Charles L, de Gardelle V, Denison R, Desender K, Faivre N, Filevich E, Fleming SM, Jehee J, Lau H, Lee ALF, Locke SM, Mamassian P, Odegaard B, Peters M, Reyes G, Rouault M, Sackur J, Samaha J, Sergent C, Sherman MT, Siedlecka M, Soto D, Vlassova A, Zylberberg A. Consensus Goals in the Field of Visual Metacognition. PERSPECTIVES ON PSYCHOLOGICAL SCIENCE 2022; 17:1746-1765. [PMID: 35839099 PMCID: PMC9633335 DOI: 10.1177/17456916221075615] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Despite the tangible progress in psychological and cognitive sciences over the last several years, these disciplines still trail other more mature sciences in identifying the most important questions that need to be solved. Reaching such consensus could lead to greater synergy across different laboratories, faster progress, and increased focus on solving important problems rather than pursuing isolated, niche efforts. Here, 26 researchers from the field of visual metacognition reached consensus on four long-term and two medium-term common goals. We describe the process that we followed, the goals themselves, and our plans for accomplishing these goals. If this effort proves successful within the next few years, such consensus building around common goals could be adopted more widely in psychological science.
Affiliation(s)
- Tarryn Balsdon
- Laboratoire des systèmes perceptifs, Département d’études cognitives, École normale supérieure, PSL University, CNRS, Paris, France
- Lucie Charles
- Institute of Cognitive Neuroscience, University College London, UK
- Rachel Denison
- Department of Psychological and Brain Sciences, Boston University, USA
- Nathan Faivre
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, 38000 Grenoble, France
- Elisa Filevich
- Bernstein Center for Computational Neuroscience Berlin, Philippstraße 13 Haus 6, 10115 Berlin, Germany
- Stephen M. Fleming
- Department of Experimental Psychology and Wellcome Centre for Human Neuroimaging, University College London, UK
- Alan L. F. Lee
- Department of Applied Psychology and Wofoo Joseph Lee Consulting and Counselling Psychology Research Centre, Lingnan University, Hong Kong
- Shannon M. Locke
- Laboratoire des systèmes perceptifs, Département d’études cognitives, École normale supérieure, PSL University, CNRS, Paris, France
- Pascal Mamassian
- Laboratoire des systèmes perceptifs, Département d’études cognitives, École normale supérieure, PSL University, CNRS, Paris, France
- Brian Odegaard
- Department of Psychology, University of Florida, Gainesville, FL, USA
- Megan Peters
- Department of Cognitive Sciences, University of California Irvine, Irvine, CA, USA
- Gabriel Reyes
- Facultad de Psicología, Universidad del Desarrollo, Santiago, Chile
- Marion Rouault
- Département d’Études Cognitives, École Normale Supérieure, Université Paris Sciences & Lettres (PSL University), Paris, France
- Jerome Sackur
- Département d’Études Cognitives, École Normale Supérieure, Université Paris Sciences & Lettres (PSL University), Paris, France
- Jason Samaha
- Department of Psychology, University of California, Santa Cruz, USA
- Claire Sergent
- Université de Paris, INCC UMR 8002, 75006 Paris, France
- Maxine T. Sherman
- Sackler Centre for Consciousness Science, University of Sussex, Brighton, UK
- Marta Siedlecka
- Consciousness Lab, Institute of Psychology, Jagiellonian University, Kraków, Poland
- David Soto
- Basque Center on Cognition, Brain and Language, San Sebastián, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- Alexandra Vlassova
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Ariel Zylberberg
- Department of Brain and Cognitive Sciences, University of Rochester, USA
18
Deroy O, Rappe S. The clear and not so clear signatures of perceptual reality in the Bayesian brain. Conscious Cogn 2022; 103:103379. [PMID: 35868197 DOI: 10.1016/j.concog.2022.103379] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2021] [Revised: 04/19/2022] [Accepted: 05/23/2022] [Indexed: 11/17/2022]
Abstract
In a Bayesian brain, every perceptual decision will take into account internal priors as well as new incoming evidence. A reality-monitoring system, eventually providing the agent with a subjective sense of reality, prevents confusion about whether an experience is perceptual or imagined. Yet not all the confusions we experience mean that we wonder whether we may be imagining: some confused experiences feel clearly perceptual but still do not feel right. What happens in such confused perceptions, and can the Bayesian brain explain this kind of confusion? In this paper, we offer a characterisation of perceptual confusion and argue that it requires our subjective sense of reality to be a composite of several subjective markers, including a categorical one that clearly identifies an experience as perceptual and connects us to reality. Our composite account makes new predictions regarding the robustness, the non-linear development, and the possible breakdowns of the sense of reality in perception.
Affiliation(s)
- Ophelia Deroy
- Faculty of Philosophy, Ludwig Maximilian University, Munich, Germany; Munich Center for Neuroscience, Ludwig Maximilian University, Munich, Germany; Institute of Philosophy, School of Advanced Study, University of London, London, UK.
| | - Sofiia Rappe
- Graduate School in Neuroscience, Ludwig Maximilian University, Munich, Germany
19
Perceptual changes after learning of an arbitrary mapping between vision and hand movements. Sci Rep 2022; 12:11427. [PMID: 35794174 PMCID: PMC9259624 DOI: 10.1038/s41598-022-15579-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2022] [Accepted: 06/27/2022] [Indexed: 11/21/2022] Open
Abstract
The present study examined the perceptual consequences of learning arbitrary mappings between visual stimuli and hand movements. Participants moved a small cursor with their unseen hand twice to a large visual target object and then judged either the relative distance of the hand movements (Exp.1), or the relative number of dots that appeared in the two consecutive target objects (Exp.2) using a two-alternative forced choice method. During a learning phase, the numbers of dots that appeared in the target object were correlated with the hand movement distance. In Exp.1, we observed that after the participants were trained to expect many dots with larger hand movements, they judged movements made to targets with many dots as being longer than the same movements made to targets with few dots. In Exp.2, another group of participants who received the same training judged the same number of dots as smaller when larger rather than smaller hand movements were executed. When many dots were paired with smaller hand movements during the learning phase of both experiments, no significant changes in the perception of movements and of visual stimuli were observed. These results suggest that changes in the perception of body states and of external objects can arise when certain body characteristics co-occur with certain characteristics of the environment. They also indicate that the (dis)integration of multimodal perceptual signals depends not only on the physical or statistical relation between these signals, but on which signal is currently attended.
20
Seow TXF, Rouault M, Gillan CM, Fleming SM. Reply to: Metacognition, Adaptation, and Mental Health. Biol Psychiatry 2022; 91:e33-e34. [PMID: 34961624 DOI: 10.1016/j.biopsych.2021.11.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Accepted: 11/03/2021] [Indexed: 11/26/2022]
Affiliation(s)
- Tricia X F Seow
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, United Kingdom; Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom.
- Marion Rouault
- Institut Jean Nicod, Département d'études cognitives, École normale supérieure, École des hautes études en sciences sociales, Centre National de la Recherche Scientifique, Paris Sciences et Lettres University, Paris, France; Laboratoire de neurosciences cognitives et computationnelles, Département d'études cognitives, École normale supérieure, Institut National de la Santé et de la Recherche Médicale, Paris Sciences et Lettres University, Paris, France
- Claire M Gillan
- School of Psychology, Trinity College Dublin, Dublin, Ireland; Global Brain Health Institute, Trinity College Dublin, Dublin, Ireland; Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- Stephen M Fleming
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, United Kingdom; Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom; Department of Experimental Psychology, University College London, London, United Kingdom
21
Foucault C, Meyniel F. Gated recurrence enables simple and accurate sequence prediction in stochastic, changing, and structured environments. eLife 2021; 10:71801. [PMID: 34854377 PMCID: PMC8735865 DOI: 10.7554/elife.71801] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Accepted: 12/01/2021] [Indexed: 11/13/2022] Open
Abstract
From decision making to perception to language, predicting what is coming next is crucial. It is also challenging in stochastic, changing, and structured environments; yet the brain makes accurate predictions in many situations. What computational architecture could enable this feat? Bayesian inference makes optimal predictions but is prohibitively difficult to compute. Here, we show that a specific recurrent neural network architecture enables simple and accurate solutions in several environments. This architecture relies on three mechanisms: gating, lateral connections, and recurrent weight training. Like the optimal solution and the human brain, such networks develop internal representations of their changing environment (including estimates of the environment’s latent variables and the precision of these estimates), leverage multiple levels of latent structure, and adapt their effective learning rate to changes without changing their connection weights. Being ubiquitous in the brain, gated recurrence could therefore serve as a generic building block to predict in real-life environments.
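The three mechanisms this abstract names (gating, lateral connections, recurrent weight training) can be illustrated with a minimal GRU-style update. This is a hedged, generic sketch with random placeholder weights, not the trained networks from the paper; the point is only how the update gate acts as an adaptive, per-step learning rate on the hidden state without any change to the connection weights.

```python
import numpy as np

# Minimal sketch of a gated recurrent update (GRU-style). The update gate z
# acts as an adaptive per-step "learning rate" on the hidden state; the U_*
# matrices are the lateral (recurrent) connections. Weights are random
# placeholders, not the paper's trained parameters.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    def __init__(self, n_in, n_hidden):
        self.Wz = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.Uz = rng.normal(0.0, 0.5, (n_hidden, n_hidden))
        self.Wh = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.Uh = rng.normal(0.0, 0.5, (n_hidden, n_hidden))

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)       # update gate
        h_cand = np.tanh(self.Wh @ x + self.Uh @ h)  # candidate state
        return (1.0 - z) * h + z * h_cand            # gated mixing

cell = GRUCell(n_in=1, n_hidden=4)
h = np.zeros(4)
for obs in [0, 1, 1, 0, 1]:  # a short binary observation sequence
    h = cell.step(np.array([float(obs)]), h)
print(h.shape)
```

Because the gate z is recomputed at every step from the current input and hidden state, the network can weight new observations more heavily after an apparent change point, which is the adaptive-learning-rate behaviour the abstract describes.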
Affiliation(s)
- Cédric Foucault
- INSERM, CEA, Université Paris-Saclay, Gif sur Yvette, France
22
Pálffy Z, Farkas K, Csukly G, Kéri S, Polner B. Cross-modal auditory priors drive the perception of bistable visual stimuli with reliable differences between individuals. Sci Rep 2021; 11:16943. [PMID: 34417496 PMCID: PMC8379237 DOI: 10.1038/s41598-021-96198-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2021] [Accepted: 05/27/2021] [Indexed: 11/17/2022] Open
Abstract
It is a widely held assumption that the brain performs perceptual inference by combining sensory information with prior expectations, weighted by their uncertainty. A distinction can be made between higher- and lower-level priors, which can be manipulated with associative learning and sensory priming, respectively. Here, we simultaneously investigate priming and the differential effect of auditory vs. visual associative cues on visual perception, and we also examine the reliability of individual differences. Healthy individuals (N = 29) performed a perceptual inference task twice with a one-week delay. They reported the perceived direction of motion of dot pairs, which were preceded by a probabilistic visuo-acoustic cue. In 30% of the trials, motion direction was ambiguous, and in half of these trials, the auditory versus the visual cue predicted opposing directions. Cue-stimulus contingency could change every 40 trials. On ambiguous trials where the visual and the auditory cue predicted conflicting directions of motion, participants made more decisions consistent with the prediction of the acoustic cue. Increased predictive processing under stimulus uncertainty was indicated by slower responses to ambiguous (vs. non-ambiguous) stimuli. Furthermore, priming effects were also observed in that perception of ambiguous stimuli was influenced by perceptual decisions on the previous ambiguous and unambiguous trials as well. Critically, behavioural effects had substantial inter-individual variability which showed high test-retest reliability (intraclass correlation coefficient (ICC) > 0.78). Overall, higher-level priors based on auditory (vs. visual) information had greater influence on visual perception, and lower-level priors were also in action. Importantly, we observed large and stable differences in various aspects of task performance. 
Computational modelling combined with neuroimaging could allow testing hypotheses regarding the potential mechanisms underlying these behavioural effects. The reliability of the behavioural differences implies that such perceptual inference tasks could be valuable tools in large-scale biomarker and neuroimaging studies.
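The inference principle this abstract invokes — priors and sensory evidence combined, weighted by their uncertainty — can be written down directly. The following is an illustrative sketch of precision-weighted (inverse-variance) cue combination for two Gaussian estimates; the numbers are made up for illustration, not parameters from the study.

```python
# Illustrative sketch of precision-weighted combination of a prior
# expectation and sensory evidence. Each estimate is weighted by its
# precision (inverse variance); numbers are made up, not the study's data.
def fuse(mu_prior, var_prior, mu_sensory, var_sensory):
    precision_prior = 1.0 / var_prior
    precision_sensory = 1.0 / var_sensory
    w_prior = precision_prior / (precision_prior + precision_sensory)
    mu_post = w_prior * mu_prior + (1.0 - w_prior) * mu_sensory
    var_post = 1.0 / (precision_prior + precision_sensory)
    return mu_post, var_post

# Ambiguous stimulus -> noisy sensory signal (large variance) -> the
# posterior is pulled toward the (e.g. auditory) prior.
mu, var = fuse(mu_prior=1.0, var_prior=0.5, mu_sensory=-1.0, var_sensory=4.0)
print(round(mu, 3), round(var, 3))  # → 0.778 0.444
```

Shrinking `var_sensory` reverses the weighting, matching the behavioural pattern above in which priors dominate mainly on ambiguous trials.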
Affiliation(s)
- Zsófia Pálffy
- Department of Cognitive Science, Budapest University of Technology and Economics, 1 Egry József utca, Building T, Floor 5, Budapest, 1111, Hungary.
| | - Kinga Farkas
- Department of Psychiatry and Psychotherapy, Semmelweis University, Budapest, Hungary
| | - Gábor Csukly
- Department of Psychiatry and Psychotherapy, Semmelweis University, Budapest, Hungary
| | - Szabolcs Kéri
- Department of Cognitive Science, Budapest University of Technology and Economics, 1 Egry József utca, Building T, Floor 5, Budapest, 1111, Hungary
- National Institute of Psychiatry and Addictions, Budapest, Hungary
| | - Bertalan Polner
- Department of Cognitive Science, Budapest University of Technology and Economics, 1 Egry József utca, Building T, Floor 5, Budapest, 1111, Hungary
| |
Collapse
|
23
|
Abstract
Using a video game platform, we examined how vision-based decision making was affected by a concurrent, potentially conflicting auditory stimulus. Electroencephalographic responses showed that by 150 milliseconds of stimulus onset, the brain had detected the conflict between visual and auditory stimuli. Systematically reducing the intertrial interval (ITI), which subjects described as stressful, undermined decision making. Subjects' arterial pulse variance decreased along with ITI, signaling increased parasympathetic influence on the heart. When successive trials required a shift in processing mode, short ITIs significantly boosted one trial's influence on the next, suggesting that stress reduces cognitive flexibility. Finally, our study demonstrates the heart's and the brain's important influence on decision making.
Affiliation(s)
- Yile Sun
- Volen Center for Complex Systems, Brandeis University, Waltham, Massachusetts, United States
| | - Robert Sekuler
- Volen Center for Complex Systems, Brandeis University, Waltham, Massachusetts, United States
24
Delong P, Noppeney U. Semantic and spatial congruency mould audiovisual integration depending on perceptual awareness. Sci Rep 2021; 11:10832. [PMID: 34035358 PMCID: PMC8149651 DOI: 10.1038/s41598-021-90183-w] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2020] [Accepted: 04/22/2021] [Indexed: 11/09/2022] Open
Abstract
Information integration is considered a hallmark of human consciousness. Recent research has challenged this tenet by showing multisensory interactions in the absence of awareness. This psychophysics study assessed the impact of spatial and semantic correspondences on audiovisual binding in the presence and absence of visual awareness by combining forward-backward masking with spatial ventriloquism. Observers were presented with object pictures and synchronous sounds that were spatially and/or semantically congruent or incongruent. On each trial observers located the sound, identified the picture and rated the picture's visibility. We observed a robust ventriloquist effect for subjectively visible and invisible pictures indicating that pictures that evade our perceptual awareness influence where we perceive sounds. Critically, semantic congruency enhanced these visual biases on perceived sound location only when the picture entered observers' awareness. Our results demonstrate that crossmodal influences operating from vision to audition and vice versa are interactively controlled by spatial and semantic congruency in the presence of awareness. However, when visual processing is disrupted by masking procedures audiovisual interactions no longer depend on semantic correspondences.
Affiliation(s)
- Patrycja Delong
- Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, UK.
- Uta Noppeney
- Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, UK; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
25
On the relevance of task instructions for the influence of action on perception. Atten Percept Psychophys 2021; 83:2625-2633. PMID: 33939156; PMCID: PMC8302516; DOI: 10.3758/s13414-021-02309-x.
Abstract
The present study explored how task instructions mediate the impact of action on perception. Participants saw a target object while performing finger movements. Then either the size of the target or the size of the adopted finger postures was judged. The target judgment was attracted by the adopted finger posture indicating sensory integration of body-related and visual signals. The magnitude of integration, however, depended on how the task was initially described. It was substantially larger when the experimental instructions indicated that finger movements and the target object relate to the same event than when they suggested that they are unrelated. This outcome highlights the role of causal inference processes in the emergence of action specific influences in perception.
26
Impact of proprioception on the perceived size and distance of external objects in a virtual action task. Psychon Bull Rev 2021; 28:1191-1201. PMID: 33782919; PMCID: PMC8367880; DOI: 10.3758/s13423-021-01915-y.
Abstract
Previous research has revealed changes in the perception of objects due to changes in object-oriented actions. In the present study, we varied the arm and finger postures in the context of a virtual reaching and grasping task and tested whether this manipulation can simultaneously affect the perceived size and distance of external objects. Participants manually controlled visual cursors, aiming at reaching and enclosing a distant target object, and judged the size and distance of this object. We observed that a visual-proprioceptive discrepancy introduced during the reaching part of the action simultaneously affected the judgments of target distance and of target size (Experiment 1). A related variation applied to the grasping part of the action affected the judgments of size, but not of distance of the target (Experiment 2). These results indicate that perceptual effects observed in the context of actions can arise directly through sensory integration of multimodal redundant signals and indirectly through perceptual constancy mechanisms.
27
Hilla Y, von Mankowski J, Föcker J, Sauseng P. Faster Visual Information Processing in Video Gamers Is Associated With EEG Alpha Amplitude Modulation. Front Psychol 2020; 11:599788. PMID: 33363498; PMCID: PMC7753097; DOI: 10.3389/fpsyg.2020.599788.
Abstract
Video gaming, specifically action video gaming, seems to improve a range of cognitive functions. The basis for these improvements may be attentional control in conjunction with reward-related learning to amplify the execution of goal-relevant actions while suppressing goal-irrelevant actions. Given that EEG alpha power reflects inhibitory processing, a core component of attentional control, it might represent the electrophysiological substrate of cognitive improvement in video gaming. The aim of this study was to test whether non-video gamers (NVGs), non-action video gamers (NAVGs) and action video gamers (AVGs) exhibit differences in EEG alpha power, and whether this might account for differences in visual information processing as operationalized by the theory of visual attention (TVA). Forty male volunteers performed a visual short-term memory paradigm where they memorized shape stimuli depicted on circular stimulus displays at six different exposure durations while their EEGs were recorded. Accuracy data was analyzed using TVA-algorithms. There was a positive correlation between the extent of post-stimulus EEG alpha power attenuation (10–12 Hz) and speed of information processing across all participants. Moreover, both EEG alpha power attenuation and speed of information processing were modulated by an interaction between group affiliation and time on task, indicating that video gamers showed larger EEG alpha power attenuations and faster information processing over time than NVGs – with AVGs displaying the largest increase. An additional regression analysis affirmed this observation. From this we concluded that EEG alpha power might be a promising neural substrate for explaining cognitive improvement in video gaming.
Affiliation(s)
- Yannik Hilla: Research Unit of Biological Psychology, Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Jörg von Mankowski: Chair of Communication Networks, Technische Universität München, Munich, Germany
- Julia Föcker: School of Psychology, College of Social Sciences, University of Lincoln, Lincoln, United Kingdom
- Paul Sauseng: Research Unit of Biological Psychology, Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany

28
Abstract
Individuals have the ability to extract summary statistics from multiple items presented simultaneously. However, it is unclear yet whether we have insight into the process of ensemble coding. The aim of this study was to investigate metacognition about average face perception. Participants saw a group of four faces presented for 2 s or 5 s, and then they were asked to judge whether the following test face was present in the previous set (Experiment 1), or whether the test face was the average of the four member faces (Experiment 2). After each response, participants rated their confidence. Replicating previous findings, there was substantial endorsement for the average face derived from the four member faces in Experiment 1, even though it was not present in the set. When judging faces that had been presented in the set, confidence correlated positively with accuracy, providing evidence for metacognitive awareness of previously studied faces. Importantly, there was a negative confidence-accuracy relationship for judging average faces when duration was 2 s, and a near-zero relationship when duration was 5 s. By contrast, when the average face had to be identified explicitly in Experiment 2, performance was above chance level and there was a positive correlation between confidence and accuracy. These results suggest that people have metacognitive awareness about average face perception when averaging is required explicitly, but they lack insight into the averaging process when member identification is required.
29
Yamamoto K. Cue integration as a common mechanism for action and outcome bindings. Cognition 2020; 205:104423. PMID: 32838958; DOI: 10.1016/j.cognition.2020.104423.
Abstract
When a voluntary action is followed by a sensory outcome, their timings are perceived to shift toward each other compared to when they are generated independently. Recent studies have tried to explain this temporal binding effect with cue integration theory, in which the timings of action and outcome are estimated as precision-weighted averages of their individual estimates, although distinct results have been obtained for action binding and outcome binding. This study demonstrates that cue integration underlies both action and outcome bindings, using visual changes as action outcomes. Participants viewed a moving clock presented on a screen to report the onset time of their action or of the feature changes of visual objects that were relevant or irrelevant to the clock movement. The results revealed that the precision of outcome timing judgments differed depending on which object underwent a feature change. Moreover, consistent with the theory's prediction, the perceptual shifts of action and outcome timings were larger and smaller, respectively, when the precision of outcome timing judgments was higher. These results suggest that cue integration serves as a common mechanism in action and outcome bindings.
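The precision-weighted averaging invoked by cue integration theory can be illustrated with a short numerical sketch. This toy model is not the cited study's analysis; the timing values, noise levels, and the `integrate` helper are illustrative assumptions.

```python
def integrate(estimates, sds):
    """Precision-weighted average of independent cue estimates.

    Each cue's weight is its precision (1 / variance) normalised over all
    cues, so more reliable cues pull the fused estimate harder -- the core
    claim of cue-integration accounts of temporal binding.
    """
    precisions = [1.0 / sd**2 for sd in sds]
    total = sum(precisions)
    fused = sum(p * x for p, x in zip(precisions, estimates)) / total
    fused_sd = (1.0 / total) ** 0.5  # fused estimate is more precise than either cue
    return fused, fused_sd

# Toy example: an action sensed at t = 0 ms (sd 30 ms) and its sensory
# outcome at t = 250 ms (sd 60 ms). Fusion shifts each percept toward the
# other, with the larger shift on the less precise cue.
fused, fused_sd = integrate([0.0, 250.0], [30.0, 60.0])
print(round(fused, 1))  # → 50.0
```

With these assumed noise levels the fused timing lands at 50 ms: four fifths of the way toward the more precise action cue, mirroring the asymmetric shifts the precision-weighting account predicts.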
Affiliation(s)
- Kentaro Yamamoto: Faculty of Human-Environment Studies, Kyushu University, Fukuoka, Japan; Faculty of Science and Engineering, Waseda University, Tokyo, Japan

30
Gottwald JM, Bird LA, Keenaghan S, Diamond C, Zampieri E, Tosodduk H, Bremner AJ, Cowie D. The Developing Bodily Self: How Posture Constrains Body Representation in Childhood. Child Dev 2020; 92:351-366. PMID: 32767576; DOI: 10.1111/cdev.13425.
Abstract
Adults' body representation is constrained by multisensory information and knowledge of the body such as its possible postures. This study (N = 180) tested for similar constraints in children. Using the rubber hand illusion with adults and 6- to 7-year olds, we measured proprioceptive drift (an index of hand localization) and ratings of felt hand ownership. The fake hand was either congruent or incongruent with the participant's own. Across ages, congruency of posture and visual-tactile congruency yielded greater drift toward the fake hand. Ownership ratings were higher with congruent visual-tactile information, but unaffected by posture. Posture constrains body representation similarly in children and adults, suggesting that children have sensitive, robust mechanisms for maintaining a sense of bodily self.
31
Spence C. Book Review. Multisens Res 2020. DOI: 10.1163/22134808-20201528.
Affiliation(s)
- Charles Spence: Department of Experimental Psychology, Anna Watts Building, University of Oxford, Oxford, OX2 6GG, UK

32
Travers E, Fairhurst MT, Deroy O. Racial bias in face perception is sensitive to instructions but not introspection. Conscious Cogn 2020; 83:102952. PMID: 32505090; DOI: 10.1016/j.concog.2020.102952.
Abstract
Faces with typically African features are perceived as darker than they really are. We investigated how early in processing the bias emerges, whether participants are aware of it, and whether it can be altered by explicit instructions. We presented pairs of faces sequentially, manipulated the luminance and morphological features of each, and asked participants which was lighter, and how confident they were in their responses. In Experiment 1, pre-response mouse cursor trajectories showed that morphology affected motor output just as early as luminance did. Furthermore, participants were not slower to respond or less confident when morphological cues drove them to give a response that conflicted with the actual luminance of the faces. However, Experiment 2 showed that participants could be instructed to reduce their reliance on morphology, even at early stages of processing. All stimuli used, code to run the experiments reported, raw data, and analyses scripts and their outputs can be found at https://osf.io/brssn.
Affiliation(s)
- Eoin Travers: Centre for the Study of the Senses, School of Advanced Study, University of London, UK; Institute of Cognitive Neuroscience, University College London, UK
- Merle T Fairhurst: Centre for the Study of the Senses, School of Advanced Study, University of London, UK; Munich Center for Neuroscience, Ludwig-Maximilian University, Munich, Germany
- Ophelia Deroy: Centre for the Study of the Senses, School of Advanced Study, University of London, UK; Munich Center for Neuroscience, Ludwig-Maximilian University, Munich, Germany

33
Badde S, Navarro KT, Landy MS. Modality-specific attention attenuates visual-tactile integration and recalibration effects by reducing prior expectations of a common source for vision and touch. Cognition 2020; 197:104170. PMID: 32036027; DOI: 10.1016/j.cognition.2019.104170.
Abstract
At any moment in time, streams of information reach the brain through the different senses. Given this wealth of noisy information, it is essential that we select information of relevance - a function fulfilled by attention - and infer its causal structure to eventually take advantage of redundancies across the senses. Yet, the role of selective attention during causal inference in cross-modal perception is unknown. We tested experimentally whether the distribution of attention across vision and touch enhances cross-modal spatial integration (visual-tactile ventriloquism effect, Expt. 1) and recalibration (visual-tactile ventriloquism aftereffect, Expt. 2) compared to modality-specific attention, and then used causal-inference modeling to isolate the mechanisms behind the attentional modulation. In both experiments, we found stronger effects of vision on touch under distributed than under modality-specific attention. Model comparison confirmed that participants used Bayes-optimal causal inference to localize visual and tactile stimuli presented as part of a visual-tactile stimulus pair, whereas simultaneously collected unity judgments - indicating whether the visual-tactile pair was perceived as spatially-aligned - relied on a sub-optimal heuristic. The best-fitting model revealed that attention modulated sensory and cognitive components of causal inference. First, distributed attention led to an increase of sensory noise compared to selective attention toward one modality. Second, attending to both modalities strengthened the stimulus-independent expectation that the two signals belong together, the prior probability of a common source for vision and touch. Yet, only the increase in the expectation of vision and touch sharing a common source was able to explain the observed enhancement of visual-tactile integration and recalibration effects with distributed attention. 
In contrast, the change in sensory noise explained only a fraction of the observed enhancements, as its consequences vary with the overall level of noise and stimulus congruency. Increased sensory noise leads to enhanced integration effects for visual-tactile pairs with a large spatial discrepancy, but reduced integration effects for stimuli with a small or no cross-modal discrepancy. In sum, our study indicates a weak a priori association between visual and tactile spatial signals that can be strengthened by distributing attention across both modalities.
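The Bayesian causal inference scheme this abstract describes — weighing a fused, common-source estimate against independent-source estimates via a prior on their sharing a source — can be sketched in a few lines. This is a deliberately simplified toy variant (flat prior over source locations, independent sources assumed uniform over a workspace), not the paper's fitted model; the function name, stimulus values, and workspace width are illustrative assumptions.

```python
import math

def ci_estimate(x_vis, x_tac, sd_vis, sd_tac, p_common, workspace=40.0):
    """Model-averaging Bayesian causal inference for one visual-tactile trial.

    Returns the perceived tactile location: a mixture of the fused
    (common-source) estimate and the tactile-only (independent-sources)
    estimate, weighted by the posterior probability of a common source.
    A larger prior p_common yields a stronger visual bias on touch.
    """
    # Fused estimate: precision-weighted average of the two measurements.
    w_vis = sd_vis**-2 / (sd_vis**-2 + sd_tac**-2)
    fused = w_vis * x_vis + (1.0 - w_vis) * x_tac

    # Likelihood of the measured discrepancy under each causal structure.
    var = sd_vis**2 + sd_tac**2
    d = x_vis - x_tac
    like_common = math.exp(-d**2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)
    like_separate = 1.0 / workspace  # independent sources anywhere in the workspace

    # Posterior probability of a common source, then model averaging.
    post_common = (like_common * p_common) / (
        like_common * p_common + like_separate * (1.0 - p_common))
    return post_common * fused + (1.0 - post_common) * x_tac

# Same stimuli under a weak vs. a strong common-source prior: the stronger
# prior pulls the perceived tactile location further toward the visual one,
# as with the distributed-attention effect reported above.
weak = ci_estimate(3.0, 0.0, 1.0, 2.0, p_common=0.2)
strong = ci_estimate(3.0, 0.0, 1.0, 2.0, p_common=0.8)
print(weak < strong)  # → True
```

The design choice worth noting is model averaging: rather than committing to one causal structure, the final estimate blends both, so the common-source prior modulates integration continuously, which is how a change in that prior alone can account for the enhanced integration and recalibration effects.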
Affiliation(s)
- Stephanie Badde: Department of Psychology and Center of Neural Science, New York University, 6 Washington Place, New York, NY, 10003, USA
- Karen T Navarro: Department of Psychology, University of Minnesota, 75 E River Rd., Minneapolis, MN, 55455, USA
- Michael S Landy: Department of Psychology and Center of Neural Science, New York University, 6 Washington Place, New York, NY, 10003, USA

34
La Rocca D, Ciuciu P, Engemann DA, van Wassenhove V. Emergence of β and γ networks following multisensory training. Neuroimage 2020; 206:116313. PMID: 31676416; PMCID: PMC7355235; DOI: 10.1016/j.neuroimage.2019.116313.
Abstract
Our perceptual reality relies on inferences about the causal structure of the world given by multiple sensory inputs. In ecological settings, multisensory events that cohere in time and space benefit inferential processes: hearing and seeing a speaker enhances speech comprehension, and the acoustic changes of flapping wings naturally pace the motion of a flock of birds. Here, we asked how a few minutes of (multi)sensory training could shape cortical interactions in a subsequent unisensory perceptual task. For this, we investigated oscillatory activity and functional connectivity as a function of individuals' sensory history during training. Human participants performed a visual motion coherence discrimination task while being recorded with magnetoencephalography. Three groups of participants performed the same task with visual stimuli only, while listening to acoustic textures temporally comodulated with the strength of visual motion coherence, or with auditory noise uncorrelated with visual motion. The functional connectivity patterns before and after training were contrasted to resting-state networks to assess the variability of common task-relevant networks, and the emergence of new functional interactions as a function of sensory history. One major finding is the emergence of a large-scale synchronization in the high γ (gamma: 60-120Hz) and β (beta: 15-30Hz) bands for individuals who underwent comodulated multisensory training. The post-training network involved prefrontal, parietal, and visual cortices. Our results suggest that the integration of evidence and decision-making strategies become more efficient following congruent multisensory training through plasticity in network routing and oscillatory regimes.
Affiliation(s)
- Daria La Rocca: CEA/DRF/Joliot, Université Paris-Saclay, 91191, Gif-sur-Yvette, France; Université Paris-Saclay, Inria, CEA, Palaiseau, 91120, France
- Philippe Ciuciu: CEA/DRF/Joliot, Université Paris-Saclay, 91191, Gif-sur-Yvette, France; Université Paris-Saclay, Inria, CEA, Palaiseau, 91120, France
- Denis-Alexander Engemann: CEA/DRF/Joliot, Université Paris-Saclay, 91191, Gif-sur-Yvette, France; Université Paris-Saclay, Inria, CEA, Palaiseau, 91120, France
- Virginie van Wassenhove: CEA/DRF/Joliot, Université Paris-Saclay, 91191, Gif-sur-Yvette, France; Cognitive Neuroimaging Unit, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin Center, 91191, Gif-sur-Yvette, France

35
Wang QJ, Spence C. Drinking through rosé-coloured glasses: Influence of wine colour on the perception of aroma and flavour in wine experts and novices. Food Res Int 2019; 126:108678. PMID: 31732050; DOI: 10.1016/j.foodres.2019.108678.
Abstract
Wine colour carries a myriad of meanings regarding the provenance and expected sensory qualities of a wine. That meaning is presumably learnt through association, and part of a wine taster's skill comes from being able to decode information that can be discerned in subtle variations in the colour of the wine that they drink/evaluate. However, reliance on colour means that wine tasters, especially experts, often exhibit colour-induced olfactory biases. The present study assesses how wine colour - specifically the pink hue of rosé wines - can influence both the perceived aroma and flavour in a large sample of wine novices and experts. Participants (N = 168) tasted three wines - a white wine (W), a rosé wine (R), and the white wine dyed to match the rosé (Ŕ) - and freely selected three aroma and three flavour descriptors from a list. They also rated wine liking, flavour intensity, and description difficulty for each wine. Linguistic analysis demonstrated that those with wine tasting experience judged Ŕ to be much more similar to R than to W, even though Ŕ and W were the same. Moreover, red fruit descriptors were attributed to both R and Ŕ, especially in terms of flavour. Quantitative ratings revealed that Ŕ was liked less than W or R, and participants found it more difficult to describe Ŕ than R. These results demonstrate that while participants found the dyed rosé somehow different from the undyed wines, they nevertheless used the red fruit terms to describe its aroma and flavour. The implications of such results in terms of cognitive representations of wine and the role of sensory expectations are discussed.
Affiliation(s)
- Qian Janice Wang: Department of Food Science, Faculty of Science and Technology, Aarhus University, Aarslev, Denmark; Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, UK
- Charles Spence: Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, UK

36
Kim S, Kim J. Effects of Multimodal Association on Ambiguous Perception in Binocular Rivalry. Perception 2019; 48:796-819. DOI: 10.1177/0301006619867023.
Abstract
When the two eyes view dissimilar images, an observer typically reports ambiguous perception called binocular rivalry, in which subjective perception fluctuates between the two inputs. This perceptual instability often comprises exclusive dominance of each image and a transition state called the piecemeal state, in which the two images are intermingled in a patchwork manner. Herein, we investigated the effects of multimodal association of a sensory-congruent pair, an arbitrary pair, and a reverse pair on the piecemeal state in order to see how each level of association affects ambiguous perception during binocular rivalry. To induce the multisensory associations, we designed a matching task with audiovisual feedback in which subjects were required to respond according to given pairing rules. We found that explicit audiovisual associations can substantially affect the piecemeal state during binocular rivalry and that this congruency effect, which reduces the amount of visual ambiguity, originates primarily from explicit audiovisual association training rather than from common sensory features. Furthermore, when one stimulus is associated with multiple others, recent and preexisting associations work collectively to influence perceptual ambiguity during rivalry. Our findings show that learned multimodal association directly affects the temporal dynamics of ambiguous perception during binocular rivalry by modulating not only the exclusive dominance but also the piecemeal state in a systematic manner.
Affiliation(s)
- Sungyong Kim: Graduate School of Culture Technology, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Jeounghoon Kim: Graduate School of Culture Technology, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea; School of Humanities and Social Sciences, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea

37
Shea N, Frith CD. The Global Workspace Needs Metacognition. Trends Cogn Sci 2019; 23:560-571. DOI: 10.1016/j.tics.2019.04.007.
38
Spence C, Wang QJ. On the Meaning(s) of Perceived Complexity in the Chemical Senses. Chem Senses 2019; 43:451-461. PMID: 30010729; DOI: 10.1093/chemse/bjy047.
Abstract
Complexity is a term that is often invoked by those writing appreciatively about the taste, aroma/bouquet, and/or flavor of food and drink. Typically, the term is used as though everyone knows what is being talked about. Rarely is any explanation given, and the discussion soon moves on to other topics. However, oftentimes it is not at all clear what, exactly, is being referred to. A number of possibilities are outlined here, including physical complexity at the level of individual molecules, at the level of combinations of molecules giving rise to a specific flavor profile (e.g., as in a glass of quality wine or a cup of specialty coffee), at the level of combinations of distinct ingredients/elements (e.g., as when composing a particularly intricate dish in a high-end restaurant, say, or when pairing food with wine), and/or the number of stimuli/steps involved in the process of creation. Of course, people might also be referring to some aspect of their perceptual experience, and one of the intriguing questions in this space concerns the nature of the relationship(s) between these different ways of conceptualizing complexity in the chemical senses. However, given that physical/chemical and perceived complexity so often diverge, we argue that it is the latter notion, or rather inferred complexity, that is the most relevant when it comes to the chemical senses. Finally, we look at the role of expertise and review the evidence suggesting that inferred complexity can emerge either from a unitary taste experience that is judged to be complex, or from a tasting experience having multiple individuable elements.
Affiliation(s)
- Charles Spence: Crossmodal Research Laboratory, Oxford University, Anna Watts Building, University of Oxford, Oxford, UK
- Qian Janice Wang: Crossmodal Research Laboratory, Oxford University, Anna Watts Building, University of Oxford, Oxford, UK

39
Holler J, Levinson SC. Multimodal Language Processing in Human Communication. Trends Cogn Sci 2019; 23:639-652. PMID: 31235320; DOI: 10.1016/j.tics.2019.05.006.
Abstract
The natural ecology of human language is face-to-face interaction comprising the exchange of a plethora of multimodal signals. Trying to understand the psycholinguistic processing of language in its natural niche raises new issues, first and foremost the binding of multiple, temporally offset signals under tight time constraints posed by a turn-taking system. This might be expected to overload and slow our cognitive system, but the reverse is in fact the case. We propose cognitive mechanisms that may explain this phenomenon and call for a multimodal, situated psycholinguistic framework to unravel the full complexities of human language processing.
Affiliation(s)
- Judith Holler: Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Stephen C Levinson: Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Centre for Language Studies, Radboud University Nijmegen, Nijmegen, The Netherlands

40
On perceptual biases in virtual object manipulation: Signal reliability and action relevance matter. Atten Percept Psychophys 2019; 81:2881-2889. PMID: 31190312; DOI: 10.3758/s13414-019-01783-8.
Abstract
This study examined the role of visual reliability and action relevance in mutual visual-proprioceptive attraction in a virtual grasping task. Participants initially enclosed either the width or the height of a visual rectangular object with two cursors controlled by the movements of the index finger and thumb. Then, either the height or the width of this object or the distance between the fingers was judged. The judgments of the object's size were attracted by the felt finger distance, and, vice versa, the judged finger distance was attracted by the size of the grasped object. The impact of the proprioceptive information on object judgments increased, whereas the impact of visual object information on finger judgments decreased when the reliability of the visual stimulus was reduced. Moreover, the proprioceptive bias decreased for the action-relevant stimulus dimension as compared with the action-irrelevant stimulus dimension. These results indicate sensory integration of spatially separated sensory signals in the absence of any direct spatial or kinematic relation between them. We therefore suggest that the basic principles of sensory integration apply to the broad research field on perceptual-motor interactions as well as to many virtual interactions with external objects.
41
I know that "Kiki" is angular: The metacognition underlying sound-shape correspondences. Psychon Bull Rev 2019; 26:261-268. PMID: 30097975; DOI: 10.3758/s13423-018-1516-8.
Abstract
We examined the ability of people to evaluate their confidence when making perceptual judgments concerning a classic crossmodal correspondence, the Bouba/Kiki effect: People typically match the "Bouba" sound to more rounded patterns and match the "Kiki" sound to more angular patterns instead. For each visual pattern, individual participants were more confident about their own matching judgments when they happened to fall in line with the consensual response regarding whether the pattern was rated as "Bouba" or "Kiki". Logit regression analyses demonstrated that participants' confidence ratings and matching judgments were predictable by similar regression functions. This implies that the consensus and confidence underlying the Bouba/Kiki effect are underpinned by a common process, whereby perceptual features in the patterns are extracted and then used to match the sound according to rules of crossmodal correspondences. Combining both matching and confidence measures potentially allows one to explore and quantify the strength of associations in human knowledge.
42
Filippetti ML, Kirsch LP, Crucianelli L, Fotopoulou A. Affective certainty and congruency of touch modulate the experience of the rubber hand illusion. Sci Rep 2019; 9:2635. PMID: 30796333; PMCID: PMC6385173; DOI: 10.1038/s41598-019-38880-5.
Abstract
Our sense of body ownership relies on integrating different sensations according to their temporal and spatial congruency. Nevertheless, there is ongoing controversy about the role of affective congruency during multisensory integration, i.e. whether the stimuli to be perceived by the different sensory channels are congruent or incongruent in terms of their affective quality. In the present study, we applied a widely used multisensory integration paradigm, the Rubber Hand Illusion, to investigate the role of affective, top-down aspects of sensory congruency between visual and tactile modalities in the sense of body ownership. In Experiment 1 (N = 36), we touched participants with either soft or rough fabrics on their unseen hand, while they watched a rubber hand being touched synchronously with the same fabric or with a 'hidden' fabric of 'uncertain roughness'. In Experiment 2 (N = 50), we used the same paradigm as in Experiment 1, but replaced the 'uncertainty' condition with an 'incongruent' one, in which participants saw the rubber hand being touched with a fabric of incongruent roughness and hence opposite valence. We found that certainty (Experiment 1) and congruency (Experiment 2) between the felt and vicariously perceived tactile affectivity led to higher subjective embodiment compared to uncertainty and incongruency, respectively, irrespective of any valence effect. Our results suggest that congruency in the affective top-down aspects of sensory stimulation is important to the multisensory integration process leading to embodiment, over and above temporal and spatial properties.
Affiliation(s)
- Maria Laura Filippetti: Centre for Brain Science, Department of Psychology, University of Essex, CO4 3SQ, Colchester, UK; Research Department of Clinical, Educational & Health Psychology, University College London, WC1E 7HB, London, UK.
- Louise P Kirsch: Research Department of Clinical, Educational & Health Psychology, University College London, WC1E 7HB, London, UK.
- Laura Crucianelli: Research Department of Clinical, Educational & Health Psychology, University College London, WC1E 7HB, London, UK.
- Aikaterini Fotopoulou: Research Department of Clinical, Educational & Health Psychology, University College London, WC1E 7HB, London, UK.
43
Crucianelli L, Paloyelis Y, Ricciardi L, Jenkinson PM, Fotopoulou A. Embodied Precision: Intranasal Oxytocin Modulates Multisensory Integration. J Cogn Neurosci 2018; 31:592-606. [PMID: 30562138] [DOI: 10.1162/jocn_a_01366]
Abstract
Multisensory integration processes are fundamental to our sense of self as embodied beings. Bodily illusions, such as the rubber hand illusion (RHI) and the size-weight illusion (SWI), allow us to investigate how the brain resolves conflicting multisensory evidence during perceptual inference in relation to different facets of body representation. In the RHI, synchronous tactile stimulation of a participant's hidden hand and a visible rubber hand creates illusory body ownership; in the SWI, the perceived size of the body can modulate the estimated weight of external objects. According to Bayesian models, such illusions arise as an attempt to explain the causes of multisensory perception and may reflect the attenuation of somatosensory precision, which is required to resolve perceptual hypotheses about conflicting multisensory input. Recent hypotheses propose that the precision of sensorimotor representations is determined by modulators of synaptic gain, like dopamine, acetylcholine, and oxytocin. However, these neuromodulatory hypotheses have not been tested in the context of embodied multisensory integration. The present double-blind, placebo-controlled, crossover study (n = 41 healthy volunteers) aimed to investigate the effect of intranasal oxytocin (IN-OT) on multisensory integration processes, tested by means of the RHI and the SWI. Results showed that IN-OT enhanced the subjective feeling of ownership in the RHI, only when synchronous tactile stimulation was involved. Furthermore, IN-OT increased an embodied version of the SWI (quantified as estimation error during a weight estimation task). These findings suggest that oxytocin might modulate processes of visuotactile multisensory integration by increasing the precision of top-down signals against bottom-up sensory input.
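The precision account sketched above can be made concrete with Gaussian precision-weighted fusion of a top-down prior and a bottom-up observation; the numbers are purely illustrative, not fitted to the study's data:

```python
def fuse(prior_mu, prior_prec, obs_mu, obs_prec):
    """Precision-weighted combination of a prior (top-down prediction)
    and an observation (bottom-up sensory evidence): the posterior mean
    leans toward whichever signal carries more precision."""
    post_prec = prior_prec + obs_prec
    post_mu = (prior_prec * prior_mu + obs_prec * obs_mu) / post_prec
    return post_mu, post_prec

# Prior at 0: "the seen touch is my touch"; observation at 1: residual
# visuotactile conflict. Raising top-down precision (the hypothesized
# IN-OT effect) pulls the percept toward the prior.
baseline, _ = fuse(prior_mu=0.0, prior_prec=1.0, obs_mu=1.0, obs_prec=1.0)
boosted, _ = fuse(prior_mu=0.0, prior_prec=4.0, obs_mu=1.0, obs_prec=1.0)
```

Under this toy model, increasing the precision of the top-down signal moves the percept closer to the prior, which is one way a stronger subjective illusion could arise without any change in the sensory input itself.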
44
Fairhurst MT, Travers E, Hayward V, Deroy O. Confidence is higher in touch than in vision in cases of perceptual ambiguity. Sci Rep 2018; 8:15604. [PMID: 30353061] [PMCID: PMC6199278] [DOI: 10.1038/s41598-018-34052-z]
Abstract
The inclination to touch objects that we can see is a surprising behaviour, given that vision often supplies relevant and sufficiently accurate sensory evidence. Here we suggest that this 'fact-checking' phenomenon could be explained if touch provides a higher level of perceptual certainty than vision. Testing this hypothesis, observers explored inverted T-shaped stimuli eliciting the vertical-horizontal illusion in vision and touch, which included clear-cut and ambiguous cases. In separate blocks, observers judged whether the vertical bar was shorter or longer than the horizontal bar and rated the confidence in their judgments. Decisions reached by vision were objectively more accurate than those reached by touch and carried higher overall confidence ratings. However, while confidence was higher for vision than for touch in clear-cut cases, observers were more confident in touch when the stimuli were ambiguous. This relative bias as a function of ambiguity qualifies the view that confidence tracks objective accuracy and uses a comparable mapping across sensory modalities. Employing a perceptual illusion, our method disentangles objective and subjective accuracy, shows how the latter is tracked by confidence, and points towards possible origins for 'fact-checking' by touch.
Affiliation(s)
- Merle T Fairhurst: Centre for the Study of the Senses, School of Advanced Study, University of London, London, UK; Munich Center for Neuroscience, Ludwig Maximilian University, Munich, Germany; Faculty of Philosophy, Ludwig Maximilian University, Munich, Germany.
- Eoin Travers: Centre for the Study of the Senses, School of Advanced Study, University of London, London, UK; Institute of Cognitive Neuroscience, University College London, London, UK.
- Vincent Hayward: Centre for the Study of the Senses, School of Advanced Study, University of London, London, UK; Sorbonne Université, Institut des Systèmes Intelligents et de Robotique (ISIR), F-75005, Paris, France.
- Ophelia Deroy: Centre for the Study of the Senses, School of Advanced Study, University of London, London, UK; Munich Center for Neuroscience, Ludwig Maximilian University, Munich, Germany; Faculty of Philosophy, Ludwig Maximilian University, Munich, Germany.
45
Narita N, Kamiya K, Makiyama Y, Iwaki S, Komiyama O, Ishii T, Wake H. Prefrontal modulation during chewing performance in occlusal dysesthesia patients: a functional near-infrared spectroscopy study. Clin Oral Investig 2018; 23:1181-1196. [DOI: 10.1007/s00784-018-2534-7]
46
Osteopathic clinical reasoning: An ethnographic study of perceptual diagnostic judgments, and metacognition. Int J Osteopath Med 2018. [DOI: 10.1016/j.ijosm.2018.03.005]
47
Farashahi S, Ting CC, Kao CH, Wu SW, Soltani A. Dynamic combination of sensory and reward information under time pressure. PLoS Comput Biol 2018; 14:e1006070. [PMID: 29584717] [PMCID: PMC5889192] [DOI: 10.1371/journal.pcbi.1006070]
Abstract
When making choices, collecting more information is beneficial but comes at the cost of sacrificing time that could be allocated to making other potentially rewarding decisions. To investigate how the brain balances these costs and benefits, we conducted a series of novel experiments in humans and simulated various computational models. Under six levels of time pressure, subjects made decisions either by integrating sensory information over time or by dynamically combining sensory and reward information over time. We found that during sensory integration, time pressure reduced performance as the deadline approached, and choice was more strongly influenced by the most recent sensory evidence. By fitting performance and reaction time with various models we found that our experimental results are more compatible with leaky integration of sensory information with an urgency signal or a decision process based on stochastic transitions between discrete states modulated by an urgency signal. When combining sensory and reward information, subjects spent less time on integration than optimally prescribed when reward decreased slowly over time, and the most recent evidence did not have the maximal influence on choice. The suboptimal pattern of reaction time was partially mitigated in an equivalent control experiment in which sensory integration over time was not required, indicating that the suboptimal response time was influenced by the perception of imperfect sensory integration. Meanwhile, during combination of sensory and reward information, performance did not drop as the deadline approached, and response time was not different between correct and incorrect trials. These results indicate a decision process different from what is involved in the integration of sensory information over time. 
Together, our results not only reveal limitations in sensory integration over time but also illustrate how these limitations influence dynamic combination of sensory and reward information. Collecting more information seems beneficial for making most of the decisions we face in daily life. However, the benefit of collecting more information critically depends on how well we can integrate that information over time and how costly time is. Here we investigate how humans determine the amount of time to spend on collecting sensory information in order to make a perceptual decision when the reward for making a correct choice decreases over time. We show that sensory integration over time is not perfect and further deteriorates with time pressure. However, we also find evidence that when the cost of time has to be considered, decision processes are influenced by limitations in sensory integration.
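The leaky-integration-with-urgency account that best fit the data can be sketched as a simple simulation, with urgency implemented as a collapsing decision bound; all parameter values are illustrative, not the study's fitted ones:

```python
import numpy as np

def leaky_urgency_trial(drift, leak, urgency_slope, bound,
                        dt=0.001, noise=1.0, max_t=2.0, seed=0):
    """Simulate one decision as leaky integration of noisy momentary
    evidence, with a linearly growing urgency signal implemented as a
    collapsing decision bound. Returns (choice, reaction_time)."""
    rng = np.random.default_rng(seed)
    x, t = 0.0, 0.0
    while t < max_t:
        # Leak discounts older evidence, so recent samples weigh more,
        # matching the recency effect reported above.
        x += (drift - leak * x) * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        # Urgency lowers the effective bound as the deadline nears.
        if abs(x) >= bound - urgency_slope * t:
            return float(np.sign(x)), t
    return 0.0, max_t  # deadline reached without a commitment

# With identical noise (same seed), adding urgency can only speed up
# or preserve the decision time, never delay it.
_, rt_fixed = leaky_urgency_trial(1.0, 2.0, 0.0, 2.0, seed=42)
_, rt_urgent = leaky_urgency_trial(1.0, 2.0, 0.8, 2.0, seed=42)
```

The two ingredients the abstract highlights map directly onto the two terms: the `leak` term makes the most recent evidence dominate, and `urgency_slope` forces earlier commitment as the deadline approaches.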
Affiliation(s)
- Shiva Farashahi: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, United States of America.
- Chih-Chung Ting: CREED, Amsterdam School of Economics, Universiteit van Amsterdam, Amsterdam, the Netherlands.
- Chang-Hao Kao: Department of Psychology, University of Pennsylvania, Philadelphia, PA, United States of America.
- Shih-Wei Wu: Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan; Brain Research Center, National Yang-Ming University, Taipei, Taiwan.
- Alireza Soltani: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, United States of America.
- * E-mail: (AS); (SWW)
48
Behavioral, Modeling, and Electrophysiological Evidence for Supramodality in Human Metacognition. J Neurosci 2017; 38:263-277. [PMID: 28916521] [DOI: 10.1523/jneurosci.0322-17.2017]
Abstract
Human metacognition, or the capacity to introspect on one's own mental states, has been mostly characterized through confidence reports in visual tasks. A pressing question is to what extent results from visual studies generalize to other domains. Answering this question allows determining whether metacognition operates through shared, supramodal mechanisms or through idiosyncratic, modality-specific mechanisms. Here, we report three new lines of evidence for decisional and postdecisional mechanisms arguing for the supramodality of metacognition. First, metacognitive efficiency correlated among auditory, tactile, visual, and audiovisual tasks. Second, confidence in an audiovisual task was best modeled using supramodal formats based on integrated representations of auditory and visual signals. Third, confidence in correct responses involved similar electrophysiological markers for visual and audiovisual tasks that are associated with motor preparation preceding the perceptual judgment. We conclude that the supramodality of metacognition relies on supramodal confidence estimates and decisional signals that are shared across sensory modalities.
SIGNIFICANCE STATEMENT: Metacognitive monitoring is the capacity to access, report, and regulate one's own mental states. In perception, this allows rating our confidence in what we have seen, heard, or touched. Although metacognitive monitoring can operate on different cognitive domains, it remains unknown whether it involves a single supramodal mechanism common to multiple cognitive domains or modality-specific mechanisms idiosyncratic to each domain. Here, we bring evidence in favor of the supramodality hypothesis by showing that participants with high metacognitive performance in one modality are likely to perform well in other modalities.
Based on computational modeling and electrophysiology, we propose that supramodality can be explained by the existence of supramodal confidence estimates and by the influence of decisional cues on confidence estimates.
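Correlating metacognitive efficiency across tasks, as reported above, presupposes a per-task index of metacognitive sensitivity. A minimal sketch of one common such index, the type-2 ROC area; note the study itself used model-based efficiency measures, so this is an illustration of the general idea, not their exact statistic:

```python
import numpy as np

def type2_auc(confidence, correct):
    """Area under the type-2 ROC curve: the probability that a randomly
    chosen correct trial carries higher confidence than a randomly
    chosen incorrect trial (ties count as half). 0.5 = no metacognitive
    sensitivity; 1.0 = perfect introspective access."""
    conf_hit = confidence[correct]
    conf_miss = confidence[~correct]
    wins = (conf_hit[:, None] > conf_miss[None, :]).mean()
    ties = (conf_hit[:, None] == conf_miss[None, :]).mean()
    return wins + 0.5 * ties

# Toy data: this observer's confidence tracks correctness perfectly.
confidence = np.array([4, 3, 2, 1])
correct = np.array([True, True, False, False])
```

A supramodality test in this spirit would compute such an index per participant in each modality (auditory, tactile, visual, audiovisual) and correlate the indices across modalities.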
49
Faivre N, Arzi A, Lunghi C, Salomon R. Consciousness is more than meets the eye: a call for a multisensory study of subjective experience. Neurosci Conscious 2017; 2017:nix003. [PMID: 30042838] [PMCID: PMC6007148] [DOI: 10.1093/nc/nix003]
Abstract
Over the last 30 years, our understanding of the neurocognitive bases of consciousness has improved, mostly through studies employing vision. While studying consciousness in the visual modality presents clear advantages, we believe that a comprehensive scientific account of subjective experience must not neglect other exteroceptive and interoceptive signals as well as the role of multisensory interactions for perceptual and self-consciousness. Here, we briefly review four distinct lines of work which converge in documenting how multisensory signals are processed across several levels and contents of consciousness. Namely, how multisensory interactions occur when consciousness is prevented because of perceptual manipulations (i.e. subliminal stimuli) or because of low vigilance states (i.e. sleep, anesthesia), how interactions between exteroceptive and interoceptive signals give rise to bodily self-consciousness, and how multisensory signals are combined to form metacognitive judgments. By describing the interactions between multisensory signals at the perceptual, cognitive, and metacognitive levels, we illustrate how stepping out of the visual comfort zone may help in deriving refined accounts of consciousness, and may allow cancelling out the idiosyncrasies of each sense to delineate the supramodal mechanisms involved in consciousness.
Affiliation(s)
- Nathan Faivre: Laboratory of Cognitive Neuroscience, Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland; Centre d’Economie de la Sorbonne, CNRS UMR 8174, Paris, France.
- Anat Arzi: Department of Psychology, University of Cambridge, Cambridge, UK.
- Claudia Lunghi: Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy; Institute of Neuroscience, National Research Council (CNR), Pisa, Italy.
- Roy Salomon: Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan, Israel.
50
Kayser SJ, Philiastides MG, Kayser C. Sounds facilitate visual motion discrimination via the enhancement of late occipital visual representations. Neuroimage 2017; 148:31-41. [PMID: 28082107] [PMCID: PMC5349847] [DOI: 10.1016/j.neuroimage.2017.01.010]
Abstract
Sensory discriminations, such as judgements about visual motion, often benefit from multisensory evidence. Despite many reports of enhanced brain activity during multisensory conditions, it remains unclear which dynamic processes implement the multisensory benefit for an upcoming decision in the human brain. Specifically, it remains difficult to attribute perceptual benefits to specific processes, such as early sensory encoding, the transformation of sensory representations into a motor response, or to more unspecific processes such as attention. We combined an audio-visual motion discrimination task with the single-trial mapping of dynamic sensory representations in EEG activity to localize when and where multisensory congruency facilitates perceptual accuracy. Our results show that a congruent sound facilitates the encoding of motion direction in occipital sensory - as opposed to parieto-frontal - cortices, and facilitates later - as opposed to early (i.e. below 100 ms) - sensory activations. This multisensory enhancement was visible as an earlier rise of motion-sensitive activity in middle-occipital regions about 350 ms from stimulus onset, which reflected the better discriminability of motion direction from brain activity and correlated with the perceptual benefit provided by congruent multisensory information. This supports a hierarchical model of multisensory integration in which the enhancement of relevant sensory cortical representations is transformed into a more accurate choice.
Highlights:
- Feature-specific multisensory integration occurs in sensory, not amodal, cortex.
- Feature-specific integration occurs late, i.e. around 350 ms post stimulus onset.
- Acoustic and visual representations interact in occipital motion regions.
Affiliation(s)
- Stephanie J Kayser: Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK.
- Christoph Kayser: Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK.