1. Mood and implicit confidence independently fluctuate at different time scales. Cogn Affect Behav Neurosci 2023;23:142-161. PMID: 36289181. DOI: 10.3758/s13415-022-01038-4.
Abstract
Mood is an important ingredient of decision-making. Human beings are immersed in a sea of emotions in which episodes of high mood alternate with episodes of low mood. While changes in mood are well characterized, little is known about how these fluctuations interact with metacognition, and in particular with confidence about our decisions. We evaluated how implicit measurements of confidence relate to the mood states of human participants in two online longitudinal experiments involving mood self-reports and visual discrimination decision-making tasks. Implicit confidence was assessed in each session by monitoring the proportion of opt-out trials when an opt-out option was available, with the median reaction time on standard correct trials as a secondary proxy of confidence. We first report a strong coupling between mood, stress, food enjoyment, and quality of sleep reported by participants in the same session. Second, we confirmed that the proportion of opt-out responses, as well as reaction times in non-opt-out trials, provided reliable indices of confidence in each session. We introduce a normative measure of overconfidence based on the pattern of opt-out selection and the signal-detection-theory framework. Finally, and crucially, we found that mood, sleep quality, food enjoyment, and stress level are not consistently coupled with these implicit confidence markers; rather, they fluctuate at different time scales: mood-related states display faster fluctuations (over one day or half a day) than confidence level (two and a half days). Our findings therefore suggest that spontaneous fluctuations of mood and confidence in decision-making are independent in the healthy adult population.
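The normative overconfidence measure is only summarized above; the Python sketch below illustrates one way a signal-detection-theory opt-out analysis of this kind can be set up. The uncertainty-band decision rule, the function names, and all numbers are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the authors' exact formulation): a signal-detection
# view of opt-out behaviour. Sensitivity d' is estimated from standard trials;
# an ideal observer opts out whenever the internal evidence falls inside an
# uncertainty band around the decision criterion. Observed opt-out rates below
# that normative prediction are read as overconfidence.
import numpy as np
from scipy.stats import norm

def dprime(hit_rate, fa_rate, eps=1e-3):
    """Sensitivity from hit and false-alarm rates (rates clipped away from 0/1)."""
    h = np.clip(hit_rate, eps, 1 - eps)
    f = np.clip(fa_rate, eps, 1 - eps)
    return norm.ppf(h) - norm.ppf(f)

def predicted_optout_rate(d, band_halfwidth):
    """Opt-out probability for an ideal observer with evidence ~ N(+-d/2, 1)
    who declines to respond when the evidence lies within +-band_halfwidth
    of the neutral criterion (hypothetical decision rule)."""
    mu = d / 2.0
    return norm.cdf(band_halfwidth - mu) - norm.cdf(-band_halfwidth - mu)

def overconfidence_index(observed_optout, d, band_halfwidth):
    """Positive values: fewer opt-outs than the normative observer (overconfidence)."""
    return predicted_optout_rate(d, band_halfwidth) - observed_optout

# Example with made-up session data
d = dprime(hit_rate=0.82, fa_rate=0.25)
print(overconfidence_index(observed_optout=0.10, d=d, band_halfwidth=0.5))
```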
2. Marucci M, Di Flumeri G, Borghini G, Sciaraffa N, Scandola M, Pavone EF, Babiloni F, Betti V, Aricò P. The impact of multisensory integration and perceptual load in virtual reality settings on performance, workload and presence. Sci Rep 2021;11:4831. PMID: 33649348. PMCID: PMC7921449. DOI: 10.1038/s41598-021-84196-8.
Abstract
Real-world experience is typically multimodal. Evidence indicates that the facilitation in the detection of multisensory stimuli is modulated by perceptual load, the amount of information involved in processing the stimuli. Here, we used a realistic virtual reality environment while concomitantly acquiring electroencephalography (EEG) and galvanic skin response (GSR) to investigate how multisensory signals impact target detection under high and low perceptual load. Auditory and vibrotactile stimuli were presented, alone or in combination, alongside the visual target. Results showed that only in the high-load condition did multisensory stimuli significantly improve performance compared to visual stimulation alone. Multisensory stimulation also decreased EEG-based workload. Perceived workload, measured with the NASA Task Load Index questionnaire, was instead reduced only by the trimodal condition (i.e., visual, auditory, tactile). This trimodal stimulation was also more effective than bimodal or unimodal stimulation in enhancing the sense of presence, that is, the feeling of being in the virtual environment. In addition, GSR components were higher in the high-load task than in the low-load condition. Finally, multimodal stimulation (visual-audio-tactile, VAT, and visual-audio, VA) induced a significant decrease in latency and a significant increase in amplitude of the P300 potentials relative to unimodal (visual) and bimodal visual-tactile stimulation, suggesting faster and more effective processing and detection of stimuli when auditory stimulation is included. Overall, these findings provide insights into the relationship between multisensory integration and human behavior and cognition.
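The abstract reports P300 latency and amplitude effects without describing the measurement pipeline; below is a minimal, generic sketch of how such peak measures are commonly extracted from trial-averaged EEG. The single-channel epoch layout and the 250-500 ms search window are assumptions for illustration only, not the study's pipeline.

```python
# Minimal sketch of the kind of P300 measurement reported above: average the
# EEG epochs for one condition and take the positive peak in a post-stimulus
# window. Epoch layout and window are illustrative assumptions.
import numpy as np

def p300_peak(epochs, times, window=(0.250, 0.500)):
    """epochs: array (n_trials, n_samples) for one channel, in microvolts.
    times: array (n_samples,) in seconds, time-locked to stimulus onset.
    Returns (latency_s, amplitude_uV) of the largest positivity in the window."""
    erp = epochs.mean(axis=0)                      # trial-averaged ERP
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmax(erp[mask])                     # positive peak within window
    return times[mask][idx], erp[mask][idx]

# Usage with synthetic data: 40 trials, 700 samples from -0.2 to 1.2 s
times = np.linspace(-0.2, 1.2, 700)
epochs = np.random.randn(40, 700) + 5 * np.exp(-((times - 0.35) / 0.05) ** 2)
print(p300_peak(epochs, times))
```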
Affiliation(s)
- Matteo Marucci
- Department of Psychology, Sapienza University of Rome, Via dei Marsi 78, 00185 Rome, Italy
- Braintrends Ltd, Rome, Italy
- Gianluca Di Flumeri
- IRCCS Fondazione Santa Lucia, Rome, Italy
- Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- BrainSigns Srl, Via Sesto Celere 7/C, 00152 Rome, Italy
- Gianluca Borghini
- IRCCS Fondazione Santa Lucia, Rome, Italy
- Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- BrainSigns Srl, Via Sesto Celere 7/C, 00152 Rome, Italy
- Nicolina Sciaraffa
- IRCCS Fondazione Santa Lucia, Rome, Italy
- Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- BrainSigns Srl, Via Sesto Celere 7/C, 00152 Rome, Italy
- Michele Scandola
- Npsy-Lab.VR, Human Sciences Department, University of Verona, Verona, Italy
- Fabio Babiloni
- IRCCS Fondazione Santa Lucia, Rome, Italy
- Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- BrainSigns Srl, Via Sesto Celere 7/C, 00152 Rome, Italy
- College of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, China
- Viviana Betti
- Department of Psychology, Sapienza University of Rome, Via dei Marsi 78, 00185 Rome, Italy
- IRCCS Fondazione Santa Lucia, Rome, Italy
- Pietro Aricò
- IRCCS Fondazione Santa Lucia, Rome, Italy
- Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- BrainSigns Srl, Via Sesto Celere 7/C, 00152 Rome, Italy
3. van Beek FE, King RJ, Brown C, Luca MD, Keller S. Static Weight Perception Through Skin Stretch and Kinesthetic Information: Detection Thresholds, JNDs, and PSEs. IEEE Trans Haptics 2021;14:20-31. PMID: 32746382. DOI: 10.1109/toh.2020.3009599.
Abstract
We examined the contributions of kinesthetic and skin stretch cues to static weight perception. In three psychophysical experiments, several aspects of static weight perception were assessed by asking participants either to detect on which hand a weight was presented or to compare two weight cues. Two closed-loop-controlled haptic devices were used to present cutaneous and kinesthetic weights, in isolation and together, with a precision of 0.05 g. Our results show that combining skin stretch and kinesthetic information leads to lower (better) weight detection thresholds than presenting either cue alone. For supra-threshold stimuli, Weber fractions were 22-44%. Kinesthetic information was less reliable for lighter weights, while both sources of information were equally reliable for weights up to 300 g. Weight was perceived as equally heavy regardless of whether skin stretch and kinesthetic cues were presented together or alone. Data for lighter weights were consistent with an Optimal Integration model, while for heavier weights, measurements were closer to the predictions of a Sensory Capture model. The presence of correlated noise might explain this discrepancy, since it would shift the Optimal Integration predictions towards our measurements. Our experiments provide device-independent perceptual measures and can be used to inform, for instance, the design of skin stretch devices.
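The Optimal Integration and Sensory Capture predictions compared above can be stated compactly in terms of discrimination thresholds. The sketch below assumes the standard formulation in which a JND is proportional to the standard deviation of the underlying estimate; the numbers are hypothetical and do not reproduce the paper's data.

```python
# Sketch of the two model predictions, stated as combined-cue JNDs derived
# from hypothetical unimodal JNDs.
import math

def optimal_integration_jnd(jnd_kinesthetic, jnd_cutaneous):
    """Maximum-likelihood combination of two independent estimates:
    1/sigma_comb^2 = 1/sigma_k^2 + 1/sigma_c^2."""
    return 1.0 / math.sqrt(1.0 / jnd_kinesthetic**2 + 1.0 / jnd_cutaneous**2)

def sensory_capture_jnd(jnd_kinesthetic, jnd_cutaneous):
    """Capture by the more reliable cue: combined JND equals the smaller one."""
    return min(jnd_kinesthetic, jnd_cutaneous)

# Hypothetical unimodal JNDs (grams) for a 100 g reference weight
print(optimal_integration_jnd(30.0, 40.0))  # 24.0 g, below both unimodal JNDs
print(sensory_capture_jnd(30.0, 40.0))      # 30.0 g, equal to the best cue
```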
4. Pérez-Bellido A, Barnes KA, Crommett LE, Yau JM. Auditory Frequency Representations in Human Somatosensory Cortex. Cereb Cortex 2019;28:3908-3921. PMID: 29045579. DOI: 10.1093/cercor/bhx255.
Abstract
Recent studies have challenged the traditional notion of modality-dedicated cortical systems by showing that audition and touch evoke responses in the same sensory brain regions. While much of this work has focused on somatosensory responses in auditory regions, fewer studies have investigated sound responses and representations in somatosensory regions. In this functional magnetic resonance imaging (fMRI) study, we measured BOLD signal changes in participants performing an auditory frequency discrimination task and characterized activation patterns related to stimulus frequency using both univariate and multivariate analysis approaches. Outside of bilateral temporal lobe regions, we observed robust and frequency-specific responses to auditory stimulation in classically defined somatosensory areas. Moreover, using representational similarity analysis to define the relationships between multi-voxel activation patterns for all sound pairs, we found clear similarity patterns for auditory responses in the parietal lobe that correlated significantly with perceptual similarity judgments. Our results demonstrate that auditory frequency representations can be distributed over brain regions traditionally considered to be dedicated to somatosensation. The broad distribution of auditory and tactile responses over parietal and temporal regions reveals a number of candidate brain areas that could support general temporal frequency processing and mediate the extensive and robust perceptual interactions between audition and touch.
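As a rough illustration of the representational similarity analysis logic described above, the sketch below builds a neural dissimilarity matrix from condition-wise voxel patterns and correlates it with a behavioural matrix. Array shapes, variable names, and data are invented for the example; this is not the study's analysis code.

```python
# Sketch of RSA: build a neural representational dissimilarity matrix (RDM)
# from condition-mean voxel patterns, then correlate its off-diagonal entries
# with a behavioural dissimilarity matrix.
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist, squareform

def neural_rdm(patterns):
    """patterns: array (n_conditions, n_voxels) of condition-mean activations.
    Returns an (n_conditions, n_conditions) correlation-distance RDM."""
    return squareform(pdist(patterns, metric="correlation"))

def rsa_correlation(neural, behavioural):
    """Spearman correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices_from(neural, k=1)
    return spearmanr(neural[iu], behavioural[iu]).correlation

# Usage with synthetic data: 8 sound frequencies x 500 voxels
rng = np.random.default_rng(0)
patterns = rng.standard_normal((8, 500))
perceptual = squareform(pdist(np.arange(8)[:, None]))  # toy behavioural RDM
print(rsa_correlation(neural_rdm(patterns), perceptual))
```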
Affiliation(s)
- Alexis Pérez-Bellido
- Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX, USA
- Kelly Anne Barnes
- Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX, USA
- Lexi E Crommett
- Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX, USA
- Jeffrey M Yau
- Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX, USA
5. Lunn J, Sjoblom A, Ward J, Soto-Faraco S, Forster S. Multisensory enhancement of attention depends on whether you are already paying attention. Cognition 2019;187:38-49. PMID: 30825813. DOI: 10.1016/j.cognition.2019.02.008.
Abstract
Multisensory stimuli are argued to capture attention more effectively than unisensory stimuli because of their ability to elicit a super-additive neuronal response. However, behavioural evidence for enhanced multisensory attentional capture is mixed. Furthermore, the notion of multisensory enhancement of attention conflicts with findings suggesting that multisensory integration may itself depend on top-down attention. The present research resolves this discrepancy by examining how both endogenous attentional settings and the availability of attentional capacity modulate capture by multisensory stimuli. Across a series of four studies, two measures of attentional capture were used that differ in their reliance on endogenous attention: facilitation and distraction. Perceptual load was additionally manipulated to determine whether multisensory stimuli can still capture attention when attention is occupied by a demanding primary task. Multisensory stimuli presented as search targets were consistently detected faster than unisensory stimuli regardless of perceptual load, although they were nevertheless subject to load modulation. In contrast, task-irrelevant multisensory stimuli did not cause greater distraction than unisensory stimuli, suggesting that the enhanced attentional status of multisensory stimuli may be mediated by the availability of endogenous attention. Implications for multisensory alerts in practical settings such as driving and aviation are discussed: such alerts may be advantageous during demanding tasks but less suitable for signaling unexpected events.
6. Peripheral and central determinants of skin wetness sensing in humans. Handb Clin Neurol 2018;156:83-102. PMID: 30454611. DOI: 10.1016/b978-0-444-63912-7.00005-9.
Abstract
Evolutionarily, our ability to sense skin wetness and humidity (i.e., hygroreception) could have developed as a way to help maintain thermal homeostasis, much as is the case for temperature sensation and thermoreception. Humans lack a specific skin hygroreceptor, and recent studies indicate that skin wetness is likely processed centrally through the multisensory integration of peripheral inputs from skin thermoreceptors and mechanoreceptors coding the biophysical interactions between skin and moisture. The existence of a specific hygrosensation strategy for human wetness perception has been proposed, and the first neurophysiologic model of skin wetness sensing has recently been developed. However, while these findings have shed light on some of the peripheral and central neural mechanisms underlying wetness sensing, our understanding of how the brain processes the thermal and mechanical inputs that give rise to one of our "most worn" skin sensory experiences is still far from conclusive. Understanding these neural mechanisms is clinically relevant in the context of neurologic conditions accompanied by somatosensory abnormalities. This chapter presents the current knowledge on the peripheral and central determinants of skin wetness sensing in humans.
7. Sánchez-García C, Kandel S, Savariaux C, Soto-Faraco S. The Time Course of Audio-Visual Phoneme Identification: a High Temporal Resolution Study. Multisens Res 2018;31:57-78. DOI: 10.1163/22134808-00002560.
Abstract
Speech unfolds in time and, as a consequence, its perception requires temporal integration. Yet, studies addressing audio-visual speech processing have often overlooked this temporal aspect. Here, we address the temporal course of audio-visual speech processing in a phoneme identification task using a Gating paradigm. We created disyllabic Spanish word-like utterances (e.g., /pafa/, /paθa/, …) from high-speed camera recordings. The stimuli differed only in the middle consonant (/f/, /θ/, /s/, /r/, /g/), which varied in visual and auditory saliency. As in classical Gating tasks, the utterances were presented in fragments of increasing length (gates), here in 10 ms steps, for identification and confidence ratings. We measured correct identification as a function of time (at each gate) for each critical consonant in audio, visual and audio-visual conditions, and computed the Identification Point and Recognition Point scores. The results revealed that audio-visual identification is a time-varying process that depends on the relative strength of each modality (i.e., saliency). In some cases, audio-visual identification followed the pattern of one dominant modality (either A or V), when that modality was very salient. In other cases, both modalities contributed to identification, hence resulting in audio-visual advantage or interference with respect to unimodal conditions. Both unimodal dominance and audio-visual interaction patterns may arise within the course of identification of the same utterance, at different times. The outcome of this study suggests that audio-visual speech integration models should take into account the time-varying nature of visual and auditory saliency.
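A common way to score gating data of this kind is sketched below: the Identification Point is taken as the earliest gate from which responses remain correct. This follows the usual convention in gating studies; the paper's exact scoring rules (for example, how confidence ratings define the Recognition Point) are not given in the abstract, so the function is illustrative only.

```python
# Sketch of an Identification Point computation for a gating task: the first
# gate at which the response is correct and stays correct at every later gate.
from typing import Optional, Sequence

def identification_point(correct_by_gate: Sequence[bool],
                         gate_step_ms: int = 10) -> Optional[int]:
    """correct_by_gate: one boolean per gate, in order of increasing duration.
    Returns the stimulus duration (ms) at the first gate after which all
    responses are correct, or None if identification never stabilizes."""
    for gate, ok in enumerate(correct_by_gate):
        if ok and all(correct_by_gate[gate:]):
            return (gate + 1) * gate_step_ms
    return None

# Usage: correct from the 5th gate onwards, with one earlier lucky guess
print(identification_point([False, True, False, False, True, True, True, True]))
# -> 50
```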
Affiliation(s)
- Carolina Sánchez-García
- Departament de Tecnologies de la Informació i les Comunicacions, Universitat Pompeu Fabra, Barcelona, Spain
- Sonia Kandel
- Université Grenoble Alpes, GIPSA-lab (CNRS UMR 5216), Grenoble, France
- Salvador Soto-Faraco
- Departament de Tecnologies de la Informació i les Comunicacions, Universitat Pompeu Fabra, Barcelona, Spain
- Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
8. Sounds can boost the awareness of visual events through attention without cross-modal integration. Sci Rep 2017;7:41684. PMID: 28139712. PMCID: PMC5282564. DOI: 10.1038/srep41684.
Abstract
Cross-modal interactions can lead to enhancement of visual perception, even for visual events below awareness. However, the underlying mechanism is still unclear. Can purely bottom-up cross-modal integration break through the threshold of awareness? We used a binocular rivalry paradigm to measure perceptual switches after brief flashes or sounds, which sometimes co-occurred. When flashes at the suppressed eye coincided with sounds, perceptual switches occurred earliest. Yet, contrary to the hypothesis of cross-modal integration, this facilitation never exceeded the prediction from probability summation of independent sensory signals. A follow-up experiment replicated the same pattern of results using silent gaps embedded in continuous noise instead of sounds. This manipulation should weaken putative sound-flash integration while keeping the gaps salient as bottom-up attention cues. Additional results showed that spatial congruency between flashes and sounds did not determine the effectiveness of cross-modal facilitation, which again was not better than probability summation. Thus, the present findings fail to fully support the hypothesis of bottom-up cross-modal integration, above and beyond the independent contribution of two transient signals, as an account for cross-modal enhancement of visual events below the level of awareness.
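The probability-summation benchmark referred to above has a simple form: for two independent transient signals, the cumulative probability of a switch by time t should not exceed F_A(t) + F_V(t) - F_A(t)·F_V(t). A rough Python sketch of that comparison is shown below; the variable names, time grid, and synthetic latencies are invented for illustration and are not the paper's analysis code.

```python
# Sketch of the probability-summation (independent-race) benchmark for
# switch latencies: compare the audio-visual ECDF against the prediction
# from the two unimodal ECDFs under independence.
import numpy as np

def ecdf(latencies, t_grid):
    """Empirical CDF of switch latencies evaluated on a common time grid."""
    latencies = np.sort(np.asarray(latencies))
    return np.searchsorted(latencies, t_grid, side="right") / latencies.size

def independent_race_bound(lat_sound, lat_flash, t_grid):
    """Prediction for two independent signals (probability summation)."""
    fa, fv = ecdf(lat_sound, t_grid), ecdf(lat_flash, t_grid)
    return fa + fv - fa * fv

# Usage with synthetic latencies (seconds)
rng = np.random.default_rng(1)
t_grid = np.linspace(0, 5, 101)
sound_only, flash_only = rng.gamma(4, 0.5, 200), rng.gamma(4, 0.4, 200)
audio_visual = rng.gamma(4, 0.35, 200)
violation = ecdf(audio_visual, t_grid) - independent_race_bound(sound_only, flash_only, t_grid)
print(violation.max())  # > 0 would indicate facilitation beyond probability summation
```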
9. Filingeri D, Ackerley R. The biology of skin wetness perception and its implications in manual function and for reproducing complex somatosensory signals in neuroprosthetics. J Neurophysiol 2017;117:1761-1775. PMID: 28123008. DOI: 10.1152/jn.00883.2016.
Abstract
Our perception of skin wetness is generated readily, yet humans have no known receptor (hygroreceptor) to signal this directly. It is easy to imagine the sensation of water running over our hands or the feel of rain on our skin. The synthetic sensation of wetness is thought to be produced from a combination of specific skin thermal and tactile inputs, registered through thermoreceptors and mechanoreceptors, respectively. The present review explores how thermal and tactile afference from the periphery can generate the percept of wetness centrally. We propose that the main signals include information about skin cooling, signaled primarily by thinly myelinated thermoreceptors, and rapid changes in touch, through fast-conducting, myelinated mechanoreceptors. Potential central sites for integration of these signals, and thus the perception of skin wetness, include the primary and secondary somatosensory cortices and the insula cortex. The interactions underlying these processes can also be modeled to aid in understanding and engineering the mechanisms. Furthermore, we discuss the role that sensing wetness could play in precision grip and the dexterous manipulation of objects. We expand on these lines of inquiry to the application of the knowledge in designing and creating skin sensory feedback in prosthetics. The addition of real-time, complex sensory signals would mark a significant advance in the use and incorporation of prosthetic body parts for amputees in everyday life.
NEW & NOTEWORTHY Little is known about the underlying mechanisms that generate the perception of skin wetness. Humans have no specific hygroreceptor, and thus temperature and touch information combine to produce wetness sensations. The present review covers the potential mechanisms leading to the perception of wetness, both peripherally and centrally, along with their implications for manual function. These insights are relevant to inform the design of neuroengineering interfaces, such as sensory prostheses for amputees.
Affiliation(s)
- Davide Filingeri
- Environmental Ergonomics Research Centre, Loughborough Design School, Loughborough University, Loughborough, United Kingdom
- Rochelle Ackerley
- Department of Physiology, University of Gothenburg, Göteborg, Sweden
- Laboratoire Neurosciences Intégratives et Adaptatives (UMR 7260), Aix Marseille Université-Centre National de la Recherche Scientifique, Marseille, France
10. Misselhorn J, Daume J, Engel AK, Friese U. A matter of attention: Crossmodal congruence enhances and impairs performance in a novel trimodal matching paradigm. Neuropsychologia 2015. PMID: 26209356. DOI: 10.1016/j.neuropsychologia.2015.07.022.
Abstract
A novel crossmodal matching paradigm including vision, audition, and somatosensation was developed in order to investigate the interaction between attention and crossmodal congruence in multisensory integration. To that end, all three modalities were stimulated concurrently while a bimodal focus was defined blockwise, and congruence between stimulus intensity changes in the attended modalities had to be evaluated. We found that crossmodal congruence improved performance if both the attended modalities and the task-irrelevant distractor were congruent. If the attended modalities were incongruent, the distractor impaired performance through its congruence relation to one of the attended modalities. The magnitude of crossmodal enhancement or impairment differed between attentional conditions: the largest crossmodal effects were seen for visual-tactile matching, intermediate effects for audio-visual matching, and the smallest effects for audio-tactile matching. We conclude that these differences in crossmodal matching likely reflect characteristics of the multisensory neural network architecture. We discuss our results with respect to the timing of perceptual processing and state hypotheses for future physiological studies. Finally, etiological questions are addressed.
Affiliation(s)
- Jonas Misselhorn
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246 Hamburg, Germany
- Jonathan Daume
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246 Hamburg, Germany
- Andreas K Engel
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246 Hamburg, Germany
- Uwe Friese
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246 Hamburg, Germany