1. Baykan C, Schütz AC. Electroencephalographic Responses to the Number of Objects in Partially Occluded and Uncovered Scenes. J Cogn Neurosci 2025; 37:227-238. PMID: 39436218; PMCID: PMC7617299; DOI: 10.1162/jocn_a_02264.
Abstract
Perceptual completion is ubiquitous when estimating properties such as the shape, size, or number of objects in partially occluded scenes. Behavioral experiments showed that the number of hidden objects is underestimated in partially occluded scenes compared with an estimation based on the density of visible objects and the amount of occlusion. It is still unknown at which processing level this (under)estimation of the number of hidden objects occurs. We studied this question using a passive viewing task in which observers viewed a game board that was initially partially occluded and later was uncovered to reveal its hidden parts. We simultaneously measured the electroencephalographic responses to the partially occluded board presentation and its uncovering. We hypothesized that if the underestimation is a result of early sensory processing, it would be observed in P1 and N1 activity, whereas if it arises from higher-level processes such as expectancy, it would be reflected in P3 activity. Our data showed that P1 amplitude increased with numerosity in both occluded and uncovered states, indicating a link between P1 and simple stimulus features. The N1 amplitude was highest when both the initially visible and uncovered areas of the board were completely filled with game pieces, suggesting that the N1 component is sensitive to the overall Gestalt. Finally, we observed that P3 activity was reduced when the density of game pieces in the uncovered parts matched the initially visible parts, implying a relationship between the P3 component and expectation mismatch. Overall, our results suggest that inferences about the number of hidden items are reflected in high-level processing.
2. Del Gatto C, Indraccolo A, Pedale T, Brunetti R. Crossmodal interference on counting performance: Evidence for shared attentional resources. PLoS One 2023; 18:e0294057. PMID: 37948407; PMCID: PMC10637692; DOI: 10.1371/journal.pone.0294057.
Abstract
During the act of counting, our perceptual system may rely on information coming from different sensory channels. However, when the information coming from different sources is discordant, such as in the case of a de-synchronization between visual stimuli to be counted and irrelevant auditory stimuli, performance in a sequential counting task might deteriorate. Such deterioration may originate from two different mechanisms, both linked to exogenous attention attracted by auditory stimuli. Indeed, exogenous auditory triggers may infiltrate our internal "counter", interfering with the counting process and resulting in an overcount; alternatively, the exogenous auditory triggers may disrupt the internal "counter" by deviating participants' attention from the visual stimuli, resulting in an undercount. We tested these hypotheses by asking participants to count visual discs sequentially appearing on the screen while listening to task-irrelevant sounds, in systematically varied conditions: visual stimuli could be synchronized or de-synchronized with sounds; they could feature regular or irregular pacing; and their presentation speed could be fast (approx. 3/sec), moderate (approx. 2/sec), or slow (approx. 1.5/sec). Our results support the second hypothesis, since participants tended to undercount visual stimuli in all harder conditions (de-synchronized, irregular, fast sequences). We discuss these results in detail, adding novel elements to the study of crossmodal interference.
Affiliation(s)
- Claudia Del Gatto: Experimental and Applied Psychology Laboratory, Department of Human Sciences, Università Europea di Roma, Rome, Italy
- Allegra Indraccolo: Experimental and Applied Psychology Laboratory, Department of Human Sciences, Università Europea di Roma, Rome, Italy
- Tiziana Pedale: Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy; Functional Neuroimaging Laboratory, Fondazione Santa Lucia, IRCCS, Rome, Italy
- Riccardo Brunetti: Experimental and Applied Psychology Laboratory, Department of Human Sciences, Università Europea di Roma, Rome, Italy
3. Bröhl F, Keitel A, Kayser C. MEG Activity in Visual and Auditory Cortices Represents Acoustic Speech-Related Information during Silent Lip Reading. eNeuro 2022; 9:ENEURO.0209-22.2022. PMID: 35728955; PMCID: PMC9239847; DOI: 10.1523/eneuro.0209-22.2022.
Abstract
Speech is an intrinsically multisensory signal, and seeing the speaker's lips forms a cornerstone of communication in acoustically impoverished environments. Still, it remains unclear how the brain exploits visual speech for comprehension. Previous work debated whether lip signals are mainly processed along the auditory pathways or whether the visual system directly implements speech-related processes. To probe this, we systematically characterized dynamic representations of multiple acoustic and visual speech-derived features in source-localized MEG recordings obtained while participants listened to speech or viewed silent speech. Using a mutual-information framework, we provide a comprehensive assessment of how well temporal and occipital cortices reflect the physically presented signals and unique aspects of acoustic features that were physically absent but may be critical for comprehension. Our results demonstrate that both cortices feature a functionally specific form of multisensory restoration: during lip reading, they reflect unheard acoustic features, independent of co-existing representations of the visible lip movements. This restoration emphasizes the unheard pitch signature in occipital cortex and the speech envelope in temporal cortex, and it is predictive of lip-reading performance. These findings suggest that when seeing the speaker's lips, the brain engages both visual and auditory pathways to support comprehension by exploiting multisensory correspondences between lip movements and spectro-temporal acoustic cues.
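The mutual-information framework mentioned in this abstract quantifies how much a neural signal tells us about a speech feature. As a rough illustration only, and not the authors' actual source-localized MEG pipeline, a plug-in estimator over two discretized (binned) sequences might look like:

```python
import math
from collections import Counter

def mutual_information(x, y):
    """Plug-in estimate of mutual information (in bits) between two
    discretized sequences, e.g. binned neural activity vs. a binned
    speech feature such as the envelope."""
    n = len(x)
    pxy = Counter(zip(x, y))           # joint counts
    px, py = Counter(x), Counter(y)    # marginal counts
    mi = 0.0
    for (a, b), c in pxy.items():
        # p(a,b) * log2( p(a,b) / (p(a) * p(b)) ), written with counts
        mi += (c / n) * math.log2(c * n / (px[a] * py[b]))
    return mi

# Identical binary sequences share exactly 1 bit of information.
x = [0, 1, 0, 1, 0, 1, 0, 1]
print(mutual_information(x, x))  # 1.0
```

In practice, MEG studies use far more careful estimators (bias correction, copula or binning strategies); this sketch only shows the quantity being estimated.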
Affiliation(s)
- Felix Bröhl: Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Bielefeld 33615, Germany
- Anne Keitel: Psychology, University of Dundee, Dundee DD1 4HN, United Kingdom
- Christoph Kayser: Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Bielefeld 33615, Germany
4. Billings J, Tivadar R, Murray MM, Franceschiello B, Petri G. Topological Features of Electroencephalography are Robust to Re-referencing and Preprocessing. Brain Topogr 2022; 35:79-95. PMID: 35001322; DOI: 10.1007/s10548-021-00882-w.
Abstract
Electroencephalography (EEG) is among the most widely diffused, inexpensive, and adopted neuroimaging techniques. Nonetheless, EEG requires measurement against one or more reference sites, typically chosen by the experimenter, and specific pre-processing steps precede analyses. It is therefore valuable to obtain quantities that are minimally affected by reference and pre-processing choices. Here, we show that the topological structure of embedding spaces, constructed either from multi-channel EEG timeseries or from their temporal structure, is subject-specific and robust to re-referencing and pre-processing pipelines. By contrast, the shape of correlation spaces, that is, discrete spaces where each point represents an electrode and the distance between points is related to the correlation between the respective timeseries, was neither significantly subject-specific nor robust to changes of reference. Our results suggest that the shape of spaces describing the observed configurations of EEG signals holds information about the specificity of the underlying individual's brain dynamics, and that temporal correlations constrain to a large degree the set of possible dynamics. In turn, these encode the differences between subjects' spaces of resting-state EEG signals. Finally, our results and proposed methodology provide tools to explore individual topographical landscapes and how they are explored dynamically. We therefore propose to augment conventional topographic analyses with an additional, topological, level of analysis, and to consider them jointly. More generally, these results provide a roadmap for the incorporation of topological analyses within EEG pipelines.
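One common way to build the kind of embedding space this abstract refers to, for a single channel, is time-delay embedding, after which topological features (e.g. persistent homology) can be computed on the resulting point cloud. The sketch below is illustrative only; the `dim` and `tau` values are hypothetical choices, not parameters taken from the paper:

```python
def delay_embed(series, dim, tau):
    """Map a scalar timeseries to points in R^dim by stacking
    lagged copies of the signal (Takens-style delay embedding)."""
    n_points = len(series) - (dim - 1) * tau
    return [tuple(series[i + k * tau] for k in range(dim))
            for i in range(n_points)]

# Each point pairs a sample with its lagged successor(s).
print(delay_embed([1, 2, 3, 4, 5], dim=2, tau=1))
# [(1, 2), (2, 3), (3, 4), (4, 5)]
```

The shape of this point cloud, rather than the raw voltages, is what a topological pipeline would then summarize, which is why such descriptors can survive re-referencing.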
Affiliation(s)
- Jacob Billings: ISI Foundation, Turin, Italy; Department of Complex Systems, Institute for Computer Science, Czech Academy of Science, Prague, Czechia
- Ruxandra Tivadar: Laboratory for Investigative Neurophysiology, Department of Radiology, Lausanne University Hospital and University of Lausanne (CHUV-UNIL), Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des aveugles and University of Lausanne, Lausanne, Switzerland; Cognitive Computational Neuroscience Group, Institute for Computer Science, University of Bern, Bern, Switzerland
- Micah M Murray: Laboratory for Investigative Neurophysiology, Department of Radiology, Lausanne University Hospital and University of Lausanne (CHUV-UNIL), Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des aveugles and University of Lausanne, Lausanne, Switzerland; EEG CHUV-UNIL Section, CIBM Center for Biomedical Imaging, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Benedetta Franceschiello: Laboratory for Investigative Neurophysiology, Department of Radiology, Lausanne University Hospital and University of Lausanne (CHUV-UNIL), Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des aveugles and University of Lausanne, Lausanne, Switzerland; EEG CHUV-UNIL Section, CIBM Center for Biomedical Imaging, Lausanne, Switzerland
- Giovanni Petri: ISI Foundation, Turin, Italy; ISI Global Science Foundation, New York, NY, USA
5. Canbeyli R. Sensory Stimulation Via the Visual, Auditory, Olfactory and Gustatory Systems Can Modulate Mood and Depression. Eur J Neurosci 2021; 55:244-263. PMID: 34708453; DOI: 10.1111/ejn.15507.
Abstract
Depression is one of the most common mental disorders, predicted to be the leading cause of disease burden by the next decade. There is a great deal of emphasis on the central origin and potential therapeutics of depression, whereby the symptomatology of depression has been interpreted and treated as brain-generated dysfunctions filtering down to the periphery. This top-down approach has found strong support in clinical work and basic neuroscientific research. Nevertheless, despite great advances in our knowledge of the etiology and therapeutics of depression, success in treatment is still by no means assured. As a consequence, a wide net has been cast by both clinicians and researchers in search of more efficient therapies for mood disorders. As a complementary view, the present integrative review advocates approaching mood and depression from the opposite perspective: a bottom-up view that starts from the periphery. Specifically, evidence is provided to show that sensory stimulation via the visual, auditory, olfactory and gustatory systems can modulate depression. The review shows how, depending on several parameters, unisensory stimulation via these modalities can ameliorate or aggravate depressive symptoms. Moreover, the review emphasizes the bidirectional relationship between sensory stimulation and depression. Just as peripheral stimulation can modulate depression, depression in turn affects, and in most cases impairs, sensory reception. Furthermore, the review suggests that combined use of multisensory stimulation may have synergistic ameliorative effects on depressive symptoms over and above what has so far been documented for unisensory stimulation.
Affiliation(s)
- Resit Canbeyli: Behavioral Neuroscience Laboratory, Department of Psychology, Boğaziçi University
6. Perceived Loudness Sensitivity Influenced by Brightness in Urban Forests: A Comparison When Eyes Were Opened and Closed. Forests 2020. DOI: 10.3390/f11121242.
Abstract
Soundscape plays a positive, health-related role in urban forests, and there is a competitive allocation of cognitive resources between soundscapes and lightscapes. This study aimed to explore the relationship between perceived loudness sensitivity and brightness in urban forests under eyes-open and eyes-closed conditions. Questionnaires and measuring equipment were used to gather soundscape and lightscape information at 44 observation sites in urban forested areas. Diurnal variations, Pearson's correlations, and formula derivations were then used to analyze the relationship between perception sensitivities and how perceived loudness sensitivity was influenced by lightscape. Our results suggested that soundscape variation plays a role in audio-visual perception in urban forests. Our findings also showed a gap in perception sensitivity between loudness and brightness, with a boundary between two opposite conditions at 1.24 dBA. Furthermore, we found that the effect of brightness on perceived loudness sensitivity was limited when variations in brightness were sequential and weak. These findings can facilitate the understanding of individual perception of soundscape and lightscape in urban forests when proposing suitable design plans.
7. Tivadar RI, Gaglianese A, Murray MM. Auditory Enhancement of Illusory Contour Perception. Multisens Res 2020; 34:1-15. PMID: 33706283; DOI: 10.1163/22134808-bja10018.
Abstract
Illusory contours (ICs) are borders that are perceived in the absence of contrast gradients. Until recently, IC processes were considered exclusively visual in nature and presumed to be unaffected by information from other senses. Electrophysiological data in humans indicate that sounds can enhance IC processes. Despite cross-modal enhancement being observed at the neurophysiological level, to date there has been no evidence of direct amplification of behavioural performance in IC processing by sounds. We addressed this knowledge gap. Healthy adults (n = 15) discriminated instances when inducers were arranged to form an IC from instances when no IC was formed (NC). Inducers were low-contrast and masked, and there was continuous background acoustic noise throughout a block of trials. On half of the trials, i.e., independently of IC vs. NC, a 1000-Hz tone was presented synchronously with the inducer stimuli. Sound presence improved the accuracy of indicating when an IC was presented, but had no impact on performance with NC stimuli (significant IC presence/absence × sound presence/absence interaction). There was no evidence that this was due to general alerting or to a speed-accuracy trade-off (no main effect of sound presence on accuracy rates and no comparable significant interaction on reaction times). Moreover, sound presence increased sensitivity and reduced bias on the IC vs. NC discrimination task. These results demonstrate that multisensory processes augment mid-level visual functions, exemplified by IC processes. Aside from their impact on neurobiological and computational models of vision, our findings may prove clinically beneficial for low-vision or sight-restored patients.
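The sensitivity and bias effects reported here are standard signal-detection quantities. A minimal sketch of how d′ (sensitivity) and criterion c (bias) are computed from hit and false-alarm rates; the rates in the example are hypothetical, not the study's data:

```python
from statistics import NormalDist

_z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def dprime(hit_rate, fa_rate):
    """Sensitivity: separation (in z units) between the signal and
    noise distributions; larger means better discrimination."""
    return _z(hit_rate) - _z(fa_rate)

def criterion(hit_rate, fa_rate):
    """Response bias: positive values indicate a conservative
    (yes-averse) criterion, negative a liberal one."""
    return -0.5 * (_z(hit_rate) + _z(fa_rate))

# Hypothetical rates for an IC-present vs. IC-absent discrimination.
print(dprime(0.80, 0.30), criterion(0.80, 0.30))
```

A sound that raises d′ without moving c would correspond to the "increased sensitivity, reduced bias was separate" pattern of effects the abstract describes, i.e. a genuine perceptual benefit rather than a shifted response strategy.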
Affiliation(s)
- Ruxandra I Tivadar: The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, University Hospital Center and University of Lausanne, 1011 Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne and Fondation Asile des aveugles, Lausanne, Switzerland
- Anna Gaglianese: The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, University Hospital Center and University of Lausanne, 1011 Lausanne, Switzerland; Spinoza Centre for Neuroimaging, Amsterdam, The Netherlands
- Micah M Murray: The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, University Hospital Center and University of Lausanne, 1011 Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne and Fondation Asile des aveugles, Lausanne, Switzerland; Sensory, Perceptual and Cognitive Neuroscience Section, Center for Biomedical Imaging (CIBM), University Hospital Center and University of Lausanne, 1011 Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
8. Selective attention to sound features mediates cross-modal activation of visual cortices. Neuropsychologia 2020; 144:107498. PMID: 32442445; DOI: 10.1016/j.neuropsychologia.2020.107498.
Abstract
Contemporary schemas of brain organization now include multisensory processes both in low-level cortices as well as at early stages of stimulus processing. Evidence has also accumulated showing that unisensory stimulus processing can result in cross-modal effects. For example, task-irrelevant and lateralised sounds can activate visual cortices; a phenomenon referred to as the auditory-evoked contralateral occipital positivity (ACOP). Some claim this is an example of automatic attentional capture in visual cortices. Other results, however, indicate that context may play a determinant role. Here, we investigated whether selective attention to spatial features of sounds is a determining factor in eliciting the ACOP. We recorded high-density auditory evoked potentials (AEPs) while participants selectively attended and discriminated sounds according to four possible stimulus attributes: location, pitch, speaker identity or syllable. Sound acoustics were held constant, and their location was always equiprobable (50% left, 50% right). The only manipulation was to which sound dimension participants attended. We analysed the AEP data from healthy participants within an electrical neuroimaging framework. The presence of sound-elicited activations of visual cortices depended on the to-be-discriminated, goal-based dimension. The ACOP was elicited only when participants were required to discriminate sound location, but not when they attended to any of the non-spatial features. These results provide a further indication that the ACOP is not automatic. Moreover, our findings showcase the interplay between task-relevance and spatial (un)predictability in determining the presence of the cross-modal activation of visual cortices.
9. Individual Differences in Multisensory Interactions: The Influence of Temporal Phase Coherence and Auditory Salience on Visual Contrast Sensitivity. Vision (Basel) 2020; 4:vision4010012. PMID: 32033350; PMCID: PMC7157667; DOI: 10.3390/vision4010012.
Abstract
While previous research has investigated key factors contributing to multisensory integration in isolation, relatively little is known about how these factors interact, especially when considering the enhancement of visual contrast sensitivity by a task-irrelevant sound. Here we explored how auditory stimulus properties, namely salience and temporal phase coherence in relation to the visual target, jointly affect the extent to which a sound can enhance visual contrast sensitivity. Visual contrast sensitivity was measured with a psychophysical task in which human adult participants reported the location of a visual Gabor pattern presented at various contrast levels. We expected the greatest enhancement of contrast sensitivity (i.e., the lowest contrast threshold) when the visual stimulus was accompanied by a task-irrelevant sound that was weak in auditory salience and modulated in phase with the visual stimulus (strong temporal phase coherence). Our expectations were confirmed, but only once we accounted for individual differences in the optimal auditory salience level for inducing maximal multisensory enhancement. Our findings highlight the importance of interactions between temporal phase coherence and stimulus effectiveness in determining the strength of multisensory enhancement of visual contrast, as well as the importance of accounting for individual differences.
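Contrast sensitivity in tasks like this is usually summarized as the contrast at which a psychometric function crosses a target performance level; a lower threshold means higher sensitivity. The sketch below is a generic illustration under assumed values; the Weibull form and every parameter (`alpha`, `beta`, the 75% target) are hypothetical, not the study's fitted model:

```python
import math

def weibull(c, alpha, beta, guess=0.5, lapse=0.02):
    """Probability of a correct location report at contrast c in a
    two-alternative task (chance performance = 0.5)."""
    return guess + (1 - guess - lapse) * (1 - math.exp(-((c / alpha) ** beta)))

def contrast_threshold(p_target, alpha, beta):
    """Invert the psychometric function by bisection: the contrast
    at which performance first reaches p_target proportion correct."""
    lo, hi = 1e-9, 1.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if weibull(mid, alpha, beta) < p_target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A sound condition that yields a lower threshold than baseline has
# enhanced contrast sensitivity.
print(contrast_threshold(0.75, alpha=0.10, beta=2.0))
```

In this framing, the study's result amounts to the threshold shift depending jointly on the sound's salience and its phase coherence with the Gabor, with the optimal salience differing across observers.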
10. Brain mechanisms for perceiving illusory lines in humans. Neuroimage 2018; 181:182-189. PMID: 30008430; DOI: 10.1016/j.neuroimage.2018.07.017.
Abstract
Illusory contours (ICs) are perceptions of visual borders despite absent contrast gradients. The psychophysical and neurobiological mechanisms of IC processes have been studied across species and diverse brain imaging/mapping techniques. Nonetheless, debate continues regarding whether IC sensitivity results from a (presumably) feedforward process within low-level visual cortices (V1/V2) or instead are processed first within higher-order brain regions, such as lateral occipital cortices (LOC). Studies in animal models, which generally favour a feedforward mechanism within V1/V2, have typically involved stimuli inducing IC lines. By contrast, studies in humans generally favour a mechanism where IC sensitivity is mediated by LOC and have typically involved stimuli inducing IC forms or shapes. Thus, the particular stimulus features used may strongly contribute to the model of IC sensitivity supported. To address this, we recorded visual evoked potentials (VEPs) while presenting human observers with an array of 10 inducers within the central 5°, two of which could be oriented to induce an IC line on a given trial. VEPs were analysed using an electrical neuroimaging framework. Sensitivity to the presence vs. absence of centrally-presented IC lines was first apparent at ∼200 ms post-stimulus onset and was evident as topographic differences across conditions. We also localized these differences to the LOC. The timing and localization of these effects are consistent with a model of IC sensitivity commencing within higher-level visual cortices. We propose that prior observations of effects within lower-tier cortices (V1/V2) are the result of feedback from IC sensitivity that originates instead within higher-tier cortices (LOC).