1
Hashim S, Küssner MB, Weinreich A, Omigie D. The neuro-oscillatory profiles of static and dynamic music-induced visual imagery. Int J Psychophysiol 2024;199:112309. PMID: 38242363. DOI: 10.1016/j.ijpsycho.2024.112309.
Abstract
Visual imagery, i.e., seeing in the absence of the corresponding retinal input, has been linked to visual and motor processing areas of the brain. Music listening provides an ideal vehicle for exploring the neural correlates of visual imagery because it has been shown to reliably induce a broad variety of content, ranging from abstract shapes to dynamic scenes. Forty-two participants listened with closed eyes to twenty-four excerpts of music while a 15-channel EEG was recorded and, after each excerpt, rated the extent to which they experienced static and dynamic visual imagery. Our results show both static and dynamic imagery to be associated with posterior alpha suppression (especially in lower alpha) early in the onset of music listening, while static imagery was associated with an additional alpha enhancement later in the listening experience. With regard to the beta band, our results demonstrate beta enhancement in response to static imagery, but beta suppression followed by enhancement in response to dynamic imagery. We also observed a positive association, early in the listening experience, between gamma power and dynamic imagery ratings that was not present for static imagery ratings. Finally, we offer evidence that musical training may selectively drive the effects found with respect to static and dynamic imagery in the alpha, beta, and gamma bands. Taken together, our results show the promise of music listening as an effective stimulus for examining the neural correlates of visual imagery and its contents. Our study also highlights the relevance of future work seeking to study the temporal dynamics of music-induced visual imagery.
Affiliation(s)
- Sarah Hashim
- Department of Psychology, Goldsmiths, University of London, United Kingdom
- Mats B Küssner
- Department of Psychology, Goldsmiths, University of London, United Kingdom; Department of Musicology and Media Studies, Humboldt-Universität zu Berlin, Germany
- André Weinreich
- Department of Psychology, BSP Business & Law School Berlin, Germany
- Diana Omigie
- Department of Psychology, Goldsmiths, University of London, United Kingdom
2
Li J, Deng SW. Attentional focusing and filtering in multisensory categorization. Psychon Bull Rev 2024;31:708-720. PMID: 37673842. DOI: 10.3758/s13423-023-02370-7.
Abstract
Selective attention refers to the ability to focus on goal-relevant information while filtering out irrelevant information. In a multisensory context, how do people selectively attend to multiple inputs when making categorical decisions? Here, we examined the role of selective attention in cross-modal categorization in two experiments. In a speed categorization task, participants were asked to attend to visual or auditory targets and categorize them while ignoring other irrelevant stimuli. A response-time extended multinomial processing tree (RT-MPT) model was implemented to estimate the contribution of attentional focusing on task-relevant information and attentional filtering of distractors. The results indicated that the role of selective attention was modality-specific, with differences found in attentional focusing and filtering between visual and auditory modalities. Visual information could be focused on or filtered out more effectively, whereas auditory information was more difficult to filter out, causing greater interference with task-relevant performance. The findings suggest that selective attention plays a critical and differential role across modalities, which provides a novel and promising approach to understanding multisensory processing and attentional focusing and filtering mechanisms of categorical decision-making.
Affiliation(s)
- Jianhua Li
- Department of Psychology, University of Macau, Avenida da Universidade, Taipa, Macau
- Center for Cognitive and Brain Sciences, University of Macau, Taipa, Macau
- Sophia W Deng
- Department of Psychology, University of Macau, Avenida da Universidade, Taipa, Macau
- Center for Cognitive and Brain Sciences, University of Macau, Taipa, Macau
3
Harris AM, Eayrs JO, Lavie N. Establishing gaze markers of perceptual load during multi-target visual search. Cogn Res Princ Implic 2023;8:56. PMID: 37648839. PMCID: PMC10468466. DOI: 10.1186/s41235-023-00498-7.
Abstract
Highly automated technologies are increasingly incorporated into existing systems, for instance in advanced car models. Although highly automated modes permit non-driving activities (e.g. internet browsing), drivers are expected to reassume control upon a 'take over' signal from the automation. To assess a person's readiness for takeover, non-invasive eye tracking can indicate their attentive state based on properties of their gaze. Perceptual load is a well-established determinant of attention and perception; however, the effects of perceptual load on a person's ability to respond to a takeover signal, and the related gaze indicators, are not yet known. Here we examined how load-induced attentional state affects detection of a takeover-signal proxy, as well as the gaze properties that change with attentional state, in an ongoing task with no overt behaviour beyond eye movements (responding by lingering the gaze). Participants performed a multi-target visual search of either low perceptual load (shape targets) or high perceptual load (targets were two separate conjunctions of colour and shape), while also detecting occasional auditory tones (the proxy takeover signal). Across two experiments, we found that high perceptual load was associated with poorer search performance, slower detection of cross-modal stimuli, and longer fixation durations, while saccade amplitude did not consistently change with load. Using machine learning, we were able to predict the load condition from fixation duration alone. These results suggest that monitoring fixation duration may be useful in the design of systems that track users' attentional states and predict impaired user responses to stimuli outside the focus of attention.
Affiliation(s)
- Anthony M Harris
- Institute of Cognitive Neuroscience, University College London, London, UK
- Queensland Brain Institute, The University of Queensland, Brisbane, Australia
- Joshua O Eayrs
- Institute of Cognitive Neuroscience, University College London, London, UK
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Nilli Lavie
- Institute of Cognitive Neuroscience, University College London, London, UK
4
Pereira CM, Freire MAM, Santos JR, Guimarães JS, Dias-Florencio G, Santos S, Pereira A, Ribeiro S. Non-visual exploration of novel objects increases the levels of plasticity factors in the rat primary visual cortex. PeerJ 2018;6:e5678. PMID: 30370183. PMCID: PMC6202959. DOI: 10.7717/peerj.5678.
Abstract
Background: Historically, the primary sensory areas of the cerebral cortex have been exclusively associated with the processing of a single sensory modality. Yet the presence of tactile responses in the primary visual (V1) cortex has challenged this view, leading to the notion that primary sensory areas engage in cross-modal processing, and that the associated circuitry is modifiable by such activity. To explore this notion, here we assessed whether the exploration of novel objects in the dark induces the activation of plasticity markers in the V1 cortex of rats.
Methods: Adult rats were allowed to freely explore for 20 min a completely dark box with four novel objects of different shapes and textures. Animals were euthanized either 1 (n = 5) or 3 h (n = 5) after exploration. A control group (n = 5) was placed for 20 min in the same environment, but without the objects. Frontal sections of the brains were submitted to immunohistochemistry to measure protein levels of egr-1, c-fos, and phosphorylated calcium/calmodulin-dependent kinase II (pCaMKII) in the V1 cortex.
Results: The number of neurons labeled with monoclonal antibodies against c-fos, egr-1, or pCaMKII increased significantly in the V1 cortex one hour after exploration in the dark. Three hours after exploration, the number of labeled neurons had returned to basal levels.
Conclusions: Our results suggest that non-visual exploration induces the activation of immediate-early genes in the V1 cortex, which is suggestive of cross-modal processing in this area. Moreover, the increase in the number of neurons labeled with pCaMKII may signal a condition promoting synaptic plasticity.
Affiliation(s)
- Catia M Pereira
- Instituto Internacional de Neurociências de Natal Edmond e Lily Safra, Macaiba, RN, Brasil
- Marco Aurelio M Freire
- Programa de Pós-graduação em Saúde e Sociedade, Universidade do Estado do Rio Grande do Norte, Mossoró, RN, Brasil
- José R Santos
- Departamento de Biociências, Universidade Federal de Sergipe, Itabaiana, SE, Brasil
- Sharlene Santos
- Instituto do Cérebro, Universidade Federal do Rio Grande do Norte, Natal, RN, Brasil
- Antonio Pereira
- Faculdade de Engenharia Elétrica, Universidade Federal do Pará, Belém, PA, Brasil
- Sidarta Ribeiro
- Instituto do Cérebro, Universidade Federal do Rio Grande do Norte, Natal, RN, Brasil
5
Manfredi M, Cohn N, De Araújo Andreoli M, Boggio PS. Listening beyond seeing: Event-related potentials to audiovisual processing in visual narrative. Brain Lang 2018;185:1-8. PMID: 29986168. DOI: 10.1016/j.bandl.2018.06.008.
Abstract
Every day we integrate meaningful information coming from different sensory modalities, and previous work has debated whether conceptual knowledge is represented in modality-specific neural stores specialized for specific types of information, and/or in an amodal, shared system. In the current study, we investigated semantic processing through a cross-modal paradigm which asked whether auditory semantic processing could be modulated by the constraints of context built up across a meaningful visual narrative sequence. We recorded event-related brain potentials (ERPs) to auditory words and sounds associated with events in visual narratives, i.e., seeing images of someone spitting while hearing either a word (Spitting!) or a sound (the sound of spitting), which were either semantically congruent or incongruent with the climactic visual event. Our results showed that both incongruent sounds and words evoked an N400 effect; however, the distribution of the N400 effect to words (centro-parietal) differed from that of sounds (frontal). In addition, the N400 to words had an earlier latency than that to sounds. Despite these differences, a sustained late frontal negativity followed the N400s and did not differ between modalities. These results support the idea that semantic memory balances a distributed cortical network accessible from multiple modalities, yet also engages amodal processing insensitive to specific modalities.
Affiliation(s)
- Mirella Manfredi
- Social and Cognitive Neuroscience Laboratory, Center for Biological Science and Health, Mackenzie Presbyterian University, São Paulo, Brazil
- Neil Cohn
- Tilburg Center for Cognition and Communication, Tilburg University, Tilburg, Netherlands
- Mariana De Araújo Andreoli
- Social and Cognitive Neuroscience Laboratory, Center for Biological Science and Health, Mackenzie Presbyterian University, São Paulo, Brazil
- Paulo Sergio Boggio
- Social and Cognitive Neuroscience Laboratory, Center for Biological Science and Health, Mackenzie Presbyterian University, São Paulo, Brazil
6
Barnhart WR, Rivera S, Robinson CW. Different patterns of modality dominance across development. Acta Psychol (Amst) 2018;182:154-165. PMID: 29179020. DOI: 10.1016/j.actpsy.2017.11.017.
Abstract
The present study sought to better understand how children, young adults, and older adults attend and respond to multisensory information. In Experiment 1, young adults were presented with two spoken words, two pictures, or two word-picture pairings and they had to determine if the two stimuli/pairings were exactly the same or different. Pairing the words and pictures together slowed down visual but not auditory response times and delayed the latency of first fixations, both of which are consistent with a proposed mechanism underlying auditory dominance. Experiment 2 examined the development of modality dominance in children, young adults, and older adults. Cross-modal presentation attenuated visual accuracy and slowed down visual response times in children, whereas older adults showed the opposite pattern, with cross-modal presentation attenuating auditory accuracy and slowing down auditory response times. Cross-modal presentation also delayed first fixations in children and young adults. Mechanisms underlying modality dominance and multisensory processing are discussed.
7
Abstract
The cognitive architecture routinely relies on expectancy mechanisms to process the plausibility of stimuli and establish their sequential congruency. In two computer mouse-tracking experiments, we use a cross-modal verification task to uncover the interaction between plausibility and congruency by examining their temporal signatures of activation competition as expressed in a computer-mouse movement decision response. In this task, participants verified the content congruency of sentence and scene pairs that varied in plausibility. The order of presentation (sentence-scene, scene-sentence) was varied between participants to uncover any differential processing. Our results show that implausible but congruent stimuli triggered less accurate and slower responses than implausible and incongruent stimuli, and were associated with more complex angular mouse trajectories independent of the order of presentation. This study provides novel evidence of a dissociation between the temporal signatures of plausibility and congruency detection on decision responses.
8
Abstract
An ongoing issue in visual cognition concerns the roles played by low- and high-level information in guiding visual attention, with current research remaining inconclusive about the interaction between the two. In this study, we bring fresh evidence into this long-standing debate by investigating visual saliency and contextual congruency during object naming (Experiment 1), a task in which visual processing interacts with language processing. We then compare the results of this experiment to data from a memorization task using the same stimuli (Experiment 2). In Experiment 1, we find that both saliency and congruency influence visual and naming responses and interact with linguistic factors. In particular, incongruent objects are fixated later and less often than congruent ones. However, saliency is a significant predictor of object naming, with salient objects being named earlier in a trial. Furthermore, the saliency and congruency of a named object interact with the lexical frequency of the associated word and mediate the time-course of fixations at naming. In Experiment 2, we find a similar overall pattern in the eye-movement responses, but only the congruency of the target is a significant predictor, with incongruent targets fixated less often than congruent targets. Crucially, this finding contrasts with claims in the literature that incongruent objects are more informative than congruent objects because they deviate from scene context and hence require longer processing. Overall, this study suggests that different sources of information are used interactively to guide visual attention to the targets to be named, and raises new questions for existing theories of visual attention.
Affiliation(s)
- Moreno I Coco
- School of Informatics (ILCC), University of Edinburgh, Edinburgh, UK