1. Retsa C, Turpin H, Geiser E, Ansermet F, Müller-Nix C, Murray MM. Longstanding Auditory Sensory and Semantic Differences in Preterm Born Children. Brain Topogr 2024;37:536-551. PMID: 38010487; PMCID: PMC11199270; DOI: 10.1007/s10548-023-01022-2.
Abstract
More than 10% of births are preterm, and the long-term consequences for the sensory and semantic processing of non-linguistic information remain poorly understood. Seventeen very preterm-born children (born at < 33 weeks gestational age) and 15 full-term controls were tested at 10 years of age with an auditory object recognition task, while 64-channel auditory evoked potentials (AEPs) were recorded. Sounds consisted of living (animal and human vocalizations) and manmade objects (e.g. household objects, instruments, and tools). Despite similar recognition behavior, AEPs strikingly differed between full-term and preterm children. Starting at 50 ms post-stimulus onset, AEPs from preterm children differed topographically from those of their full-term counterparts. Over the 108-224 ms post-stimulus period, full-term children showed stronger AEPs in response to living objects, whereas preterm-born children showed the reverse pattern, i.e., stronger AEPs in response to manmade objects. Differential brain activity between semantic categories could reliably classify children according to their preterm status. Moreover, this opposing pattern of differential responses to semantic categories of sounds was also observed in source estimations within a network of occipital, temporal, and frontal regions. This study highlights how early-life experience, in terms of preterm birth, shapes sensory and object processing later in life.
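The reported classification of children by preterm status from category-differential AEPs suggests a simple decoding analysis. The sketch below is illustrative only (not the authors' actual pipeline): it assumes a per-child array of living-minus-manmade AEP amplitudes over 64 channels and scores a logistic-regression classifier with leave-one-out cross-validation; all variable names, shapes, and the placeholder data are assumptions.

```python
# Illustrative sketch: decoding preterm status from each child's
# living-minus-manmade AEP difference (placeholder data, assumed shapes).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

n_children, n_channels = 32, 64            # 17 preterm + 15 full-term children, 64 channels
rng = np.random.default_rng(0)

# diff_aep[i, ch]: child i's mean AEP amplitude difference (living - manmade),
# e.g. averaged over the 108-224 ms window (random placeholder values here).
diff_aep = rng.normal(size=(n_children, n_channels))
is_preterm = np.array([1] * 17 + [0] * 15)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, diff_aep, is_preterm, cv=LeaveOneOut())
print(f"Leave-one-out classification accuracy: {scores.mean():.2f}")
```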
Affiliation(s)
- Chrysa Retsa
  - The Radiology Department, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
  - The Sense Innovation and Research Center, Lausanne and Sion, Lausanne, Switzerland
  - CIBM Center for Biomedical Imaging, Lausanne, Switzerland
- Hélène Turpin
  - The Radiology Department, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
  - University Service of Child and Adolescent Psychiatry, University Hospital of Lausanne and University of Lausanne, Lausanne, Switzerland
- Eveline Geiser
  - The Radiology Department, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- François Ansermet
  - University Service of Child and Adolescent Psychiatry, University Hospital of Lausanne and University of Lausanne, Lausanne, Switzerland
  - Department of Child and Adolescent Psychiatry, University Hospital, Geneva, Switzerland
- Carole Müller-Nix
  - University Service of Child and Adolescent Psychiatry, University Hospital of Lausanne and University of Lausanne, Lausanne, Switzerland
- Micah M Murray
  - The Radiology Department, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
  - The Sense Innovation and Research Center, Lausanne and Sion, Lausanne, Switzerland
  - CIBM Center for Biomedical Imaging, Lausanne, Switzerland
  - Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
2. Bertonati G, Amadeo MB, Campus C, Gori M. Task-dependent spatial processing in the visual cortex. Hum Brain Mapp 2023;44:5972-5981. PMID: 37811869; PMCID: PMC10619374; DOI: 10.1002/hbm.26489.
Abstract
To solve spatial tasks, the human brain asks for support from the visual cortices. Nonetheless, representing spatial information is not fixed but depends on the reference frames in which the spatial inputs are involved. The present study investigates how the kind of spatial representations influences the recruitment of visual areas during multisensory spatial tasks. Our study tested participants in an electroencephalography experiment involving two audio-visual (AV) spatial tasks: a spatial bisection, in which participants estimated the relative position in space of an AV stimulus in relation to the position of two other stimuli, and a spatial localization, in which participants localized one AV stimulus in relation to themselves. Results revealed that spatial tasks specifically modulated the occipital event-related potentials (ERPs) after the onset of the stimuli. We observed a greater contralateral early occipital component (50-90 ms) when participants solved the spatial bisection, and a more robust later occipital response (110-160 ms) when they processed the spatial localization. This observation suggests that different spatial representations elicited by multisensory stimuli are sustained by separate neurophysiological mechanisms.
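As an illustration of the kind of measure contrasted here, the sketch below computes mean contralateral occipital ERP amplitude in the early (50-90 ms) and later (110-160 ms) windows for each task. The sampling rate, trial counts, and data are placeholders rather than values from the study.

```python
# Minimal sketch (assumed data): window-mean contralateral occipital amplitude
# per time window and per task (bisection vs. localization).
import numpy as np

sfreq = 512.0
times = np.arange(-0.1, 0.4, 1 / sfreq)    # epoch from -100 to +400 ms
rng = np.random.default_rng(7)

# erp[task][trial, time]: contralateral occipital signal (placeholder noise).
erp = {task: rng.normal(size=(100, times.size)) for task in ("bisection", "localization")}
windows = {"early (50-90 ms)": (0.05, 0.09), "late (110-160 ms)": (0.11, 0.16)}

for label, (t0, t1) in windows.items():
    mask = (times >= t0) & (times <= t1)
    for task, data in erp.items():
        print(f"{label:>18} | {task:<12}: {data[:, mask].mean():+.3f} µV")
```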
Affiliation(s)
- G. Bertonati
  - Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
  - Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), Università degli Studi di Genova, Genoa, Italy
- M. B. Amadeo
  - Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- C. Campus
  - Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- M. Gori
  - Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
3. Schramm M, Goregliad Fjaellingsdal T, Aslan B, Jung P, Lux S, Schulze M, Philipsen A. Electrophysiological evidence for increased auditory crossmodal activity in adult ADHD. Front Neurosci 2023;17:1227767. PMID: 37706153; PMCID: PMC10495991; DOI: 10.3389/fnins.2023.1227767.
Abstract
Background: Attention-deficit/hyperactivity disorder (ADHD) is a neurodevelopmental disorder characterized by core symptoms of inattention and/or impulsivity and hyperactivity. In order to understand the basis of this multifaceted disorder, the investigation of sensory processing aberrancies has recently attracted growing interest. For example, during the processing of auditory stimuli, comparably low sensory thresholds account for symptoms such as higher distractibility and auditory hypersensitivity in patients with ADHD. It has further been shown that deficiencies exist not only at an intramodal but also at a multimodal level. There is evidence that the visual cortex shows more activation during a focused auditory task in adults with ADHD than in healthy controls. This crossmodal activation is interpreted as the reallocation of attentional resources to the visual domain as well as deficient sensory inhibition. In this study, we used, for the first time, electroencephalography to identify potentially abnormally regulated crossmodal activation in adult ADHD.
Methods: Fifteen adults with clinically diagnosed ADHD and 14 healthy controls comparable in age and gender were included. The ERP components P50, P100, N100, P200 and N200 were measured during a unimodal auditory and a visual discrimination task in a block design. Sensory profiles and ADHD symptoms were assessed with inattention and childhood ADHD scores. To evaluate intramodal and crossmodal activations, four EEG channels were chosen for statistical analysis and group-wise comparison.
Results: At the occipital channel O2, which reflects possible crossmodal activations, a significantly enhanced P200 amplitude was measured in the patient group. At the intramodal channels, a significantly enhanced N200 amplitude was observed in the control group. Statistical analysis of the behavioral data showed poorer performance and higher discrimination thresholds in subjects with ADHD. Furthermore, correlating the assessed sensory profiles with the EEG parameters revealed a negative correlation between the P200 component and sensation-seeking behavior.
Conclusion: Our findings show increased auditory crossmodal activity that might reflect altered allocation of stimulus-processing resources in ADHD, with possible consequences for later, higher-order attentional deployment. Furthermore, the enhanced P200 amplitude might reflect increased sensory registration and therefore deficient inhibition mechanisms in adults with ADHD.
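For context, a component such as the P200 at O2 is commonly quantified as the peak (or mean) amplitude within a predefined post-stimulus window of the averaged ERP. The sketch below is purely illustrative; the window, sampling rate, and data are assumptions, not the study's parameters.

```python
# Illustrative sketch: extracting a P200 peak amplitude and latency at an
# occipital channel from a subject-average ERP (placeholder data).
import numpy as np

sfreq = 500.0
times = np.arange(-0.1, 0.5, 1 / sfreq)            # epoch from -100 to +500 ms
rng = np.random.default_rng(5)
erp_o2 = rng.normal(size=times.size)               # placeholder average ERP at O2 (µV)

window = (times >= 0.15) & (times <= 0.25)         # assumed P200 search window
peak_idx = np.argmax(erp_o2[window])
peak_amp = erp_o2[window][peak_idx]
peak_lat = times[window][peak_idx] * 1000
print(f"P200 at O2: {peak_amp:.2f} µV at {peak_lat:.0f} ms")
```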
Affiliation(s)
- Mia Schramm
  - Department of Psychiatry and Psychotherapy, University of Bonn, Bonn, Germany
- Tatiana Goregliad Fjaellingsdal
  - Department of Neurology, University of Lübeck, Lübeck, Germany
  - Department of Psychology, University of Lübeck, Lübeck, Germany
  - Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, Lübeck, Germany
- Behrem Aslan
  - Department of Psychiatry and Psychotherapy, University of Bonn, Bonn, Germany
- Paul Jung
  - Department of Psychiatry and Psychotherapy, University of Bonn, Bonn, Germany
- Silke Lux
  - Department of Psychiatry and Psychotherapy, University of Bonn, Bonn, Germany
- Marcel Schulze
  - Department of Psychiatry and Psychotherapy, University of Bonn, Bonn, Germany
- Alexandra Philipsen
  - Department of Psychiatry and Psychotherapy, University of Bonn, Bonn, Germany
4. Effect of Target Semantic Consistency in Different Sequence Positions and Processing Modes on T2 Recognition: Integration and Suppression Based on Cross-Modal Processing. Brain Sci 2023;13:340. PMID: 36831882; PMCID: PMC9954507; DOI: 10.3390/brainsci13020340.
Abstract
In the rapid serial visual presentation (RSVP) paradigm, sound affects participants' recognition of visual targets. Although many studies have shown that sound improves cross-modal processing, the effects of sound semantic information at different sequence positions and under different processing modes, once sound saliency is removed, have not yet been explored. In this study, the RSVP paradigm was used to investigate differences in attention when sound semantics were consistent versus inconsistent with the target (Experiment 1), as well as differences between top-down (Experiment 2) and bottom-up processing (Experiment 3) of sounds semantically consistent with the second target (T2) at different sequence positions, after removing sound saliency. The results showed that cross-modal processing significantly reduced the attentional blink (AB). Sounds consistent with T2 that appeared early or with a lag did not affect participants' judgments under exogenous attention; however, visual target judgments improved under endogenous attention. The sequence position of sounds consistent with T2 influenced the judgment of auditory and visual congruency. These results illustrate the effects of sound semantic information at different positions and under different processing modes.
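By way of illustration, the standard behavioural index in such RSVP studies is T2 accuracy conditional on correct T1 report, compared across lags and sound conditions. The sketch below uses entirely made-up data and hypothetical column names; it is not the study's analysis.

```python
# Hedged sketch of the attentional-blink measure: P(T2 correct | T1 correct)
# by lag and sound-congruency condition, on placeholder data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
n = 400
trials = pd.DataFrame({
    "congruent_sound": rng.choice([True, False], n),
    "lag": rng.choice([2, 4, 8], n),                 # assumed T1-T2 lags (RSVP positions)
    "t1_correct": rng.random(n) < 0.9,
    "t2_correct": rng.random(n) < 0.6,
})

t2_given_t1 = (trials[trials["t1_correct"]]
               .groupby(["congruent_sound", "lag"])["t2_correct"]
               .mean())
print(t2_given_t1)    # a shallower dip at short lags indicates a reduced attentional blink
```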
5. Cavicchi S, De Cesarei A, Valsecchi M, Codispoti M. Visual-cortical enhancement by acoustic distractors: The effects of endogenous spatial attention and visual working memory load. Biol Psychol 2023;177:108512. PMID: 36724810; DOI: 10.1016/j.biopsycho.2023.108512.
Abstract
Past work has shown that when a peripheral sound captures our attention, it activates the contralateral visual cortex as revealed by an event-related potential component labelled the auditory-evoked contralateral occipital positivity (ACOP). This cross-modal activation of the visual cortex has been observed even when the sounds were not relevant to the ongoing task (visual or auditory), suggesting that peripheral sounds automatically activate the visual cortex. However, it is unclear whether top-down factors such as visual working memory (VWM) load and endogenous attention, which modulate the impact of task-irrelevant information, may modulate this spatially-specific component. Here, we asked participants to perform a lateralized VWM task (change detection), whose performance is supported by both endogenous spatial attention and VWM storage. A peripheral sound that was unrelated to the ongoing task was delivered during the retention interval. The amplitude of sound-elicited ACOP was analyzed as a function of the spatial correspondence with the cued hemifield, and of the memory array set-size. The typical ACOP modulation was observed over parieto-occipital sites in the 280-500 ms time window after sound onset. Its amplitude was not affected by VWM load but was modulated when the location of the sound did not correspond to the hemifield (right or left) that was cued for the change detection task. Our results suggest that sound-elicited activation of visual cortices, as reflected in the ACOP modulation, is unaffected by visual working memory load. However, endogenous spatial attention affects the ACOP, challenging the hypothesis that it reflects an automatic process.
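For readers unfamiliar with the measure, the sketch below shows one way an ACOP-like index can be computed: the contralateral-minus-ipsilateral mean amplitude over parieto-occipital electrodes in the 280-500 ms window after sound onset. The epoch array, channel indices, and sampling rate are illustrative assumptions, not the study's montage or data.

```python
# Minimal sketch of an ACOP-style measure on placeholder epoched EEG.
import numpy as np

sfreq = 500.0                              # Hz, assumed sampling rate
times = np.arange(-0.2, 0.8, 1 / sfreq)    # epoch from -200 to +800 ms around sound onset
n_trials, n_channels = 200, 64
rng = np.random.default_rng(1)
epochs = rng.normal(size=(n_trials, n_channels, times.size))  # placeholder EEG (µV)

po_left, po_right = [55, 57], [56, 58]     # hypothetical parieto-occipital channel indices
sound_side = rng.choice(["left", "right"], size=n_trials)
window = (times >= 0.28) & (times <= 0.50)

def hemi_mean(trial_idx, chans):
    """Mean amplitude over the given trials, channels, and the 280-500 ms window."""
    return epochs[np.ix_(trial_idx, chans)][..., window].mean()

left_sounds = np.where(sound_side == "left")[0]
right_sounds = np.where(sound_side == "right")[0]

contra = (hemi_mean(left_sounds, po_right) + hemi_mean(right_sounds, po_left)) / 2
ipsi = (hemi_mean(left_sounds, po_left) + hemi_mean(right_sounds, po_right)) / 2
print(f"ACOP (contra - ipsi): {contra - ipsi:.3f} µV")
```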
6. Brang D, Plass J, Sherman A, Stacey WC, Wasade VS, Grabowecky M, Ahn E, Towle VL, Tao JX, Wu S, Issa NP, Suzuki S. Visual cortex responds to sound onset and offset during passive listening. J Neurophysiol 2022;127:1547-1563. PMID: 35507478; DOI: 10.1152/jn.00164.2021.
Abstract
Sounds enhance our ability to detect, localize, and respond to co-occurring visual targets. Research suggests that sounds improve visual processing by resetting the phase of ongoing oscillations in visual cortex. However, it remains unclear what information is relayed from the auditory system to visual areas and if sounds modulate visual activity even in the absence of visual stimuli (e.g., during passive listening). Using intracranial electroencephalography (iEEG) in humans, we examined the sensitivity of visual cortex to three forms of auditory information during a passive listening task: auditory onset responses, auditory offset responses, and rhythmic entrainment to sounds. Because some auditory neurons respond to both sound onsets and offsets, visual timing and duration processing may benefit from each. Additionally, if auditory entrainment information is relayed to visual cortex, it could support the processing of complex stimulus dynamics that are aligned between auditory and visual stimuli. Results demonstrate that in visual cortex, amplitude-modulated sounds elicited transient onset and offset responses in multiple areas, but no entrainment to sound modulation frequencies. These findings suggest that activity in visual cortex (as measured with iEEG in response to auditory stimuli) may not be affected by temporally fine-grained auditory stimulus dynamics during passive listening (though it remains possible that this signal may be observable with simultaneous auditory-visual stimuli). Moreover, auditory responses were maximal in low-level visual cortex, potentially implicating a direct pathway for rapid interactions between auditory and visual cortices. This mechanism may facilitate perception by time-locking visual computations to environmental events marked by auditory discontinuities.
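As a rough illustration of how entrainment to the modulation frequency could be assessed, the sketch below compares spectral power of a trial-averaged visual-cortex signal at the sound's modulation frequency against neighbouring frequency bins. The sampling rate, modulation frequency, and signal are assumptions, not the study's recordings.

```python
# Hedged sketch: power at the assumed modulation frequency relative to
# neighbouring bins, computed from a trial-averaged signal (placeholder noise).
import numpy as np

sfreq, mod_freq, duration = 1000.0, 3.0, 2.0   # Hz, Hz, s (assumed)
t = np.arange(0, duration, 1 / sfreq)
rng = np.random.default_rng(2)

# Trial-averaged broadband signal from one visual-cortex contact (placeholder).
evoked = rng.normal(size=t.size)

freqs = np.fft.rfftfreq(t.size, d=1 / sfreq)
power = np.abs(np.fft.rfft(evoked)) ** 2

target = np.argmin(np.abs(freqs - mod_freq))
neighbours = [target - 2, target - 1, target + 1, target + 2]
snr = power[target] / power[neighbours].mean()
print(f"Power at {mod_freq} Hz relative to neighbours: {snr:.2f}")
```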
Affiliation(s)
- David Brang
  - Department of Psychology, University of Michigan, Ann Arbor, MI, USA
- John Plass
  - Department of Psychology, University of Michigan, Ann Arbor, MI, USA
- Aleksandra Sherman
  - Department of Cognitive Science, Occidental College, Los Angeles, CA, USA
- William C Stacey
  - Department of Neurology, University of Michigan, Ann Arbor, MI, USA
- Marcia Grabowecky
  - Department of Psychology, Northwestern University, Evanston, IL, USA
- EunSeon Ahn
  - Department of Psychology, University of Michigan, Ann Arbor, MI, USA
- Vernon L Towle
  - Department of Neurology, The University of Chicago, Chicago, IL, USA
- James X Tao
  - Department of Neurology, The University of Chicago, Chicago, IL, USA
- Shasha Wu
  - Department of Neurology, The University of Chicago, Chicago, IL, USA
- Naoum P Issa
  - Department of Neurology, The University of Chicago, Chicago, IL, USA
- Satoru Suzuki
  - Department of Psychology, Northwestern University, Evanston, IL, USA
7. Turoman N, Tivadar RI, Retsa C, Murray MM, Matusz PJ. Towards understanding how we pay attention in naturalistic visual search settings. Neuroimage 2021;244:118556. PMID: 34492292; DOI: 10.1016/j.neuroimage.2021.118556.
Abstract
Research on attentional control has largely focused on single senses and on the importance of behavioural goals in controlling attention. However, everyday situations are multisensory and contain regularities, both of which likely influence attention. We investigated how visual attentional capture is simultaneously impacted by top-down goals, the multisensory nature of stimuli, and the contextual factors of stimuli's semantic relationship and temporal predictability. Participants performed a multisensory version of the Folk et al. (1992) spatial cueing paradigm, searching for a target of a predefined colour (e.g. a red bar) within an array preceded by a distractor. We manipulated: 1) stimuli's goal-relevance via the distractor's colour (matching vs. mismatching the target), 2) stimuli's multisensory nature (colour distractors appearing alone vs. with tones), 3) the relationship between the distractor sound and colour (arbitrary vs. semantically congruent), and 4) the temporal predictability of distractor onset. Reaction-time spatial cueing effects served as the behavioural measure of attentional selection. We also recorded 129-channel event-related potentials (ERPs), analysing the distractor-elicited N2pc component both canonically and within a multivariate electrical neuroimaging framework. Behaviourally, arbitrary target-matching distractors captured attention more strongly than semantically congruent ones, with no evidence for context modulating multisensory enhancements of capture. Notably, electrical neuroimaging analyses of the surface-level EEG revealed context-based influences on attention to both visual and multisensory distractors, both in how strongly they activated the brain and in the type of brain networks activated. For both processes, the context-driven brain response modulations occurred long before the N2pc time window, with topographic (network-based) modulations at ∼30 ms, followed by strength-based modulations at ∼100 ms post-distractor onset. Our results reveal that both stimulus meaning and predictability modulate attentional selection, and that they interact while doing so. Meaning, in addition to temporal predictability, is thus a second source of contextual information facilitating goal-directed behaviour. More broadly, in everyday situations, attention is controlled by an interplay between one's goals and stimuli's perceptual salience, meaning, and predictability. Our study calls for a revision of attentional control theories to account for the roles of contextual and multisensory control.
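The "strength-based" and "topographic" modulations referred to here correspond, in the electrical neuroimaging framework, to Global Field Power (GFP) and global map dissimilarity (DISS). The sketch below defines both measures on placeholder 129-channel ERP arrays; the data, array shapes, and indexed latencies are assumptions, not results from the study.

```python
# Sketch of the two electrical-neuroimaging measures: GFP (response strength)
# and global map dissimilarity (topographic, i.e. network-level, differences).
import numpy as np

def gfp(erp):
    """Global Field Power: spatial SD across electrodes at each time point."""
    return np.std(erp - erp.mean(axis=0, keepdims=True), axis=0)

def dissimilarity(erp_a, erp_b):
    """Global map dissimilarity between two GFP-normalised topographies (range 0-2)."""
    a = (erp_a - erp_a.mean(axis=0)) / gfp(erp_a)
    b = (erp_b - erp_b.mean(axis=0)) / gfp(erp_b)
    return gfp(a - b)

rng = np.random.default_rng(3)
n_channels, n_times = 129, 300                    # 129-channel ERPs, placeholder data
erp_visual = rng.normal(size=(n_channels, n_times))
erp_multisensory = rng.normal(size=(n_channels, n_times))

print("GFP difference at one latency:", gfp(erp_multisensory)[100] - gfp(erp_visual)[100])
print("Topographic dissimilarity at that latency:", dissimilarity(erp_visual, erp_multisensory)[100])
```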
Affiliation(s)
- Nora Turoman
  - The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
  - MEDGIFT Lab, Institute of Information Systems, School of Management, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Techno-Pôle 3, 3960 Sierre, Switzerland
  - Working Memory, Cognition and Development lab, Department of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Ruxandra I Tivadar
  - The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
  - Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
  - Cognitive Computational Neuroscience group, Institute of Computer Science, Faculty of Science, University of Bern, Switzerland
- Chrysa Retsa
  - The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
  - CIBM Center for Biomedical Imaging, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Micah M Murray
  - The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
  - Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
  - CIBM Center for Biomedical Imaging, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
  - Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Pawel J Matusz
  - The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
  - MEDGIFT Lab, Institute of Information Systems, School of Management, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Techno-Pôle 3, 3960 Sierre, Switzerland
  - Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
8. Keefe JM, Pokta E, Störmer VS. Cross-modal orienting of exogenous attention results in visual-cortical facilitation, not suppression. Sci Rep 2021;11:10237. PMID: 33986384; PMCID: PMC8119727; DOI: 10.1038/s41598-021-89654-x.
Abstract
Attention may be oriented exogenously (i.e., involuntarily) to the location of salient stimuli, resulting in improved perception. However, it is unknown whether exogenous attention improves perception by facilitating processing of attended information, suppressing processing of unattended information, or both. To address this question, we measured behavioral performance and cue-elicited neural changes in the electroencephalogram as participants (N = 19) performed a task in which a spatially non-predictive auditory cue preceded a visual target. Critically, this cue was presented either at a peripheral target location or from the center of the screen, allowing us to isolate spatially specific attentional activity. We find that both behavior and attention-mediated changes in visual-cortical activity are enhanced at the location of a cue prior to the onset of a target, but that behavior and neural activity at an unattended target location are equivalent to those following a central cue that does not direct attention (i.e., baseline). These results suggest that exogenous attention operates via facilitation of information at an attended location.
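To make the facilitation-versus-suppression logic concrete, the sketch below computes the two contrasts from per-subject accuracy: cued minus neutral (facilitation) and neutral minus uncued (suppression). The numbers are placeholders, not the study's data.

```python
# Sketch of the facilitation/suppression contrasts on made-up accuracy data.
import numpy as np

rng = np.random.default_rng(8)
n_subjects = 19
# Per-subject target discrimination accuracy by cue condition (placeholder values).
acc = {
    "cued": rng.normal(0.80, 0.05, n_subjects),
    "uncued": rng.normal(0.74, 0.05, n_subjects),
    "neutral": rng.normal(0.74, 0.05, n_subjects),
}

facilitation = acc["cued"] - acc["neutral"]     # benefit at the attended location
suppression = acc["neutral"] - acc["uncued"]    # cost at the unattended location
print(f"Facilitation: {facilitation.mean():+.3f}, suppression: {suppression.mean():+.3f}")
```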
Affiliation(s)
- Jonathan M Keefe
  - Department of Psychology, University of California, San Diego, 92092, USA
- Emilia Pokta
  - Department of Psychology, University of California, San Diego, 92092, USA
- Viola S Störmer
  - Department of Psychology, University of California, San Diego, 92092, USA
  - Department of Brain and Psychological Sciences, Dartmouth College, Hanover, USA
9. Pedale T, Mastroberardino S, Capurso M, Bremner AJ, Spence C, Santangelo V. Crossmodal spatial distraction across the lifespan. Cognition 2021;210:104617. PMID: 33556891; DOI: 10.1016/j.cognition.2021.104617.
Abstract
The ability to resist distracting stimuli whilst voluntarily focusing on a task is fundamental to everyday cognitive functioning. Here, we investigated how this ability develops, and thereafter declines, across the lifespan using a single task/experiment. Young children (5-7 years), older children (10-11 years), young adults (20-27 years), and older adults (62-86 years) were presented with complex visual scenes. Endogenous (voluntary) attention was engaged by having the participants search for a visual target presented on either the left or right side of the display. The onset of the visual scenes was preceded - at stimulus onset asynchronies (SOAs) of 50, 200, or 500 ms - by a task-irrelevant sound (an exogenous crossmodal spatial distractor) delivered either on the same or the opposite side as the visual target, or simultaneously on both sides (cued, uncued, or neutral trials, respectively). Age-related differences emerged, especially in the extreme age groups, which showed a greater impact of crossmodal spatial distractors. Young children were highly susceptible to exogenous spatial distraction at the shortest SOA (50 ms), whereas older adults were distracted at all SOAs, showing significant exogenous capture effects during the visual search task. By contrast, older children's and young adults' search performance was not significantly affected by crossmodal spatial distraction. Overall, these findings present a detailed picture of the developmental trajectory of endogenous resistance to crossmodal spatial distraction from childhood to old age and demonstrate differing efficiency in coping with distraction across the four age groups studied.
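As an illustration of the behavioural measure behind such findings, the sketch below computes the crossmodal spatial cueing effect (mean RT on uncued minus cued trials) separately for each age group and SOA. The data and column names are made up for the example.

```python
# Minimal sketch: crossmodal cueing effect per age group and SOA (placeholder data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 600
trials = pd.DataFrame({
    "group": rng.choice(["young_children", "older_children", "young_adults", "older_adults"], n),
    "soa_ms": rng.choice([50, 200, 500], n),
    "cue": rng.choice(["cued", "uncued"], n),
    "rt_ms": rng.normal(650, 80, n),
})

mean_rt = trials.groupby(["group", "soa_ms", "cue"])["rt_ms"].mean().unstack("cue")
cueing_effect = (mean_rt["uncued"] - mean_rt["cued"]).rename("cueing_effect_ms")
print(cueing_effect)   # positive values indicate capture by the crossmodal distractor
```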
Affiliation(s)
- Tiziana Pedale
  - Neuroimaging Laboratory, IRCCS Santa Lucia Foundation, Rome, Italy
- Michele Capurso
  - Department of Philosophy, Social Sciences & Education, University of Perugia, Italy
- Charles Spence
  - Department of Experimental Psychology, Oxford University, UK
- Valerio Santangelo
  - Neuroimaging Laboratory, IRCCS Santa Lucia Foundation, Rome, Italy
  - Department of Philosophy, Social Sciences & Education, University of Perugia, Italy