1. Serino G, Mareschal D, Scerif G, Kirkham N. Playing hide and seek: Contextual regularity learning develops between 3 and 5 years of age. J Exp Child Psychol 2024; 238:105795. [PMID: 37862788] [DOI: 10.1016/j.jecp.2023.105795]
Abstract
The ability to acquire contextual regularities is fundamental in everyday life because it helps us to navigate the environment, directing our attention where relevant events are more likely to occur. Sensitivity to spatial regularities has been widely reported from infancy. Nevertheless, it is currently unclear when children can use this rapidly acquired contextual knowledge to guide their behavior. Evidence of this ability is indeed mixed in school-aged children and, to date, it has never been explored in younger children and toddlers. The current study investigated the development of contextual regularity learning in children aged 3 to 5 years. To this end, we designed a new contextual learning paradigm in which young children were presented with recurring configurations of bushes and were asked to guess behind which bush a cartoon monkey was hiding. In a series of two experiments, we manipulated the relevance of color and visuospatial cues for the underlying task goal and tested how this affected young children's behavior. Our results bridge the gap between the infant and adult literatures, showing that sensitivity to spatial configurations persists from infancy to childhood, but it is only around the fifth year of life that children naturally start to integrate multiple cues to guide their behavior.
Affiliation(s)
- Giulia Serino: Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, UK
- Denis Mareschal: Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, UK
- Gaia Scerif: Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, UK
- Natasha Kirkham: Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, UK
2. The multisensory cocktail party problem in children: Synchrony-based segregation of multiple talking faces improves in early childhood. Cognition 2022; 228:105226. [PMID: 35882100] [DOI: 10.1016/j.cognition.2022.105226]
Abstract
Extraction of meaningful information from multiple talkers relies on perceptual segregation. The temporal synchrony statistics inherent in everyday audiovisual (AV) speech offer a powerful basis for perceptual segregation. We investigated the developmental emergence of synchrony-based perceptual segregation of multiple talkers in 3-7-year-old children. Children either saw four identical or four different faces articulating temporally jittered versions of the same utterance and heard the audible version of the same utterance either synchronized with one of the talkers or desynchronized with all of them. Eye tracking revealed that selective attention to the temporally synchronized talking face increased while attention to the desynchronized faces decreased with age and that attention to the talkers' mouth primarily drove responsiveness. These findings demonstrate that the temporal synchrony statistics inherent in fluent AV speech assume an increasingly greater role in perceptual segregation of the multisensory clutter created by multiple talking faces in early childhood.
3. Exploring the effectiveness of auditory, visual, and audio-visual sensory cues in a multiple object tracking environment. Atten Percept Psychophys 2022; 84:1611-1624. [PMID: 35610410] [PMCID: PMC9232473] [DOI: 10.3758/s13414-022-02492-5]
Abstract
Maintaining object correspondence among multiple moving objects is an essential task of the perceptual system in many everyday life activities. A substantial body of research has confirmed that observers are able to track multiple target objects amongst identical distractors based only on their spatiotemporal information. However, naturalistic tasks typically involve the integration of information from more than one modality, and there is limited research investigating whether auditory and audio-visual cues improve tracking. In two experiments, we asked participants to track either five target objects or three versus five target objects amongst similarly indistinguishable distractor objects for 14 s. During the tracking interval, the target objects bounced occasionally against the boundary of a centralised orange circle. A visual cue, an auditory cue, both, or neither coincided with these collisions. Following the motion interval, the participants were asked to indicate all target objects. Across both experiments and both set sizes, our results indicated that visual and auditory cues increased tracking accuracy, although visual cues were more effective than auditory cues. Audio-visual cues, however, did not increase tracking performance beyond the level of purely visual cues for both high and low load conditions. We discuss the theoretical implications of our findings for multiple object tracking as well as for the principles of multisensory integration.
4. Turoman N, Tivadar RI, Retsa C, Murray MM, Matusz PJ. Towards understanding how we pay attention in naturalistic visual search settings. Neuroimage 2021; 244:118556. [PMID: 34492292] [DOI: 10.1016/j.neuroimage.2021.118556]
Abstract
Research on attentional control has largely focused on single senses and the importance of behavioural goals in controlling attention. However, everyday situations are multisensory and contain regularities, both likely influencing attention. We investigated how visual attentional capture is simultaneously impacted by top-down goals, the multisensory nature of stimuli, and the contextual factors of stimuli's semantic relationship and temporal predictability. Participants performed a multisensory version of the Folk et al. (1992) spatial cueing paradigm, searching for a target of a predefined colour (e.g. a red bar) within an array preceded by a distractor. We manipulated: 1) stimuli's goal-relevance via distractor's colour (matching vs. mismatching the target), 2) stimuli's multisensory nature (colour distractors appearing alone vs. with tones), 3) the relationship between the distractor sound and colour (arbitrary vs. semantically congruent) and 4) the temporal predictability of distractor onset. Reaction-time spatial cueing served as a behavioural measure of attentional selection. We also recorded 129-channel event-related potentials (ERPs), analysing the distractor-elicited N2pc component both canonically and using a multivariate electrical neuroimaging framework. Behaviourally, arbitrary target-matching distractors captured attention more strongly than semantically congruent ones, with no evidence for context modulating multisensory enhancements of capture. Notably, electrical neuroimaging analyses of surface-level EEG revealed context-based influences on attention to both visual and multisensory distractors, in both how strongly they activated the brain and the type of brain networks activated. For both processes, the context-driven brain response modulations occurred long before the N2pc time-window, with topographic (network-based) modulations at ∼30 ms, followed by strength-based modulations at ∼100 ms post-distractor onset.
Our results reveal that both stimulus meaning and predictability modulate attentional selection, and they interact while doing so. Meaning, in addition to temporal predictability, is thus a second source of contextual information facilitating goal-directed behaviour. More broadly, in everyday situations, attention is controlled by an interplay between one's goals, stimuli's perceptual salience, meaning and predictability. Our study calls for a revision of attentional control theories to account for the role of contextual and multisensory control.
Affiliation(s)
- Nora Turoman: The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; MEDGIFT Lab, Institute of Information Systems, School of Management, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Techno-Pôle 3, 3960 Sierre, Switzerland; Working Memory, Cognition and Development lab, Department of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Ruxandra I Tivadar: The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland; Cognitive Computational Neuroscience group, Institute of Computer Science, Faculty of Science, University of Bern, Switzerland
- Chrysa Retsa: The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; CIBM Center for Biomedical Imaging, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Micah M Murray: The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland; CIBM Center for Biomedical Imaging, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Pawel J Matusz: The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; MEDGIFT Lab, Institute of Information Systems, School of Management, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Techno-Pôle 3, 3960 Sierre, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
5. Markant J, Amso D. Context and attention control determine whether attending to competing information helps or hinders learning in school-aged children. Wiley Interdiscip Rev Cogn Sci 2021; 13:e1577. [PMID: 34498382] [DOI: 10.1002/wcs.1577]
Abstract
Attention control regulates efficient processing of goal-relevant information by suppressing interference from irrelevant competing inputs while also flexibly allocating attention across relevant inputs according to task demands. Research has established that developing attention control skills promote effective learning by minimizing distractions from task-irrelevant competing information. Additional research also suggests that competing contextual information can provide meaningful input for learning and should not always be ignored. Instead, attending to competing information that is relevant to task goals can facilitate and broaden the scope of children's learning. We review this past research examining effects of attending to task-relevant and task-irrelevant competing information on learning outcomes, focusing on relations between visual attention and learning in childhood. We then present a synthesis argument that complex interactions across learning goals, the contexts of learning environments and tasks, and developing attention control mechanisms will determine whether attending to competing information helps or hinders learning. This article is categorized under: Psychology > Attention; Psychology > Learning; Psychology > Development and Aging.
Affiliation(s)
- Julie Markant: Department of Psychology, Tulane University, New Orleans, Louisiana, USA; Tulane Brain Institute, Tulane University, New Orleans, Louisiana, USA
- Dima Amso: Department of Psychology, Columbia University, New York, New York, USA
6. Turoman N, Tivadar RI, Retsa C, Maillard AM, Scerif G, Matusz PJ. The development of attentional control mechanisms in multisensory environments. Dev Cogn Neurosci 2021; 48:100930. [PMID: 33561691] [PMCID: PMC7873372] [DOI: 10.1016/j.dcn.2021.100930]
Abstract
Outside the laboratory, people need to pay attention to relevant objects that are typically multisensory, but it remains poorly understood how the underlying neurocognitive mechanisms develop. We investigated when adult-like mechanisms controlling one's attentional selection of visual and multisensory objects emerge across childhood. Five-, 7-, and 9-year-olds were compared with adults in their performance on a computer game-like multisensory spatial cueing task, while 129-channel EEG was simultaneously recorded. Markers of attentional control were behavioural spatial cueing effects and the N2pc ERP component (analysed traditionally and using a multivariate electrical neuroimaging framework). In behaviour, adult-like visual attentional control was present from age 7 onwards, whereas multisensory control was absent in all child groups. In EEG, multivariate analyses of the activity over the N2pc time-window revealed stable brain activity patterns in children. Adult-like visual-attentional control EEG patterns were present from age 7 onwards, while multisensory control activity patterns were found in 9-year-olds (although behavioural measures showed no effects). By combining rigorous yet naturalistic paradigms with multivariate signal analyses, we demonstrated that visual attentional control seems to reach an adult-like state at ∼7 years, before adult-like multisensory control, emerging at ∼9 years. These results enrich our understanding of how attention in naturalistic settings develops.
Affiliation(s)
- Nora Turoman: The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Information Systems Institute at the University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, 3960, Switzerland; Working Memory, Cognition and Development lab, Department of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Ruxandra I Tivadar: The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Cognitive Computational Neuroscience group, Institute of Computer Science, Faculty of Science, University of Bern, Bern, Switzerland
- Chrysa Retsa: The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Anne M Maillard: Service des Troubles du Spectre de l'Autisme et apparentés, Department of Psychiatry, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Gaia Scerif: Department of Experimental Psychology, University of Oxford, Oxfordshire, UK
- Pawel J Matusz: The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Information Systems Institute at the University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, 3960, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
7. Pedale T, Mastroberardino S, Capurso M, Bremner AJ, Spence C, Santangelo V. Crossmodal spatial distraction across the lifespan. Cognition 2021; 210:104617. [PMID: 33556891] [DOI: 10.1016/j.cognition.2021.104617]
Abstract
The ability to resist distracting stimuli whilst voluntarily focusing on a task is fundamental to our everyday cognitive functioning. Here, we investigated how this ability develops, and thereafter declines, across the lifespan using a single task/experiment. Young children (5-7 years), older children (10-11 years), young adults (20-27 years), and older adults (62-86 years) were presented with complex visual scenes. Endogenous (voluntary) attention was engaged by having the participants search for a visual target presented on either the left or right side of the display. The onset of the visual scenes was preceded - at stimulus onset asynchronies (SOAs) of 50, 200, or 500 ms - by a task-irrelevant sound (an exogenous crossmodal spatial distractor) delivered either on the same or opposite side as the visual target, or simultaneously on both sides (cued, uncued, or neutral trials, respectively). Age-related differences were revealed, especially in the extreme age groups, which showed a greater impact of crossmodal spatial distractors. Young children were highly susceptible to exogenous spatial distraction at the shortest SOA (50 ms), whereas older adults were distracted at all SOAs, showing significant exogenous capture effects during the visual search task. By contrast, older children and young adults' search performance was not significantly affected by crossmodal spatial distraction. Overall, these findings present a detailed picture of the developmental trajectory of endogenous resistance to crossmodal spatial distraction from childhood to old age and demonstrate differing efficiency in coping with distraction across the four age groups studied.
Affiliation(s)
- Tiziana Pedale: Neuroimaging Laboratory, IRCCS Santa Lucia Foundation, Rome, Italy
- Michele Capurso: Department of Philosophy, Social Sciences & Education, University of Perugia, Italy
- Charles Spence: Department of Experimental Psychology, Oxford University, UK
- Valerio Santangelo: Neuroimaging Laboratory, IRCCS Santa Lucia Foundation, Rome, Italy; Department of Philosophy, Social Sciences & Education, University of Perugia, Italy
8. Selective attention to sound features mediates cross-modal activation of visual cortices. Neuropsychologia 2020; 144:107498. [PMID: 32442445] [DOI: 10.1016/j.neuropsychologia.2020.107498]
Abstract
Contemporary schemas of brain organization now include multisensory processes both in low-level cortices as well as at early stages of stimulus processing. Evidence has also accumulated showing that unisensory stimulus processing can result in cross-modal effects. For example, task-irrelevant and lateralised sounds can activate visual cortices; a phenomenon referred to as the auditory-evoked contralateral occipital positivity (ACOP). Some claim this is an example of automatic attentional capture in visual cortices. Other results, however, indicate that context may play a determinant role. Here, we investigated whether selective attention to spatial features of sounds is a determining factor in eliciting the ACOP. We recorded high-density auditory evoked potentials (AEPs) while participants selectively attended and discriminated sounds according to four possible stimulus attributes: location, pitch, speaker identity or syllable. Sound acoustics were held constant, and their location was always equiprobable (50% left, 50% right). The only manipulation was to which sound dimension participants attended. We analysed the AEP data from healthy participants within an electrical neuroimaging framework. The presence of sound-elicited activations of visual cortices depended on the to-be-discriminated, goal-based dimension. The ACOP was elicited only when participants were required to discriminate sound location, but not when they attended to any of the non-spatial features. These results provide a further indication that the ACOP is not automatic. Moreover, our findings showcase the interplay between task-relevance and spatial (un)predictability in determining the presence of the cross-modal activation of visual cortices.
9. Denervaud S, Gentaz E, Matusz PJ, Murray MM. Multisensory Gains in Simple Detection Predict Global Cognition in Schoolchildren. Sci Rep 2020; 10:1394. [PMID: 32019951] [PMCID: PMC7000735] [DOI: 10.1038/s41598-020-58329-4]
Abstract
The capacity to integrate information from different senses is central for coherent perception across the lifespan from infancy onwards. Later in life, multisensory processes are related to cognitive functions, such as speech or social communication. During learning, multisensory processes can in fact enhance subsequent recognition memory for unisensory objects. These benefits can even be predicted; adults' recognition memory performance is shaped by earlier responses in the same task to multisensory - but not unisensory - information. Everyday environments where learning occurs, such as classrooms, are inherently multisensory in nature. Multisensory processes may therefore scaffold healthy cognitive development. Here, we provide the first evidence of a predictive relationship between multisensory benefits in simple detection and higher-level cognition that is present already in schoolchildren. Multiple regression analyses indicated that the extent to which a child (N = 68; aged 4.5-15 years) exhibited multisensory benefits on a simple detection task not only predicted benefits on a continuous recognition task involving naturalistic objects (p = 0.009), even when controlling for age, but also predicted working memory scores (p = 0.023) and fluid intelligence scores (p = 0.033) as measured using age-standardised test batteries. By contrast, gains in unisensory detection did not show significant prediction of any of the above global cognition measures. Our findings show that low-level multisensory processes predict higher-order memory and cognition already during childhood, even if still subject to ongoing maturation. These results call for revision of traditional models of cognitive development (and likely also education) to account for the role of multisensory processing, while also opening exciting opportunities to facilitate early learning through multisensory programs.
More generally, these data suggest that a simple detection task could provide direct insights into the integrity of global cognition in schoolchildren and could be further developed as a readily-implemented and cost-effective screening tool for neurodevelopmental disorders, particularly in cases when standard neuropsychological tests are infeasible or unavailable.
Affiliation(s)
- Solange Denervaud: The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, Vaudois University Hospital Center and University of Lausanne, Lausanne, Switzerland; The Center for Affective Sciences (CISA), University of Geneva, Geneva, Switzerland
- Edouard Gentaz: The Center for Affective Sciences (CISA), University of Geneva, Geneva, Switzerland; Faculty of Psychology and Educational Sciences (FAPSE), University of Geneva, Geneva, Switzerland
- Pawel J Matusz: The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, Vaudois University Hospital Center and University of Lausanne, Lausanne, Switzerland; Information Systems Institute at the University of Applied Sciences Western Switzerland (HES-SO Valais), 3960, Sierre, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Micah M Murray: The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, Vaudois University Hospital Center and University of Lausanne, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Ophthalmology, Fondation Asile des aveugles and University of Lausanne, Lausanne, Switzerland; Sensory, Cognitive and Perceptual Neuroscience Section, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Lausanne, Switzerland