1. Pasqualotto A, Cochrane A, Bavelier D, Altarelli I. A novel task and methods to evaluate inter-individual variation in audio-visual associative learning. Cognition 2024; 242:105658. PMID: 37952371. DOI: 10.1016/j.cognition.2023.105658.
Abstract
Learning audio-visual associations is foundational to a number of real-world skills, such as reading acquisition or social communication. Characterizing individual differences in such learning has therefore been of interest to researchers in the field. Here, we present a novel audio-visual associative learning task designed to efficiently capture inter-individual differences in learning, with the added feature of using non-linguistic stimuli, so as to unconfound the learner's language and reading proficiency from their more domain-general learning capability. By fitting trial-by-trial performance in our novel learning task using simple-to-use statistical tools, we demonstrate the expected inter-individual variability in learning rate as well as high precision in its estimation. We further demonstrate that the learning rate measured in this way is linked to working memory performance in Italian-speaking (N = 58) and French-speaking (N = 51) adults. Finally, we investigate the extent to which learning rate in our task, which measures cross-modal audio-visual associations while mitigating familiarity confounds, predicts reading ability across participants with different linguistic backgrounds. The present work thus introduces a novel non-linguistic audio-visual associative learning task that can be used across languages. In doing so, it brings a new tool to researchers in the various domains that rely on multi-sensory integration, from reading to social cognition and socio-emotional learning.
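For readers who want a concrete sense of what fitting trial-by-trial performance can look like, below is a minimal Python sketch that fits a three-parameter exponential learning curve to simulated trial-by-trial accuracy. The functional form, parameter names, and simulated data are illustrative assumptions, not the authors' exact model or tooling.

```python
# Minimal sketch: estimating an individual's learning rate from trial-by-trial
# performance. The exponential form and parameter names are assumptions for
# illustration, not the published model.
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(trial, start, asymptote, rate):
    """Accuracy rises from `start` toward `asymptote` at speed `rate`."""
    return asymptote - (asymptote - start) * np.exp(-rate * trial)

rng = np.random.default_rng(0)
trials = np.arange(1, 121)                        # 120 learning trials
true_acc = learning_curve(trials, 0.5, 0.9, 0.04)
responses = rng.binomial(1, true_acc)             # simulated correct/incorrect

# Fit a smoothed accuracy trace; binary trial data could also be fit directly
# with a binomial likelihood.
window = 10
smoothed = np.convolve(responses, np.ones(window) / window, mode="valid")
params, _ = curve_fit(learning_curve, trials[: len(smoothed)], smoothed,
                      p0=[0.5, 0.9, 0.05], maxfev=10000)
print(f"Estimated start={params[0]:.2f}, asymptote={params[1]:.2f}, "
      f"learning rate={params[2]:.3f}")
```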
Affiliation(s)
- Angela Pasqualotto
- Faculty of Psychology and Education Sciences (FPSE), University of Geneva, Geneva, Switzerland; Campus Biotech, Geneva, Switzerland
- Aaron Cochrane
- Faculty of Psychology and Education Sciences (FPSE), University of Geneva, Geneva, Switzerland; Campus Biotech, Geneva, Switzerland
- Daphne Bavelier
- Faculty of Psychology and Education Sciences (FPSE), University of Geneva, Geneva, Switzerland; Campus Biotech, Geneva, Switzerland
2. Scheliga S, Kellermann T, Lampert A, Rolke R, Spehr M, Habel U. Neural correlates of multisensory integration in the human brain: an ALE meta-analysis. Rev Neurosci 2023; 34:223-245. PMID: 36084305. DOI: 10.1515/revneuro-2022-0065.
Abstract
Previous fMRI research has identified the superior temporal sulcus as a central integration area for audiovisual stimuli. However, less is known about a general multisensory integration network across the senses. We therefore conducted an activation likelihood estimation (ALE) meta-analysis across multiple sensory modalities to identify a common brain network. We included 49 studies covering all Aristotelian senses, i.e., auditory, visual, tactile, gustatory, and olfactory stimuli. The analysis revealed significant activation in the bilateral superior temporal gyrus, middle temporal gyrus, thalamus, right insula, and left inferior frontal gyrus. We assume these regions to be part of a general multisensory integration network comprising different functional roles: the thalamus operates as a first subcortical relay, projecting sensory information to higher cortical integration centers in the superior temporal gyrus/sulcus, while conflict-processing regions such as the insula and inferior frontal gyrus facilitate the integration of incongruent information. We additionally performed meta-analytic connectivity modelling and found that each brain region showed co-activations within the identified multisensory integration network. By including multiple sensory modalities, our meta-analysis may therefore provide evidence for a common brain network that supports different functional roles in multisensory integration.
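As an illustration of the core ALE computation described above, the sketch below blurs each study's reported foci with a Gaussian, builds per-study modeled activation maps, and combines them voxelwise. The toy grid, smoothing width, and foci are invented for the example; the published analysis relied on dedicated ALE software with proper spatial templates and null distributions.

```python
# Illustrative sketch of the core activation likelihood estimation (ALE) step:
# per-study modeled activation (MA) maps from Gaussian-blurred foci, combined
# voxelwise across studies.
import numpy as np
from scipy.ndimage import gaussian_filter

GRID = (40, 48, 40)   # toy voxel grid, not MNI space
SIGMA = 2.0           # smoothing in voxels (stands in for the FWHM model)

def modeled_activation(foci, grid=GRID, sigma=SIGMA):
    """MA map for one study: Gaussian-blurred foci, scaled into [0, 1]."""
    vol = np.zeros(grid)
    for x, y, z in foci:
        vol[x, y, z] = 1.0
    ma = gaussian_filter(vol, sigma)
    return np.clip(ma / ma.max(), 0.0, 1.0)

# Two hypothetical studies, each reporting a few activation foci
study_foci = [
    [(20, 24, 20), (22, 25, 19)],
    [(21, 23, 21), (10, 30, 15)],
]
ma_maps = np.stack([modeled_activation(f) for f in study_foci])

# Voxelwise ALE: probability that at least one study activates the voxel
ale = 1.0 - np.prod(1.0 - ma_maps, axis=0)
peak = np.unravel_index(np.argmax(ale), ale.shape)
print("Peak ALE value", ale.max().round(3), "at voxel", peak)
```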
Affiliation(s)
- Sebastian Scheliga
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Thilo Kellermann
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany; JARA-Institute Brain Structure Function Relationship, Pauwelsstraße 30, 52074 Aachen, Germany
- Angelika Lampert
- Institute of Physiology, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Roman Rolke
- Department of Palliative Medicine, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Marc Spehr
- Department of Chemosensation, RWTH Aachen University, Institute for Biology, Worringerweg 3, 52074 Aachen, Germany
- Ute Habel
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany; JARA-Institute Brain Structure Function Relationship, Pauwelsstraße 30, 52074 Aachen, Germany
3. Karlsson T, Schaefer H, Barton JJS, Corrow SL. Effects of Voice and Biographic Data on Face Encoding. Brain Sci 2023; 13(1):148. PMID: 36672128. PMCID: PMC9857090. DOI: 10.3390/brainsci13010148.
Abstract
There are various perceptual and informational cues for recognizing people. How these interact in the recognition process is of interest. Our goal was to determine whether the encoding of faces is enhanced by the concurrent presence of a voice, biographic data, or both. Using a between-subjects design, four groups of 10 subjects learned the identities of 24 faces seen in video clips. Half of the faces were seen only with their names, while the other half had additional information: for the first group this was the person's voice, for the second, biographic data, and for the third, both voice and biographic data. In a fourth control group, the additional information was the voice of a generic narrator relating non-biographic information. In the retrieval phase, subjects performed a familiarity task and then a face-to-name identification task with dynamic faces alone. Our results consistently showed no benefit to face encoding from the additional information, for either the familiarity or the identification task. Equivalence tests indicated that facilitative effects of a voice or biographic data on face encoding were unlikely to exceed 3% in accuracy. We conclude that face encoding is minimally influenced by cross-modal information from voices or biographic data.
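The equivalence claim (effects unlikely to exceed 3% in accuracy) can be backed by two one-sided tests (TOST). The sketch below shows that logic on simulated group accuracies; the equivalence bound, independent-samples setup, and data are assumptions for illustration rather than the authors' exact analysis.

```python
# Sketch of a TOST (two one-sided tests) equivalence test on accuracy scores.
# Data and design are simulated placeholders, not the study's analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
acc_name_only = rng.normal(0.78, 0.08, size=10)   # faces learned with name only
acc_with_voice = rng.normal(0.79, 0.08, size=10)  # faces learned with a voice

bound = 0.03  # equivalence margin: +/- 3% accuracy
diff = acc_with_voice.mean() - acc_name_only.mean()
se = np.sqrt(acc_with_voice.var(ddof=1) / 10 + acc_name_only.var(ddof=1) / 10)
df = 18       # simple pooled approximation for equal group sizes

# Two one-sided tests: the effect is above -bound AND below +bound
t_lower = (diff + bound) / se
t_upper = (diff - bound) / se
p_lower = 1 - stats.t.cdf(t_lower, df)   # H0: diff <= -bound
p_upper = stats.t.cdf(t_upper, df)       # H0: diff >= +bound
p_tost = max(p_lower, p_upper)
print(f"Mean difference {diff:+.3f}, TOST p = {p_tost:.3f}")
```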
Affiliation(s)
- Thilda Karlsson
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, BC V5Z 3N9, Canada
- Faculty of Medicine, Linköping University, 582 25 Linköping, Sweden
- Heidi Schaefer
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, BC V5Z 3N9, Canada
- Jason J. S. Barton
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, BC V5Z 3N9, Canada
- Correspondence: Tel.: +604-875-4339; Fax: +604-875-4302
- Sherryse L. Corrow
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, BC V5Z 3N9, Canada
- Department of Psychology, Bethel University, St. Paul, MN 55112, USA
4. Li Y, Wang F, Chen Y, Cichocki A, Sejnowski T. The Effects of Audiovisual Inputs on Solving the Cocktail Party Problem in the Human Brain: An fMRI Study. Cereb Cortex 2019; 28:3623-3637. PMID: 29029039. DOI: 10.1093/cercor/bhx235.
Abstract
At cocktail parties, our brains often simultaneously receive visual and auditory information. Although the cocktail party problem has been widely investigated under auditory-only settings, the effects of audiovisual inputs have not. This study explored the effects of audiovisual inputs in a simulated cocktail party. In our fMRI experiment, each congruent audiovisual stimulus was a synthesis of 2 facial movie clips, each of which could be classified into 1 of 2 emotion categories (crying and laughing). Visual-only (faces) and auditory-only stimuli (voices) were created by extracting the visual and auditory contents from the synthesized audiovisual stimuli. Subjects were instructed to selectively attend to 1 of the 2 objects contained in each stimulus and to judge its emotion category in the visual-only, auditory-only, and audiovisual conditions. The neural representations of the emotion features were assessed by calculating decoding accuracy and brain pattern-related reproducibility index based on the fMRI data. We compared the audiovisual condition with the visual-only and auditory-only conditions and found that audiovisual inputs enhanced the neural representations of emotion features of the attended objects instead of the unattended objects. This enhancement might partially explain the benefits of audiovisual inputs for the brain to solve the cocktail party problem.
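To make the notion of decoding accuracy concrete, the sketch below runs cross-validated classification of the attended emotion category from simulated voxel patterns. The linear classifier, pattern size, and data are illustrative choices, not the study's actual pipeline or its reproducibility index.

```python
# Sketch of cross-validated MVPA decoding of emotion category (crying vs.
# laughing) from voxel response patterns. Data are simulated placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_trials, n_voxels = 80, 200
labels = np.repeat([0, 1], n_trials // 2)          # 0 = crying, 1 = laughing

# Simulated voxel patterns: a small, noisy class-dependent signal
signal = rng.normal(0, 0.3, size=n_voxels)
patterns = rng.normal(0, 1, size=(n_trials, n_voxels))
patterns[labels == 1] += signal

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, patterns, labels, cv=5)  # 5-fold decoding accuracy
print(f"Mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```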
Affiliation(s)
- Yuanqing Li
- Center for Brain Computer Interfaces and Brain Information Processing, South China University of Technology, Guangzhou, China; Guangzhou Key Laboratory of Brain Computer Interaction and Applications, Guangzhou, China
- Fangyi Wang
- Center for Brain Computer Interfaces and Brain Information Processing, South China University of Technology, Guangzhou, China; Guangzhou Key Laboratory of Brain Computer Interaction and Applications, Guangzhou, China
- Yongbin Chen
- Center for Brain Computer Interfaces and Brain Information Processing, South China University of Technology, Guangzhou, China; Guangzhou Key Laboratory of Brain Computer Interaction and Applications, Guangzhou, China
- Andrzej Cichocki
- RIKEN Brain Science Institute, Wako-shi, Japan; Skolkovo Institute of Science and Technology (Skoltech), Moscow, Russia
- Terrence Sejnowski
- Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA, USA
5. Altarelli I, Dehaene-Lambertz G, Bavelier D. Individual differences in the acquisition of non-linguistic audio-visual associations in 5 year olds. Dev Sci 2019; 23:e12913. PMID: 31608547. DOI: 10.1111/desc.12913.
Abstract
Audio-visual associative learning - at least when linguistic stimuli are employed - is known to rely on core linguistic skills such as phonological awareness. Here we ask whether this is also the case in a task that does not manipulate linguistic information. Another question of interest is whether executive skills, often found to support learning, play a larger role in a non-linguistic audio-visual associative task than in a linguistic one. We present a new task that measures learning of associations between non-linguistic auditory signals and novel visual shapes. Importantly, our task shares with linguistic processes such as reading acquisition the need to associate sounds with arbitrary shapes; yet, rather than phonemes or syllables, it uses novel environmental sounds, thereby limiting direct reliance on linguistic abilities. Five-year-old French-speaking children (N = 76, 39 girls) were assessed individually on our novel audio-visual associative task, as well as on a number of other cognitive tasks evaluating linguistic abilities and executive functions. We found phonological awareness and language comprehension to be related to scores on the audio-visual associative task, while no correlation with executive functions was observed. These results underscore a key relation between foundational language competencies and audio-visual associative learning, even in the absence of linguistic input in the associative task.
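A minimal sketch of the kind of correlational analysis reported here, relating audio-visual learning scores to phonological awareness, language comprehension, and executive-function measures, is shown below. Variable names and simulated data are placeholders; the study's actual scoring and statistics may differ.

```python
# Sketch: correlating audio-visual associative learning scores with language
# and executive-function measures. All data below are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 76
phono = rng.normal(size=n)
comprehension = 0.5 * phono + rng.normal(scale=0.9, size=n)
exec_function = rng.normal(size=n)
av_learning = 0.4 * phono + 0.3 * comprehension + rng.normal(scale=0.8, size=n)

for name, score in [("phonological awareness", phono),
                    ("language comprehension", comprehension),
                    ("executive functions", exec_function)]:
    r, p = stats.pearsonr(av_learning, score)
    print(f"AV learning vs {name}: r = {r:+.2f}, p = {p:.3f}")
```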
Affiliation(s)
- Irene Altarelli
- Cognitive Neuroimaging Unit U992, INSERM, CEA DRF/Institut Joliot, Université Paris-Sud, Université Paris-Saclay, NeuroSpin Center, Gif/Yvette, France; Faculty of Psychology and Education Sciences, University of Geneva, Geneva, Switzerland; CNRS UMR 8240, Laboratory for the Psychology of Child Development and Education (LaPsyDE), University Paris Descartes, Université de Paris, Paris, France
- Ghislaine Dehaene-Lambertz
- Cognitive Neuroimaging Unit U992, INSERM, CEA DRF/Institut Joliot, Université Paris-Sud, Université Paris-Saclay, NeuroSpin Center, Gif/Yvette, France
- Daphne Bavelier
- Faculty of Psychology and Education Sciences, University of Geneva, Geneva, Switzerland
6. Grouped sparse Bayesian learning for voxel selection in multivoxel pattern analysis of fMRI data. Neuroimage 2019; 184:417-430. DOI: 10.1016/j.neuroimage.2018.09.031.