1
Guerra-Paiva S, Mira JJ, Strametz R, Fernandes J, Klemm V, Madarasova Geckova A, Knezevic B, Potura E, Buttigieg S, Carrillo I, Sousa P. Application and Evaluation of a Multimodal Training on the Second Victim Phenomenon at the European Researchers' Network Working on Second Victims Training School: Mixed Methods Study. JMIR Form Res 2024; 8:e58727. [PMID: 39213524 DOI: 10.2196/58727]
Abstract
BACKGROUND Health care workers (HCWs) are often impacted by distressing situations during patient care and can experience the second victim phenomenon (SVP). Providing an adequate response, training, and greater awareness of the SVP can improve HCWs' well-being and ultimately the quality of care and patient safety. OBJECTIVE This study aims to describe and evaluate a multimodal training organized by the European Researchers' Network Working on Second Victims to increase knowledge and overall awareness of the SVP and second victim programs. METHODS We implemented a multimodal training program, following an iterative approach based on a continuous quality improvement process, to enhance the methodology and materials of the training program over 2 years. We conducted web-based surveys and group interviews to evaluate the scope and design of the training, the self-directed learning materials, and the face-to-face activities. RESULTS Of the 42 accepted candidates, 38 (90%) attended the 2 editions of the Training School program. In the second edition, participants' satisfaction increased, particularly after the time allocated to the case-study discussions was adjusted (P<.001). After the multimodal training, participants reported better awareness and understanding of the SVP, support interventions, and their impact on health care. The main strengths of this Training School were its interdisciplinary approach, the contact with multiple cultures, the diversity of learning materials, and the commitment of the trainers and organizing team. CONCLUSIONS This multimodal training is suitable for different stakeholders in the health care community, including HCWs, clinical managers, patient safety and quality-of-care teams, academicians, researchers, and postgraduate students, regardless of their prior experience with the SVP.
Furthermore, this study represents a pioneering effort in elucidating the materials and methodology essential for extending this training approach to similar contexts.
Affiliation(s)
- Sofia Guerra-Paiva
- NOVA National School of Public Health, Public Health Research Centre, Comprehensive Health Research Center, NOVA University Lisbon, Lisbon, Portugal
- José Joaquín Mira
- Alicante-Sant Joan Health District, Alicante, Spain
- Department of Health Psychology, Miguel Hernandez University, Elche, Spain
- Reinhard Strametz
- Wiesbaden Institute for Healthcare Economics and Patient Safety, RheinMain University of Applied Sciences, Wiesbaden, Germany
- Joana Fernandes
- NOVA National School of Public Health, NOVA University Lisbon, Lisbon, Portugal
- Victoria Klemm
- Wiesbaden Institute for Healthcare Economics and Patient Safety, RheinMain University of Applied Sciences, Wiesbaden, Germany
- Andrea Madarasova Geckova
- Department of Health Psychology and Research Methodology, Faculty of Medicine, University of Pavol Jozef Šafárik, Košice, Slovakia
- Institute of Applied Psychology, Faculty of Social and Economic Sciences, Comenius University, Bratislava, Slovakia
- Bojana Knezevic
- Department for Quality Assurance and Improvement in Health Care, University Hospital Centre Zagreb, Zagreb, Croatia
- Eva Potura
- Gesundheit Österreich GmbH, Bundesinstitut für Qualität im Gesundheitswesen, Vienna, Austria
- Sandra Buttigieg
- Department of Health Systems Management and Leadership, Faculty of Health Sciences, University of Malta, Malta
- Irene Carrillo
- Department of Health Psychology, Miguel Hernandez University, Elche, Spain
- Paulo Sousa
- NOVA National School of Public Health, Public Health Research Centre, Comprehensive Health Research Center, NOVA University Lisbon, Lisbon, Portugal
2
Kiseleva A, Rekow D, Schaal B, Leleu A. Olfactory facilitation of visual categorization in the 4-month-old brain depends on visual demand. Dev Sci 2024:e13562. [PMID: 39188074 DOI: 10.1111/desc.13562]
Abstract
To navigate their environment, infants rely on intersensory facilitation when unisensory perceptual demand is high, a principle known as inverse effectiveness. Given that this principle has mainly been documented in the context of audiovisual stimulation, here we aim to determine whether it applies to olfactory-to-visual facilitation. We build on previous evidence that the mother's body odor facilitates face categorization in the 4-month-old brain and investigate whether this effect depends on visual demand. Scalp electroencephalography (EEG) was recorded in two groups of 4-month-old infants while they watched 6-Hz streams of visual stimuli with faces displayed as every 6th stimulus to tag a face-selective response at 1 Hz. We used variable natural stimuli in one group (Nat Group), while stimuli were simplified in the other group (Simp Group) to reduce perceptual categorization demand. During visual stimulation, infants were alternately exposed to their mother's odor versus a baseline odor. For both groups, we found an occipito-temporal face-selective response, but with a larger amplitude for the simplified stimuli, reflecting less demanding visual categorization. Importantly, the mother's body odor enhanced the response to natural, but not to simplified, face stimuli, indicating that maternal odor improves face categorization when it is most demanding for the 4-month-old brain. Overall, this study demonstrates that the inverse effectiveness of intersensory facilitation applies to the sense of smell during early perceptual development. RESEARCH HIGHLIGHTS: Intersensory facilitation is a function of unisensory perceptual demand in infants (inverse effectiveness). This inverse relation between multisensory and unisensory perception has mainly been documented using audiovisual stimulation. Here we show that olfactory-to-visual facilitation depends on visual demand in 4-month-old infants.
The inverse effectiveness of intersensory facilitation during early perceptual development applies to the sense of smell.
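The frequency-tagging logic described in this abstract can be sketched numerically: with a 6-Hz stimulation stream and a face every 6th image, a face-selective response is expected at exactly 1 Hz in the EEG amplitude spectrum. Below is a minimal, purely illustrative Python sketch on synthetic data (the sampling rate, amplitudes, and noise level are assumptions for demonstration, not values from the study):

```python
import numpy as np

# Synthetic "EEG": base response at the 6 Hz stimulation rate, a weaker
# face-selective response at 6/6 = 1 Hz, plus Gaussian noise.
fs = 250                      # sampling rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)  # 20 s epoch -> 0.05 Hz frequency resolution
rng = np.random.default_rng(0)
eeg = (2.0 * np.sin(2 * np.pi * 6 * t)      # base (general visual) response
       + 0.8 * np.sin(2 * np.pi * 1 * t)    # face-selective response
       + rng.normal(0.0, 1.0, t.size))      # noise

# Amplitude spectrum; 1 Hz and 6 Hz fall exactly on frequency bins,
# which is the point of frequency-tagging designs.
spectrum = np.abs(np.fft.rfft(eeg)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

amp_1hz = spectrum[np.argmin(np.abs(freqs - 1.0))]
amp_6hz = spectrum[np.argmin(np.abs(freqs - 6.0))]
print(f"face-selective (1 Hz) amplitude: {amp_1hz:.2f}")
print(f"base (6 Hz) amplitude: {amp_6hz:.2f}")
```

Because the tagged frequencies land on exact FFT bins, the responses can be read directly off the spectrum, well above the noise floor of neighboring bins.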
Affiliation(s)
- Anna Kiseleva
- Development of Olfactory Communication and Cognition Lab, Centre des Sciences du Goût et de l'Alimentation, UBFC, CNRS, INRAe, Institut Agro, Université de Bourgogne, Dijon, France
- Diane Rekow
- Development of Olfactory Communication and Cognition Lab, Centre des Sciences du Goût et de l'Alimentation, UBFC, CNRS, INRAe, Institut Agro, Université de Bourgogne, Dijon, France
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Benoist Schaal
- Development of Olfactory Communication and Cognition Lab, Centre des Sciences du Goût et de l'Alimentation, UBFC, CNRS, INRAe, Institut Agro, Université de Bourgogne, Dijon, France
- Arnaud Leleu
- Development of Olfactory Communication and Cognition Lab, Centre des Sciences du Goût et de l'Alimentation, UBFC, CNRS, INRAe, Institut Agro, Université de Bourgogne, Dijon, France
3
Weijs ML, Roel Lesur M, Daum MM, Lenggenhager B. Keeping up with ourselves: Multimodal processes underlying body ownership across the lifespan. Cortex 2024; 177:209-223. [PMID: 38875735 DOI: 10.1016/j.cortex.2024.05.013]
Abstract
The sense of a bodily self is thought to depend on the adaptive weighting and integration of bodily afferents and prior beliefs. While the physical body changes in shape, size, and functionality across the lifespan, the sense of body ownership remains relatively stable. Yet, little is known about how the multimodal integration underlying this sense of ownership is altered in ontogenetic periods of substantial physical change. We aimed to study this link for the motor and tactile domains in a mixed-reality paradigm where participants ranging from 7 to 80 years old saw their own body with temporally mismatching multimodal signals. Participants were either stroked on their hand or moved it, while they saw it in multiple trials with different visual delays. For each trial, they judged the visuo-motor/tactile synchrony and rated the sense of ownership for the seen hand. Visual dependence and proprioceptive acuity were additionally assessed. The results show that, across the lifespan, body ownership decreases with increasing temporal multisensory mismatch, in both the tactile and the motor domain. We found an increased sense of ownership with increasing age, independent of delay and modality. Delay sensitivity during multisensory conflicts was not consistently related to age. No effects of age were found on visual dependence or proprioceptive acuity. The results are at least partly in line with an enhanced weighting of top-down signals and a reduced weighting of bottom-up signals for the momentary sense of bodily self with increasing age.
Affiliation(s)
- Marieke L Weijs
- Department of Psychology, University of Zurich, Zurich, Switzerland; Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Marte Roel Lesur
- Department of Psychology, University of Zurich, Zurich, Switzerland; Department of Computer Science and Engineering, Universidad Carlos III de Madrid, Madrid, Spain; Association for Independent Research, Zurich, Switzerland
- Moritz M Daum
- Department of Psychology, University of Zurich, Zurich, Switzerland; Jacobs Center for Productive Youth Development, University of Zurich, Zurich, Switzerland
- Bigna Lenggenhager
- Department of Psychology, University of Zurich, Zurich, Switzerland; Association for Independent Research, Zurich, Switzerland
4
Rekow D, Baudouin JY, Kiseleva A, Rossion B, Durand K, Schaal B, Leleu A. Olfactory-to-visual facilitation in the infant brain declines gradually from 4 to 12 months. Child Dev 2024. [PMID: 39022837 DOI: 10.1111/cdev.14124]
Abstract
During infancy, intersensory facilitation declines gradually as unisensory perception develops. However, this trade-off has mainly been investigated using audiovisual stimulation. Here, fifty 4- to 12-month-old infants (26 females, predominantly White) were tested in 2017-2020 to determine whether the facilitating effect of their mother's body odor on neural face categorization, as previously observed at 4 months, decreases with age. In a baseline odor context, the results revealed a face-selective electroencephalographic response that increases and changes qualitatively between 4 and 12 months, marking improved face categorization. At the same time, the benefit of adding maternal odor fades with age (R2 = .31), indicating an inverse relation with the amplitude of the visual response and generalizing previous evidence from the audiovisual domain to olfactory-visual interactions.
Affiliation(s)
- Diane Rekow
- Development of Olfactory Communication & Cognition Lab, Centre des Sciences du Goût et de l'Alimentation, Université de Bourgogne, Université Bourgogne Franche-Comté, CNRS, INRAe, Institut Agro Dijon, Dijon, France
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Jean-Yves Baudouin
- Laboratoire "Développement, Individu, Processus, Handicap, Éducation" (DIPHE), Département Psychologie du Développement, de l'Éducation et des Vulnérabilités (PsyDÉV), Institut de Psychologie, Université de Lyon (Lumière Lyon 2), Bron, France
- Institut Universitaire de France, Paris, France
- Anna Kiseleva
- Development of Olfactory Communication & Cognition Lab, Centre des Sciences du Goût et de l'Alimentation, Université de Bourgogne, Université Bourgogne Franche-Comté, CNRS, INRAe, Institut Agro Dijon, Dijon, France
- Bruno Rossion
- Université de Lorraine, CNRS, IMoPA, Nancy, France
- Université de Lorraine, CHRU-Nancy, Service de Neurologie, Nancy, France
- Karine Durand
- Development of Olfactory Communication & Cognition Lab, Centre des Sciences du Goût et de l'Alimentation, Université de Bourgogne, Université Bourgogne Franche-Comté, CNRS, INRAe, Institut Agro Dijon, Dijon, France
- Benoist Schaal
- Development of Olfactory Communication & Cognition Lab, Centre des Sciences du Goût et de l'Alimentation, Université de Bourgogne, Université Bourgogne Franche-Comté, CNRS, INRAe, Institut Agro Dijon, Dijon, France
- Arnaud Leleu
- Development of Olfactory Communication & Cognition Lab, Centre des Sciences du Goût et de l'Alimentation, Université de Bourgogne, Université Bourgogne Franche-Comté, CNRS, INRAe, Institut Agro Dijon, Dijon, France
5
Décaillet M, Denervaud S, Huguenin-Virchaux C, Besuchet L, Fischer Fumeaux CJ, Murray MM, Schneider J. The impact of premature birth on auditory-visual processes in very preterm schoolchildren. NPJ Science of Learning 2024; 9:42. [PMID: 38971881 PMCID: PMC11227572 DOI: 10.1038/s41539-024-00257-3]
Abstract
Interactions between stimuli from different sensory modalities and their integration are central to daily life, contributing to improved perception. Being born prematurely and the subsequent hospitalization can have an impact not only on sensory processes, but also on the manner in which information from different senses is combined, i.e., multisensory processes. Very preterm (VPT) children (<32 weeks gestational age) present impaired multisensory processes in early childhood, persisting at least through the age of five. However, it remains largely unknown whether and how these consequences persist into later childhood. Here, we evaluated the integrity of auditory-visual multisensory processes in VPT schoolchildren. VPT children (N = 28; aged 8-10 years) received a standardized cognitive assessment and performed a simple detection task at their routine follow-up appointment. The simple detection task involved pressing a button as quickly as possible upon presentation of an auditory, visual, or simultaneous audio-visual stimulus. Compared to full-term (FT) children (N = 23; aged 6-11 years), the reaction times of VPT children were generally slower and more variable, regardless of sensory modality. Nonetheless, both groups exhibited multisensory facilitation on mean reaction times and inter-quartile ranges. There was no evidence that standardized cognitive or clinical measures correlated with the multisensory gains of VPT children. However, while gains in FT children exceeded predictions based on probability summation and thus forcibly invoked integrative processes, this was not the case for VPT children. Our findings provide evidence of atypical multisensory profiles in VPT children persisting into school age. These results could help in targeting supportive interventions for this vulnerable population.
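The probability-summation test mentioned in this abstract (commonly formalized as the race-model inequality) compares the empirical audio-visual reaction-time distribution against the bound predicted by the two unisensory distributions racing independently; only when the multisensory distribution beats that bound is integration inferred. A hedged sketch with simulated reaction times (the distributions and their parameters are illustrative assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reaction times in seconds for a button-press detection task
rt_a = rng.normal(0.42, 0.06, 300)    # auditory-only trials
rt_v = rng.normal(0.45, 0.06, 300)    # visual-only trials
rt_av = rng.normal(0.36, 0.05, 300)   # audio-visual trials (faster than either)

def ecdf(sample, t):
    """Empirical cumulative distribution of `sample` evaluated at times t."""
    return np.searchsorted(np.sort(sample), t, side="right") / sample.size

t = np.linspace(0.2, 0.7, 101)
# Race-model bound: F_A(t) + F_V(t), capped at 1
bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
violation = ecdf(rt_av, t) - bound

# A positive violation at fast RTs is evidence for integrative processes
print(f"max race-model violation: {violation.max():.3f}")
```

With these made-up parameters the audio-visual distribution violates the bound at short reaction times, which is the pattern the abstract reports for full-term but not very preterm children.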
Affiliation(s)
- Marion Décaillet
- Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Solange Denervaud
- Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Cléo Huguenin-Virchaux
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Laureline Besuchet
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Céline J Fischer Fumeaux
- Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Micah M Murray
- Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- Juliane Schneider
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
6
Ampollini S, Ardizzi M, Ferroni F, Cigala A. Synchrony perception across senses: A systematic review of temporal binding window changes from infancy to adolescence in typical and atypical development. Neurosci Biobehav Rev 2024; 162:105711. [PMID: 38729280 DOI: 10.1016/j.neubiorev.2024.105711]
Abstract
Sensory integration is increasingly acknowledged as crucial for the development of cognitive and social abilities. However, its developmental trajectory is still little understood. This systematic review delves into the topic by investigating the literature on developmental changes, from infancy through adolescence, in the Temporal Binding Window (TBW), the epoch of time within which sensory inputs are perceived as simultaneous and therefore integrated. Following comprehensive searches across the PubMed, Elsevier, and PsycInfo databases, only experimental, behavioral, English-language, peer-reviewed studies on multisensory temporal processing in 0-17-year-olds were included. Non-behavioral, non-multisensory, and non-human studies were excluded, as were those that did not directly focus on the TBW. The selection process was independently performed by two authors. The 39 selected studies involved 2859 participants in total. Findings indicate a predisposition towards cross-modal asynchrony sensitivity and a composite, still unclear, developmental trajectory, with atypical development associated with increased asynchrony tolerance. These results highlight the need for consistent and thorough research into TBW development to inform potential interventions.
Affiliation(s)
- Silvia Ampollini
- Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Borgo Carissimi, 10, Parma 43121, Italy
- Martina Ardizzi
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Via Volturno 39E, Parma 43121, Italy
- Francesca Ferroni
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Via Volturno 39E, Parma 43121, Italy
- Ada Cigala
- Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Borgo Carissimi, 10, Parma 43121, Italy
7
Maetzler W, Geritz J, Stagneth L, Emmert K. [Interpretation of a concept for functional movement disorders from the perspective of older patients]. Der Nervenarzt 2024; 95:516-524. [PMID: 38361113 DOI: 10.1007/s00115-024-01614-8]
Abstract
A recently published concept proposes that a significant proportion of the occurrence and persistence of functional movement disorders (FMD) can be explained by an increased/incorrect weighting of the expected movement (feedforward signal) in the presence of decreased/altered actual feedback of the movement. In the context of aging and age-associated diseases, these prerequisites become more likely to occur, alone or in combination. For example, the feedforward signal can be strengthened by an accumulated wealth of experience, but it can also become prone to error due to changes in attention and (fear of) falling. Conversely, the actual feedback is subject to age-related changes, such as the reduction of sensory functions. This could explain why FMD also occur in old age and offers treatment approaches for this so far poorly studied disorder. It follows that a specific focus on (the correction of) feedforward signals, as well as strengthening and training of the actual feedback, are potentially promising therapeutic approaches for older people with FMD.
Affiliation(s)
- Walter Maetzler
- Klinik für Neurologie, Universitätsklinikum Schleswig-Holstein, Campus Kiel, Christian-Albrechts-Universität zu Kiel, Arnold-Heller-Str. 3, 24105, Kiel, Germany
- Johanna Geritz
- Klinik für Neurologie, Universitätsklinikum Schleswig-Holstein, Campus Kiel, Christian-Albrechts-Universität zu Kiel, Arnold-Heller-Str. 3, 24105, Kiel, Germany
- Lina Stagneth
- Klinik für Neurologie, Universitätsklinikum Schleswig-Holstein, Campus Kiel, Christian-Albrechts-Universität zu Kiel, Arnold-Heller-Str. 3, 24105, Kiel, Germany
- Kirsten Emmert
- Klinik für Neurologie, Universitätsklinikum Schleswig-Holstein, Campus Kiel, Christian-Albrechts-Universität zu Kiel, Arnold-Heller-Str. 3, 24105, Kiel, Germany
8
Kong Y, Zhao C, Li D, Li B, Hu Y, Liu H, Woolgar A, Guo J, Song Y. Auditory change detection and visual selective attention: association between MMN and N2pc. Cereb Cortex 2024; 34:bhae175. [PMID: 38700440 DOI: 10.1093/cercor/bhae175]
Abstract
While the auditory and visual systems each provide distinct information to our brain, they also work together to process and prioritize input to address ever-changing conditions. Previous studies highlighted the trade-off between auditory change detection and visual selective attention; however, the relationship between them is still unclear. Here, we recorded electroencephalography signals from 106 healthy adults in three experiments. Our findings revealed a positive correlation at the population level between the amplitudes of event-related potential indices associated with auditory change detection (mismatch negativity) and visual selective attention (posterior contralateral N2) when elicited in separate tasks. This correlation persisted even when participants performed a visual task while disregarding simultaneous auditory stimuli. Interestingly, as visual attention demand increased, participants whose posterior contralateral N2 amplitude increased the most exhibited the largest reduction in mismatch negativity, suggesting a within-subject trade-off between the two processes. Taken together, our results suggest an intimate relationship and a potentially shared mechanism between auditory change detection and visual selective attention. We liken this to a total capacity limit that varies between individuals, which could drive correlated individual differences in auditory change detection and visual selective attention, as well as within-subject competition between the two, with task-based modulation of visual attention causing a within-participant decrease in auditory change detection sensitivity.
Affiliation(s)
- Yuanjun Kong
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekouwai Street, Beijing 100875, China
- MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF, UK
- Chenguang Zhao
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekouwai Street, Beijing 100875, China
- Dongwei Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekouwai Street, Beijing 100875, China
- Department of Psychology, Faculty of Arts and Sciences, Beijing Normal University at Zhuhai, 18 Jinfeng Road, Zhuhai 519087, China
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education, Faculty of Psychology, Beijing Normal University, 19 Xinjiekouwai Street, Beijing 100875, China
- Bingkun Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekouwai Street, Beijing 100875, China
- Yiqing Hu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekouwai Street, Beijing 100875, China
- Hongyu Liu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekouwai Street, Beijing 100875, China
- Alexandra Woolgar
- MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF, UK
- Jialiang Guo
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekouwai Street, Beijing 100875, China
- Yan Song
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekouwai Street, Beijing 100875, China
9
Marsicano G, Bertini C, Ronconi L. Alpha-band sensory entrainment improves audiovisual temporal acuity. Psychon Bull Rev 2024; 31:874-885. [PMID: 37783899 DOI: 10.3758/s13423-023-02388-x]
Abstract
Visual and auditory stimuli are transmitted from the environment to the sensory cortices with different timing, requiring the brain to encode when sensory inputs must be segregated or integrated into a single percept. The probability that different audiovisual (AV) stimuli are integrated into a single percept even when presented asynchronously is reflected in the construct of the temporal binding window (TBW). There is strong interest in testing whether it is possible to broaden or shrink the TBW by using different neuromodulatory approaches that can speed up or slow down ongoing alpha oscillations, which have repeatedly been hypothesized to be an important determinant of the TBW's size. Here, we employed a web-based sensory entrainment protocol combined with a simultaneity judgment task using simple flash-beep stimuli. The aim was to test whether AV temporal acuity could be modulated trial by trial by synchronizing ongoing neural oscillations in the prestimulus period to a rhythmic sensory stream presented in the upper (∼12 Hz) or lower (∼8.5 Hz) alpha range. As a control, we implemented a nonrhythmic condition where only the first and last entrainers were employed. Results show that upper alpha entrainment shrinks the AV TBW and improves AV temporal acuity compared with the lower alpha and control conditions. Our findings represent a proof of concept of the efficacy of sensory entrainment for improving AV temporal acuity in a trial-by-trial manner, and they strengthen the idea that alpha oscillations may reflect the temporal unit of AV temporal binding.
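The TBW construct used in this abstract is typically estimated from a simultaneity-judgment task by fitting a symmetric function to the proportion of "simultaneous" responses across stimulus-onset asynchronies (SOAs). The sketch below uses a simple moment-based Gaussian approximation on made-up response proportions (all numbers are illustrative assumptions; the study's actual fitting procedure may differ):

```python
import numpy as np

# Hypothetical proportions of "simultaneous" responses per SOA (ms);
# positive SOAs = vision leading. These are invented for illustration.
soa = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
p_sync = np.array([0.10, 0.35, 0.80, 0.95, 0.85, 0.40, 0.15])

# Moment-based Gaussian approximation: treat the response profile as a
# density and take its weighted mean and standard deviation.
w = p_sync / p_sync.sum()
mu = np.sum(w * soa)                          # point of subjective simultaneity
sigma = np.sqrt(np.sum(w * (soa - mu) ** 2))  # spread of the profile
tbw = 2 * sigma                               # one common TBW summary

print(f"PSS ≈ {mu:.0f} ms, TBW ≈ {tbw:.0f} ms")
```

A narrower fitted profile (smaller sigma) corresponds to a shrunken TBW, i.e., finer AV temporal acuity, which is the outcome the study reports for upper-alpha entrainment.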
Affiliation(s)
- Gianluca Marsicano
- Department of Psychology, University of Bologna, Viale Berti Pichat 5, 40121, Bologna, Italy
- Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Via Rasi e Spinelli 176, 47023, Cesena, Italy
- Caterina Bertini
- Department of Psychology, University of Bologna, Viale Berti Pichat 5, 40121, Bologna, Italy
- Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Via Rasi e Spinelli 176, 47023, Cesena, Italy
- Luca Ronconi
- School of Psychology, Vita-Salute San Raffaele University, Via Olgettina 58, 20132, Milan, Italy
- Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy
10
Carnevali L, Della Longa L, Dragovic D, Farroni T. Touch and look: The role of affective touch in promoting infants' attention towards complex visual scenes. Infancy 2024; 29:271-283. [PMID: 38180744 DOI: 10.1111/infa.12580]
Abstract
In a complex social environment, stimuli from different sensory modalities need to be integrated to decode communicative meanings. From very early in life, infants have to combine a multitude of sensory features with social and affective attributes. Of all the senses, touch constitutes a privileged channel for carrying affective-motivational meanings and fostering social connection. In the present study, we investigate whether sharing sensory stimulation that varies in its affective value differentially affects infants' attention towards visual stimuli. Six- to 11-month-old infants (N = 42) were familiarized with two characters matched, respectively, with tactile (affective or non-affective) and auditory stimulation; they were then repeatedly exposed to scenes where the two characters moved towards target objects. Our results showed a main effect of stimulation type (sound vs. touch) on looking times during familiarization, with longer looking times when sound was provided. During scene presentation, a main effect of the type of touch (affective vs. non-affective) emerged, with longer looking times in infants who had previously experienced affective touch, suggesting that this sensory experience may critically engage the self and modulate infant attention. Overall, these findings suggest that while sound acts as an attention-getter, affective touch supports sustained attention towards complex visual scenes beyond the stimulation period itself.
Affiliation(s)
- Laura Carnevali
- Department of Developmental Psychology and Socialization, University of Padova, Padova, Italy
- Letizia Della Longa
- Department of Developmental Psychology and Socialization, University of Padova, Padova, Italy
- Danica Dragovic
- Pediatric Unit, San Polo Hospital, Azienda Sanitaria Universitaria Giuliano-Isontina (ASUGI), Monfalcone, Italy
- Teresa Farroni
- Department of Developmental Psychology and Socialization, University of Padova, Padova, Italy
Collapse
|
11
|
Nava E, Giraud M, Bolognini N. The emergence of the multisensory brain: From the womb to the first steps. iScience 2024; 27:108758. [PMID: 38230260] [PMCID: PMC10790096] [DOI: 10.1016/j.isci.2023.108758]
Abstract
The development of the human being is a multisensory process that starts in the womb. By integrating spontaneous neuronal activity with inputs from the external world, the developing brain learns to make sense of itself through multiple sensory experiences. Over the past ten years, advances in neuroimaging and electrophysiological techniques have allowed the exploration of the neural correlates of multisensory processing in the newborn and infant brain, adding an important piece of information to behavioral evidence of early sensitivity to multisensory events. Here, we review recent behavioral and neuroimaging findings to document the origins and early development of multisensory processing, showing in particular that the human brain appears naturally tuned to multisensory events at birth, although this tuning requires multisensory experience to fully mature. We conclude the review by highlighting the potential uses and benefits of multisensory interventions in promoting healthy development, discussing emerging studies in preterm infants.
Affiliation(s)
- Elena Nava
- Department of Psychology & Milan Centre for Neuroscience (NeuroMI), University of Milan-Bicocca, Milan, Italy
- Michelle Giraud
- Department of Psychology & Milan Centre for Neuroscience (NeuroMI), University of Milan-Bicocca, Milan, Italy
- Nadia Bolognini
- Department of Psychology & Milan Centre for Neuroscience (NeuroMI), University of Milan-Bicocca, Milan, Italy
- Laboratory of Neuropsychology, IRCCS Istituto Auxologico Italiano, Milan, Italy

12
Yu L, Xu J. The Development of Multisensory Integration at the Neuronal Level. Adv Exp Med Biol 2024; 1437:153-172. [PMID: 38270859] [DOI: 10.1007/978-981-99-7611-9_10]
Abstract
Multisensory integration is a fundamental function of the brain. In the typical adult, multisensory neurons' responses to paired multisensory (e.g., audiovisual) cues are significantly more robust than the corresponding best unisensory response in many brain regions. Synthesizing sensory signals from multiple modalities can speed up sensory processing and improve the salience of outside events or objects. Despite its significance, multisensory integration has been shown not to be a neonatal feature of the brain. Neurons' ability to effectively combine multisensory information does not appear at once but develops gradually during early postnatal life (in cats, 4-12 weeks are required). Multisensory experience is critical for this developmental process. If animals are prevented from sensing normal visual scenes or sounds (i.e., deprived of the relevant multisensory experience), the development of the corresponding integrative ability is blocked until the appropriate multisensory experience is obtained. This section summarizes the extant literature on the development of multisensory integration (mainly using the cat superior colliculus as a model), sensory-deprivation-induced cross-modal plasticity, and how sensory experience (sensory exposure and perceptual learning) leads to plastic changes and the modification of neural circuits in cortical and subcortical areas.
Affiliation(s)
- Liping Yu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China
- Jinghong Xu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China

13
Wang L, Lin L, Ren J. The characteristics of audiovisual temporal integration in streaming-bouncing bistable motion perception: considering both implicit and explicit processing perspectives. Cereb Cortex 2023; 33:11541-11555. [PMID: 37874024] [DOI: 10.1093/cercor/bhad388]
Abstract
This study explored the behavioral and neural characteristics of audiovisual temporal integration in motion perception from both implicit and explicit perspectives. The streaming-bouncing bistable paradigm (SB task) was employed to investigate implicit temporal integration, while the corresponding simultaneity judgment task (SJ task) was used to examine explicit temporal integration. The behavioral results revealed a negative correlation between implicit and explicit temporal processing. In the ERP results of both tasks, three neural phases (PD100, ND180, and PD290) in the fronto-central region were identified as reflecting integration effects, and the auditory-evoked multisensory N1 component may serve as a primary component responsible for cross-modal temporal processing. However, there were significant differences between the VA ERPs in the SB and SJ tasks, and the influence of speed on implicit and explicit integration effects also varied. Building on the prior validation of temporal renormalization theory, these results suggest that implicit and explicit temporal integration operate under distinct processing modes within a shared neural network, underscoring the brain's flexibility and adaptability in cross-modal temporal processing.
Affiliation(s)
- Luning Wang
- School of Psychology, Shanghai University of Sport, No. 399, Changhai Road, Yangpu District, Shanghai, 200438, China
- Liyue Lin
- School of Psychology, Shanghai University of Sport, No. 399, Changhai Road, Yangpu District, Shanghai, 200438, China
- Jie Ren
- China Table Tennis College, Shanghai University of Sport, No. 399, Changhai Road, Yangpu District, Shanghai, 200438, China

14
Cavallo A, Casartelli L. Is rich behavior the solution or just a (relevant) piece of the puzzle? Comment on "Beyond simple laboratory studies: Developing sophisticated models to study rich behavior" by Maselli, Gordon, Eluchans, Lancia, Thiery, Moretti, Cisek, and Pezzulo. Phys Life Rev 2023; 47:186-188. [PMID: 37926019] [DOI: 10.1016/j.plrev.2023.10.013]
Affiliation(s)
- Andrea Cavallo
- Move'n'Brains Lab, Department of Psychology, Università degli Studi di Torino, Italy; C'MoN Unit, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
- Luca Casartelli
- Theoretical and Cognitive Neuroscience Unit, Scientific Institute IRCCS E. MEDEA, Bosisio Parini (LC), Italy

15
Bertonati G, Amadeo MB, Campus C, Gori M. Task-dependent spatial processing in the visual cortex. Hum Brain Mapp 2023; 44:5972-5981. [PMID: 37811869] [PMCID: PMC10619374] [DOI: 10.1002/hbm.26489]
Abstract
To solve spatial tasks, the human brain recruits the visual cortices. How spatial information is represented, however, is not fixed but depends on the reference frames in which the spatial inputs are involved. The present study investigates how the kind of spatial representation influences the recruitment of visual areas during multisensory spatial tasks. We tested participants in an electroencephalography experiment involving two audio-visual (AV) spatial tasks: a spatial bisection, in which participants estimated the relative position in space of an AV stimulus in relation to the position of two other stimuli, and a spatial localization, in which participants localized one AV stimulus in relation to themselves. Results revealed that the spatial tasks specifically modulated occipital event-related potentials (ERPs) after the onset of the stimuli. We observed a greater contralateral early occipital component (50-90 ms) when participants solved the spatial bisection task, and a more robust later occipital response (110-160 ms) when they processed the spatial localization task. This observation suggests that the different spatial representations elicited by multisensory stimuli are sustained by separate neurophysiological mechanisms.
Affiliation(s)
- G. Bertonati
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), Università degli Studi di Genova, Genoa, Italy
- M. B. Amadeo
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- C. Campus
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- M. Gori
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy

16
Sun X, Fu Q. The Visual Advantage Effect in Comparing Uni-Modal and Cross-Modal Probabilistic Category Learning. J Intell 2023; 11:218. [PMID: 38132836] [PMCID: PMC10744040] [DOI: 10.3390/jintelligence11120218]
Abstract
People rely on multiple learning systems to complete weather prediction (WP) tasks with visual cues. However, how people perform in audio and audiovisual modalities remains elusive. The present research investigated how the cue modality influences performance in probabilistic category learning and conscious awareness about the category knowledge acquired. A modified weather prediction task was adopted, in which the cues included two dimensions from visual, auditory, or audiovisual modalities. The results of all three experiments revealed better performances in the visual modality relative to the audio and audiovisual modalities. Moreover, participants primarily acquired unconscious knowledge in the audio and audiovisual modalities, while conscious knowledge was acquired in the visual modality. Interestingly, factors such as the amount of training, the complexity of visual stimuli, and the number of objects to which the two cues belonged influenced the amount of conscious knowledge acquired but did not change the visual advantage effect. These findings suggest that individuals can learn probabilistic cues and category associations across different modalities, but a robust visual advantage persists. Specifically, visual associations can be learned more effectively, and are more likely to become conscious. The possible causes and implications of these effects are discussed.
Affiliation(s)
- Xunwei Sun
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing 100083, China
- Beijing Key Laboratory of Behavior and Mental Health, School of Psychological and Cognitive Sciences, Peking University, Beijing 100080, China
- Qiufang Fu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing 100083, China

17
Neel ML, Jeanvoine A, Key A, Stark AR, Norton ES, Relland LM, Hay K, Maitre NL. Behavioral and neural measures of infant responsivity increase with maternal multisensory input in non-irritable infants. Brain Behav 2023; 13:e3253. [PMID: 37786238] [PMCID: PMC10636412] [DOI: 10.1002/brb3.3253]
Abstract
INTRODUCTION Parents often use sensory stimulation during early-life interactions with infants. These interactions, including gazing, rocking, or singing, scaffold child development. Previous studies have examined infant neural processing during highly controlled sensory stimulus presentation paradigms. OBJECTIVE In this study, we investigated infant behavioral and neural responsiveness during a mother-child social interaction in which the mother provided infant stimulation with a progressive increase in the number of sensory modalities. METHODS We prospectively collected and analyzed video-coded behavioral interactions and electroencephalogram (EEG) frontal asymmetry (FAS) from infants (n = 60) at 2-4 months born at ≥ 34 weeks gestation. RESULTS As the number of sensory modalities progressively increased during the interaction, infant behaviors of emotional connection in facial expressiveness, sensitivity to mother, and vocal communication increased significantly. Conversely, infant FAS for the entire cohort did not change significantly. However, when we accounted for infant irritability, both video-coded behaviors and EEG FAS markers of infant responsiveness increased across the interaction in the non-irritable infants. The non-irritable infants (49%) demonstrated positive FAS, indicating readiness to engage with, rather than to withdraw from, multisensory but not unisensory interactions with their mothers. CONCLUSIONS These results suggest that multisensory input from mothers is associated with a greater infant neural approach state and highlight the importance of infant behavioral state during neural measures of infant responsiveness.
Affiliation(s)
- Mary Lauren Neel
- Department of Pediatrics & Neonatology, Emory University School of Medicine & Children's Healthcare of Atlanta, Atlanta, GA, USA
- Arnaud Jeanvoine
- The Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA
- Ann R. Stark
- Department of Pediatrics & Neonatology, Beth Israel Deaconess Medical Center & Harvard Medical School, Boston, MA, USA
- Lance M. Relland
- The Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA
- Department of Anesthesiology & Pain Medicine, Nationwide Children's Hospital & The Ohio State University, Columbus, OH, USA
- Krystal Hay
- The Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA
- Nathalie L. Maitre
- Department of Pediatrics & Neonatology, Emory University School of Medicine & Children's Healthcare of Atlanta, Atlanta, GA, USA

18
Bruns P, Röder B. Development and experience-dependence of multisensory spatial processing. Trends Cogn Sci 2023; 27:961-973. [PMID: 37208286] [DOI: 10.1016/j.tics.2023.04.012]
Abstract
Multisensory spatial processes are fundamental for efficient interaction with the world. They include not only the integration of spatial cues across sensory modalities, but also the adjustment or recalibration of spatial representations to changing cue reliabilities, crossmodal correspondences, and causal structures. Yet how multisensory spatial functions emerge during ontogeny is poorly understood. New results suggest that temporal synchrony and enhanced multisensory associative learning capabilities first guide causal inference and initiate early coarse multisensory integration capabilities. These multisensory percepts are crucial for the alignment of spatial maps across sensory systems, and are used to derive more stable biases for adult crossmodal recalibration. The refinement of multisensory spatial integration with increasing age is further promoted by the inclusion of higher-order knowledge.
Affiliation(s)
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany

19
Choi I, Demir I, Oh S, Lee SH. Multisensory integration in the mammalian brain: diversity and flexibility in health and disease. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220338. [PMID: 37545309] [PMCID: PMC10404930] [DOI: 10.1098/rstb.2022.0338]
Abstract
Multisensory integration (MSI) occurs in a variety of brain areas, spanning cortical and subcortical regions. Traditional studies of sensory processing have treated the sensory cortices as processing sensory information in a modality-specific manner. The sensory cortices, however, send this information to other cortical and subcortical areas, including the higher association cortices and the other sensory cortices, where inputs from multiple modalities converge and integrate to generate a meaningful percept. This integration process is neither simple nor fixed, because these brain areas interact with each other via complicated circuits that can be modulated by numerous internal and external conditions. As a result, dynamic MSI makes multisensory decisions flexible and adaptive in behaving animals. Impairments in MSI occur in many psychiatric disorders and may result in an altered perception of multisensory stimuli and an abnormal reaction to them. This review discusses the diversity and flexibility of MSI in mammals, including humans, primates and rodents, as well as the brain areas involved. It further explains how such flexibility influences perceptual experiences in behaving animals in both health and disease. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Ilsong Choi
- Center for Synaptic Brain Dysfunctions, Institute for Basic Science (IBS), Daejeon 34141, Republic of Korea
- Ilayda Demir
- Department of Biological Sciences, KAIST, Daejeon 34141, Republic of Korea
- Seungmi Oh
- Department of Biological Sciences, KAIST, Daejeon 34141, Republic of Korea
- Seung-Hee Lee
- Center for Synaptic Brain Dysfunctions, Institute for Basic Science (IBS), Daejeon 34141, Republic of Korea
- Department of Biological Sciences, KAIST, Daejeon 34141, Republic of Korea

20
Newell FN, McKenna E, Seveso MA, Devine I, Alahmad F, Hirst RJ, O'Dowd A. Multisensory perception constrains the formation of object categories: a review of evidence from sensory-driven and predictive processes on categorical decisions. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220342. [PMID: 37545304] [PMCID: PMC10404931] [DOI: 10.1098/rstb.2022.0342]
Abstract
Although object categorization is a fundamental cognitive ability, it is also a complex process going beyond the perception and organization of sensory stimulation. Here we review existing evidence about how the human brain acquires and organizes multisensory inputs into object representations that may lead to conceptual knowledge in memory. We first focus on evidence for two processes in object perception: multisensory integration of redundant information (e.g. seeing and feeling a shape) and crossmodal, statistical learning of complementary information (e.g. the 'moo' sound of a cow and its visual shape). For both processes, the importance attributed to each sensory input in constructing a multisensory representation of an object depends on the working range of the specific sensory modality, the relative reliability or distinctiveness of the encoded information, and top-down predictions. Moreover, apart from sensory-driven influences on perception, the acquisition of featural information across modalities can affect semantic memory and, in turn, influence category decisions. In sum, we argue that both multisensory processes independently constrain the formation of object categories across the lifespan, possibly through early and late integration mechanisms, respectively, allowing us to efficiently achieve the everyday, but remarkable, ability of recognizing objects. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- F. N. Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- E. McKenna
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- M. A. Seveso
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- I. Devine
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- F. Alahmad
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- R. J. Hirst
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- A. O'Dowd
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland

21
Trujillo JP, Holler J. Interactionally Embedded Gestalt Principles of Multimodal Human Communication. Perspect Psychol Sci 2023; 18:1136-1159. [PMID: 36634318] [PMCID: PMC10475215] [DOI: 10.1177/17456916221141422]
Abstract
Natural human interaction requires us to produce and process many different signals, including speech, hand and head gestures, and facial expressions. These communicative signals, which occur in a variety of temporal relations with each other (e.g., parallel or temporally misaligned), must be rapidly processed as a coherent message by the receiver. In this contribution, we introduce the notion of interactionally embedded, affordance-driven gestalt perception as a framework that can explain how this rapid processing of multimodal signals is achieved as efficiently as it is. We discuss empirical evidence showing how basic principles of gestalt perception can explain some aspects of unimodal phenomena such as verbal language processing and visual scene perception but require additional features to explain multimodal human communication. We propose a framework in which high-level gestalt predictions are continuously updated by incoming sensory input, such as unfolding speech and visual signals. We outline the constituent processes that shape high-level gestalt perception and their role in perceiving relevance and prägnanz. Finally, we provide testable predictions that arise from this multimodal interactionally embedded gestalt-perception framework. This review and framework therefore provide a theoretically motivated account of how we may understand the highly complex, multimodal behaviors inherent in natural social interaction.
Affiliation(s)
- James P. Trujillo
- Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, the Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Judith Holler
- Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, the Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands

22
Williams ZJ, Schaaf R, Ausderau KK, Baranek GT, Barrett DJ, Cascio CJ, Dumont RL, Eyoh EE, Failla MD, Feldman JI, Foss-Feig JH, Green HL, Green SA, He JL, Kaplan-Kahn EA, Keçeli-Kaysılı B, MacLennan K, Mailloux Z, Marco EJ, Mash LE, McKernan EP, Molholm S, Mostofsky SH, Puts NAJ, Robertson CE, Russo N, Shea N, Sideris J, Sutcliffe JS, Tavassoli T, Wallace MT, Wodka EL, Woynaroski TG. Examining the latent structure and correlates of sensory reactivity in autism: a multi-site integrative data analysis by the autism sensory research consortium. Mol Autism 2023; 14:31. [PMID: 37635263] [PMCID: PMC10464466] [DOI: 10.1186/s13229-023-00563-4]
Abstract
BACKGROUND Differences in responding to sensory stimuli, including sensory hyperreactivity (HYPER), hyporeactivity (HYPO), and sensory seeking (SEEK) have been observed in autistic individuals across sensory modalities, but few studies have examined the structure of these "supra-modal" traits in the autistic population. METHODS Leveraging a combined sample of 3868 autistic youth drawn from 12 distinct data sources (ages 3-18 years and representing the full range of cognitive ability), the current study used modern psychometric and meta-analytic techniques to interrogate the latent structure and correlates of caregiver-reported HYPER, HYPO, and SEEK within and across sensory modalities. Bifactor statistical indices were used to both evaluate the strength of a "general response pattern" factor for each supra-modal construct and determine the added value of "modality-specific response pattern" scores (e.g., Visual HYPER). Bayesian random-effects integrative data analysis models were used to examine the clinical and demographic correlates of all interpretable HYPER, HYPO, and SEEK (sub)constructs. RESULTS All modality-specific HYPER subconstructs could be reliably and validly measured, whereas certain modality-specific HYPO and SEEK subconstructs were psychometrically inadequate when measured using existing items. Bifactor analyses supported the validity of a supra-modal HYPER construct (ωH = .800) but not a supra-modal HYPO construct (ωH = .653), and supra-modal SEEK models suggested a more limited version of the construct that excluded some sensory modalities (ωH = .800; 4/7 modalities). Modality-specific subscales demonstrated significant added value for all response patterns. Meta-analytic correlations varied by construct, although sensory features tended to correlate most with other domains of core autism features and co-occurring psychiatric symptoms (with general HYPER and speech HYPO demonstrating the largest numbers of practically significant correlations). 
LIMITATIONS Conclusions may not be generalizable beyond the specific pool of items used in the current study, which was limited to caregiver report of observable behaviors and excluded multisensory items that reflect many "real-world" sensory experiences. CONCLUSION Of the three sensory response patterns, only HYPER demonstrated sufficient evidence for valid interpretation at the supra-modal level, whereas supra-modal HYPO/SEEK constructs demonstrated substantial psychometric limitations. For clinicians and researchers seeking to characterize sensory reactivity in autism, modality-specific response pattern scores may represent viable alternatives that overcome many of these limitations.
Affiliation(s)
- Zachary J Williams
- Medical Scientist Training Program, Vanderbilt University School of Medicine, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Avenue South, Medical Center East, South Tower, Room 8310, Nashville, TN, 37232, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Frist Center for Autism and Innovation, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Roseann Schaaf
- Department of Occupational Therapy, College of Rehabilitation Sciences, Thomas Jefferson University, Philadelphia, PA, USA
- Jefferson Autism Center of Excellence, Farber Institute of Neuroscience, Thomas Jefferson University, Philadelphia, PA, USA
- Karla K Ausderau
- Department of Kinesiology, Occupational Therapy Program, University of Wisconsin-Madison, Madison, WI, USA
- Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
- Grace T Baranek
- Mrs. T.H. Chan Division of Occupational Science and Occupational Therapy, University of Southern California, Los Angeles, CA, USA
- D Jonah Barrett
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- School of Medicine, University of Alabama at Birmingham, Birmingham, AL, USA
- Carissa J Cascio
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Frist Center for Autism and Innovation, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Rachel L Dumont
- Department of Occupational Therapy, College of Rehabilitation Sciences, Thomas Jefferson University, Philadelphia, PA, USA
- Ekomobong E Eyoh
- Institute of Child Development, University of Minnesota, Minneapolis, MN, USA
- Jacob I Feldman
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Avenue South, Medical Center East, South Tower, Room 8310, Nashville, TN, 37232, USA
- Frist Center for Autism and Innovation, Vanderbilt University, Nashville, TN, USA
- Jennifer H Foss-Feig
- Seaver Autism Center for Research and Treatment, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Mindich Child Health and Development Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Heather L Green
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Shulamite A Green
- Department of Psychiatry and Biobehavioral Sciences, University of California - Los Angeles, Los Angeles, CA, USA
- Jason L He
- Department of Forensic and Neurodevelopmental Sciences, Sackler Institute for Translational Neurodevelopment, Institute of Psychiatry, Psychology, and Neuroscience, King's College London, London, UK
- Elizabeth A Kaplan-Kahn
- Department of Psychology, Syracuse University, Syracuse, NY, USA
- Department of Child and Adolescent Psychiatry and Behavioral Sciences, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Bahar Keçeli-Kaysılı
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Avenue South, Medical Center East, South Tower, Room 8310, Nashville, TN, 37232, USA
- Keren MacLennan
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Department of Psychology, Durham University, Durham, UK
- Zoe Mailloux
- Department of Occupational Therapy, College of Rehabilitation Sciences, Thomas Jefferson University, Philadelphia, PA, USA
- Elysa J Marco
- Department of Neurodevelopmental Medicine, Cortica Healthcare, San Rafael, CA, USA
- Lisa E Mash
- Division of Psychology, Department of Pediatrics, Baylor College of Medicine, Houston, TX, USA
- Elizabeth P McKernan
- Department of Psychology, Syracuse University, Syracuse, NY, USA
- Department of Child and Adolescent Psychiatry and Behavioral Sciences, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Sophie Molholm
- Department of Pediatrics, Albert Einstein College of Medicine, Bronx, NY, USA
- Dominick P. Purpura Department of Neuroscience, Rose F. Kennedy Intellectual and Developmental Disabilities Research Center, Albert Einstein College of Medicine, Bronx, NY, USA
- Stewart H Mostofsky
- Center for Neurodevelopmental and Imaging Research, Kennedy Krieger Institute, Baltimore, MD, USA
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Department of Psychiatry and Behavioral Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Nicolaas A J Puts
- Department of Forensic and Neurodevelopmental Sciences, Sackler Institute for Translational Neurodevelopment, Institute of Psychiatry, Psychology, and Neuroscience, King's College London, London, UK
- MRC Centre for Neurodevelopmental Disorders, King's College London, London, UK
| | - Caroline E Robertson
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
| | - Natalie Russo
- Department of Psychology, Syracuse University, Syracuse, NY, USA
| | - Nicole Shea
- Department of Psychology, Syracuse University, Syracuse, NY, USA
- Division of Pulmonology and Sleep Medicine, Department of Pediatrics, Kaleida Health, Buffalo, NY, USA
| | - John Sideris
- Mrs. T.H. Chan Division of Occupational Science and Occupational Therapy, University of Southern California, Los Angeles, CA, USA
| | - James S Sutcliffe
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Molecular Physiology and Biophysics, Vanderbilt University, Nashville, TN, USA
| | - Teresa Tavassoli
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
| | - Mark T Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Frist Center for Autism and Innovation, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
| | - Ericka L Wodka
- Department of Psychiatry and Behavioral Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Center for Autism and Related Disorders, Kennedy Krieger Institute, Baltimore, MD, USA
| | - Tiffany G Woynaroski
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Avenue South, Medical Center East, South Tower, Room 8310, Nashville, TN, 37232, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Frist Center for Autism and Innovation, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Communication Sciences and Disorders, John A. Burns School of Medicine, University of Hawaii, Honolulu, HI, USA
| |
Collapse
|
23
Vogt K. Neuroscience: Merging multisensory memories. Curr Biol 2023; 33:R817-R819. [PMID: 37552950 DOI: 10.1016/j.cub.2023.06.052] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/10/2023]
Abstract
How animals form and retain memories across multiple sensory modalities and how multisensory learning can enhance memory is largely unknown. A recent study sheds light on the neural mechanism underlying multisensory memory convergence in the Drosophila melanogaster brain.
Affiliation(s)
- Katrin Vogt
- Department of Biology, University of Konstanz, Konstanz, Germany; Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Konstanz, Germany.
24
Wilson KM, Arquilla AM, Saltzman W. The parental umwelt: Effects of parenthood on sensory processing in rodents. J Neuroendocrinol 2023; 35:e13237. [PMID: 36792373 DOI: 10.1111/jne.13237] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Revised: 01/16/2023] [Accepted: 01/18/2023] [Indexed: 01/26/2023]
Abstract
An animal's umwelt, comprising its perception of the sensory environment, which is inherently subjective, can change across the lifespan in accordance with major life events. In mammals, the onset of motherhood, in particular, is associated with a neural and sensory plasticity that alters a mother's detection and use of sensory information such as infant-related sensory stimuli. Although the literature surrounding mammalian mothers is well established, very few studies have addressed the effects of parenthood on sensory plasticity in mammalian fathers. In this review, we summarize the major findings on the effects of parenthood on behavioural and neural responses to sensory stimuli from pups in rodent mothers, with a focus on the olfactory, auditory, and somatosensory systems, as well as multisensory integration. We also review the available literature on sensory plasticity in rodent fathers. Finally, we discuss the importance of sensory plasticity for effective parental care, hormonal modulation of plasticity, and an exploration of temporal, ecological, and life-history considerations of sensory plasticity associated with parenthood. The changes in processing and/or perception of sensory stimuli associated with the onset of parental care may have both transient and long-lasting effects on parental behaviour and cognition in both mothers and fathers; as such, several promising areas of study, such as on the molecular/genetic, neurochemical, and experiential underpinnings of parenthood-related sensory plasticity, as well as determinants of interspecific variation, remain potential avenues for further exploration.
Affiliation(s)
- Kerianne M Wilson
- Department of Evolution, Ecology, and Organismal Biology, University of California, Riverside, CA, USA
- Department of Biology, Pomona College, Claremont, CA, USA
- April M Arquilla
- Department of Evolution, Ecology, and Organismal Biology, University of California, Riverside, CA, USA
- Wendy Saltzman
- Department of Evolution, Ecology, and Organismal Biology, University of California, Riverside, CA, USA
- Neuroscience Graduate Program, University of California, Riverside, CA, USA
25
Pulliam G, Feldman JI, Woynaroski TG. Audiovisual multisensory integration in individuals with reading and language impairments: A systematic review and meta-analysis. Neurosci Biobehav Rev 2023; 149:105130. [PMID: 36933815 PMCID: PMC10243286 DOI: 10.1016/j.neubiorev.2023.105130] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Revised: 03/09/2023] [Accepted: 03/10/2023] [Indexed: 03/18/2023]
Abstract
Differences in sensory function have been documented for a number of neurodevelopmental conditions, including reading and language impairments. Prior studies have measured audiovisual multisensory integration (i.e., the ability to combine inputs from the auditory and visual modalities) in these populations. The present study sought to systematically review and quantitatively synthesize the extant literature on audiovisual multisensory integration in individuals with reading and language impairments. A comprehensive search strategy yielded 56 reports, of which 38 were used to extract 109 group difference and 68 correlational effect sizes. There was an overall difference between individuals with reading and language impairments and comparisons on audiovisual integration. There was a nonsignificant trend towards moderation according to sample type (i.e., reading versus language) and publication/small study bias for this model. Overall, there was a small but non-significant correlation between metrics of audiovisual integration and reading or language ability; this model was not moderated by sample or study characteristics, nor was there evidence of publication/small study bias. Limitations and future directions for primary and meta-analytic research are discussed.
Affiliation(s)
- Grace Pulliam
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S, MCE South Tower 8310, Nashville 37232, TN, USA
- Jacob I Feldman
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S, MCE South Tower 8310, Nashville 37232, TN, USA; Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA
- Tiffany G Woynaroski
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S, MCE South Tower 8310, Nashville 37232, TN, USA; Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; John A. Burns School of Medicine, University of Hawaii, Manoa, HI, USA
26
Murray CA, Shams L. Crossmodal interactions in human learning and memory. Front Hum Neurosci 2023; 17:1181760. [PMID: 37266327 PMCID: PMC10229776 DOI: 10.3389/fnhum.2023.1181760] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2023] [Accepted: 05/02/2023] [Indexed: 06/03/2023] Open
Abstract
Most studies of memory and perceptual learning in humans have employed unisensory settings to simplify the study paradigm. However, in daily life we are often surrounded by complex and cluttered scenes made up of many objects and sources of sensory stimulation. Our experiences are, therefore, highly multisensory, both when passively observing the world and when acting and navigating. We argue that human learning and memory systems evolved to operate under these multisensory and dynamic conditions. The nervous system exploits the rich array of sensory inputs in this process: it is sensitive to the relationship between the sensory inputs, continuously updates sensory representations, and encodes memory traces based on the relationship between the senses. We review some recent findings that demonstrate a range of human learning and memory phenomena in which the interactions between visual and auditory modalities play an important role, and suggest possible neural mechanisms that can underlie some surprising recent findings. We outline open questions as well as directions of future research to unravel human perceptual learning and memory.
Affiliation(s)
- Carolyn A. Murray
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States
- Ladan Shams
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States
- Department of Bioengineering, Neuroscience Interdepartmental Program, University of California, Los Angeles, Los Angeles, CA, United States
27
Setti A, Hernández B, Hirst RJ, Donoghue OA, Kenny RA, Newell FN. Susceptibility to the sound-induced flash illusion is associated with gait speed in a large sample of middle-aged and older adults. Exp Gerontol 2023; 174:112113. [PMID: 36736711 DOI: 10.1016/j.exger.2023.112113] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2022] [Revised: 01/18/2023] [Accepted: 01/31/2023] [Indexed: 02/05/2023]
Abstract
BACKGROUND Multisensory integration is the ability to appropriately merge information from different senses for the purpose of perceiving and acting in the environment. During walking, information from multiple senses must be integrated appropriately to coordinate effective movements. We tested the association between a well characterised multisensory task, the Sound-Induced Flash Illusion (SIFI), and gait speed in 3255 participants from The Irish Longitudinal Study on Ageing. High susceptibility to this illusion at longer stimulus onset asynchronies (SOAs) characterises older adults and has been associated with cognitive and functional impairments; therefore, it should be associated with slower gait speed. METHOD Gait was measured under three conditions: usual pace, cognitive dual tasking, and maximal walking speed. A separate logistic mixed effects regression model was run for 1) gait at usual pace, 2) change in gait speed for the cognitive dual tasking relative to usual pace and 3) change in maximal walking speed relative to usual pace. In all cases a binary response indicating a correct/incorrect response to each SIFI trial was the dependent variable. The model controlled for covariates including age, sex, education, vision and hearing abilities, Body Mass Index, and cognitive function. RESULTS Slower gait was associated with more illusions, particularly at longer temporal intervals between the flash-beep pair and the second beep, indicating that those who integrated incongruent sensory inputs over longer intervals also walked slower. The relative changes in gait speed for cognitive dual tasking and maximal walking speed were also significantly associated with SIFI at longer SOAs. CONCLUSIONS These findings support growing evidence that mobility, susceptibility to falling and balance control are associated with multisensory processing in ageing.
Affiliation(s)
- Annalisa Setti
- School of Applied Psychology, University College Cork, Cork, Ireland; The Irish Longitudinal Study in Ageing, Trinity College Dublin, Dublin, Ireland
- Belinda Hernández
- The Irish Longitudinal Study in Ageing, Trinity College Dublin, Dublin, Ireland; Department of Medical Gerontology, Trinity College Dublin, Dublin, Ireland
- Rebecca J Hirst
- The Irish Longitudinal Study in Ageing, Trinity College Dublin, Dublin, Ireland; School of Psychology and Institute of Neuroscience, Trinity College Dublin, Ireland
- Orna A Donoghue
- The Irish Longitudinal Study in Ageing, Trinity College Dublin, Dublin, Ireland
- Rose Anne Kenny
- The Irish Longitudinal Study in Ageing, Trinity College Dublin, Dublin, Ireland; Mercer Institute for Successful Ageing, St. James Hospital, Dublin, Ireland; Department of Medical Gerontology, Trinity College Dublin, Dublin, Ireland
- Fiona N Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Ireland
28
Domenici N, Sanguineti V, Morerio P, Campus C, Del Bue A, Gori M, Murino V. Computational modeling of human multisensory spatial representation by a neural architecture. PLoS One 2023; 18:e0280987. [PMID: 36888612 PMCID: PMC9994749 DOI: 10.1371/journal.pone.0280987] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2022] [Accepted: 01/12/2023] [Indexed: 03/09/2023] Open
Abstract
Our brain constantly combines sensory information into a unitary percept to build coherent representations of the environment. Even though this process may appear seamless, integrating sensory inputs from various sensory modalities must overcome several computational issues, such as recoding and statistical inference problems. Following these assumptions, we developed a neural architecture replicating humans' ability to use audiovisual spatial representations. We considered the well-known ventriloquist illusion as a benchmark to evaluate its phenomenological plausibility. Our model closely replicated human perceptual behavior, proving a faithful approximation of the brain's ability to develop audiovisual spatial representations. Considering its ability to model audiovisual performance in a spatial localization task, we release our model in conjunction with the dataset we recorded for its validation. We believe it will be a powerful tool to model and better understand multisensory integration processes in experimental and rehabilitation environments.
Affiliation(s)
- Nicola Domenici
- Uvip, Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
- University of Genova, Genoa, Italy
- Valentina Sanguineti
- University of Genova, Genoa, Italy
- Pavis, Pattern Analysis & Computer Vision, Istituto Italiano di Tecnologia, Genoa, Italy
- Pietro Morerio
- Pavis, Pattern Analysis & Computer Vision, Istituto Italiano di Tecnologia, Genoa, Italy
- Claudio Campus
- Uvip, Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
- Alessio Del Bue
- Visual Geometry and Modelling, Istituto Italiano di Tecnologia, Genoa, Italy
- Monica Gori
- Uvip, Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
- Vittorio Murino
- Pavis, Pattern Analysis & Computer Vision, Istituto Italiano di Tecnologia, Genoa, Italy
- University of Verona, Verona, Italy
- Huawei Technologies Ltd., Ireland Research Center, Dublin, Ireland
29
Shapiro L, Hobbs E, Keenan ID. Transforming musculoskeletal anatomy learning with haptic surface painting. Anat Sci Educ 2023. [PMID: 36748362 DOI: 10.1002/ase.2262] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/01/2022] [Revised: 02/03/2023] [Accepted: 02/06/2023] [Indexed: 06/18/2023]
Abstract
Anatomical body painting has traditionally been utilized to support learner engagement and understanding of surface anatomy. Learners apply two-dimensional representations of surface markings directly on to the skin, based on the identification of key landmarks. Esthetically satisfying representations of musculature and viscera can also be created. However, established body painting approaches do not typically address three-dimensional spatial anatomical concepts. Haptic Surface Painting (HSP) is a novel activity, distinct from traditional body painting, and aims to develop learner spatial awareness. The HSP process is underpinned by previous work describing how a Haptico-visual observation and drawing method can support spatial, holistic, and collaborative anatomy learning. In HSP, superficial and underlying musculoskeletal and vascular structures are located haptically by palpation. Transparent colors are then immediately applied to the skin using purposive and cross-contour drawing techniques to produce corresponding visual representations of learner observation and cognition. Undergraduate students at a United Kingdom medical school (n = 7) participated in remote HSP workshops and focus groups. A phenomenological study of learner perspectives identified four themes from semantic qualitative analysis of transcripts: Three-dimensional haptico-visual exploration relating to learner spatial awareness of their own anatomy; cognitive freedom and accessibility provided by a flexible and empowering learning process; altered perspectives of anatomical detail, relationships, and clinical relevance; and delivery and context, relating to curricular integration, session format, and educator guidance. This work expands the pedagogic repertoire of anatomical body painting and has implications for anatomy educators seeking to integrate innovative, engaging, and effective learning approaches for transforming student learning.
Affiliation(s)
- Leonard Shapiro
- Division of Clinical Anatomy and Biological Anthropology, Department of Human Biology, Faculty of Health Sciences, University of Cape Town, Cape Town, South Africa
- Ella Hobbs
- School of Medicine, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne, UK
- Iain D Keenan
- School of Medicine, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne, UK
30
Brain Macro-Structural Alterations in Aging Rats: A Longitudinal Lifetime Approach. Cells 2023; 12:cells12030432. [PMID: 36766774 PMCID: PMC9914014 DOI: 10.3390/cells12030432] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 01/25/2023] [Accepted: 01/26/2023] [Indexed: 02/03/2023] Open
Abstract
Aging is accompanied by macro-structural alterations in the brain that may relate to age-associated cognitive decline. Animal studies could allow us to study this relationship, but so far it remains unclear whether their structural aging patterns correspond to those in humans. Therefore, by applying magnetic resonance imaging (MRI) and deformation-based morphometry (DBM), we longitudinally screened the brains of male RccHan:WIST rats for structural changes across their average lifespan. By combining dedicated region of interest (ROI) and voxel-wise approaches, we observed an increase in their global brain volume that was superimposed by divergent local morphologic alterations, with the largest aging effects in early and middle life. We detected a modality-dependent vulnerability to shrinkage across the visual, auditory, and somato-sensory cortical areas, whereas the piriform cortex showed partial resistance. Furthermore, shrinkage emerged in the amygdala, subiculum, and flocculus as well as in frontal, parietal, and motor cortical areas. Strikingly, we noticed the preservation of ectorhinal, entorhinal, retrosplenial, and cingulate cortical regions, which all represent higher-order brain areas and extraordinarily grew with increasing age. We think that the findings of this study will further advance aging research and may contribute to the establishment of interventional approaches to preserve cognitive health in advanced age.
31
Triplett JW, Rowland BA, Reber M. Editorial: Development and plasticity of multisensory circuits. Front Neural Circuits 2023; 16:1129196. [PMID: 36712836 PMCID: PMC9880465 DOI: 10.3389/fncir.2022.1129196] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Accepted: 12/28/2022] [Indexed: 01/15/2023] Open
Affiliation(s)
- Jason W. Triplett
- Center for Neuroscience Research, Children's National Hospital, Washington, DC, United States
- Department of Pediatrics, The George Washington University School of Medicine and Health Sciences, Washington, DC, United States
- Benjamin A. Rowland
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, NC, United States
- Michael Reber
- Donald K. Johnson Eye Institute, Krembil Research Institute, University Health Network, Toronto, ON, Canada
32
Ronconi L, Vitale A, Federici A, Mazzoni N, Battaglini L, Molteni M, Casartelli L. Neural dynamics driving audio-visual integration in autism. Cereb Cortex 2023; 33:543-556. [PMID: 35266994 DOI: 10.1093/cercor/bhac083] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2021] [Revised: 02/04/2022] [Indexed: 02/03/2023] Open
Abstract
Audio-visual (AV) integration plays a crucial role in supporting social functions and communication in autism spectrum disorder (ASD). However, behavioral findings remain mixed and, importantly, little is known about the underlying neurophysiological bases. Studies in neurotypical adults indicate that oscillatory brain activity in different frequencies subserves AV integration, pointing to a central role of (i) individual alpha frequency (IAF), which would determine the width of the cross-modal binding window; (ii) pre-/peri-stimulus theta oscillations, which would reflect the expectation of AV co-occurrence; (iii) post-stimulus oscillatory phase reset, which would temporally align the different unisensory signals. Here, we investigate the neural correlates of AV integration in children with ASD and typically developing (TD) peers, measuring electroencephalography during resting state and in an AV integration paradigm. As for neurotypical adults, AV integration dynamics in TD children could be predicted by the IAF measured at rest and by a modulation of anticipatory theta oscillations at single-trial level. Conversely, in ASD participants, AV integration/segregation was driven exclusively by the neural processing of the auditory stimulus and the consequent auditory-induced phase reset in visual regions, suggesting that a disproportionate elaboration of the auditory input could be the main factor characterizing atypical AV integration in autism.
Affiliation(s)
- Luca Ronconi
- School of Psychology, Vita-Salute San Raffaele University, 20132 Milan, Italy; Division of Neuroscience, IRCCS San Raffaele Scientific Institute, 20132 Milan, Italy
- Andrea Vitale
- Theoretical and Cognitive Neuroscience Unit, Child Psychopathology Department, Scientific Institute IRCCS Eugenio Medea, 23842 Bosisio Parini, Italy
- Alessandra Federici
- Theoretical and Cognitive Neuroscience Unit, Child Psychopathology Department, Scientific Institute IRCCS Eugenio Medea, 23842 Bosisio Parini, Italy; Sensory Experience Dependent (SEED) group, IMT School for Advanced Studies Lucca, 55100 Lucca, Italy
- Noemi Mazzoni
- Theoretical and Cognitive Neuroscience Unit, Child Psychopathology Department, Scientific Institute IRCCS Eugenio Medea, 23842 Bosisio Parini, Italy; Laboratory for Autism and Neurodevelopmental Disorders, Center for Neuroscience and Cognitive Systems, Istituto Italiano di Tecnologia, 38068 Rovereto, Italy; Department of Psychology and Cognitive Science, University of Trento, 38068 Rovereto, Italy
- Luca Battaglini
- Department of General Psychology, University of Padova, 35131 Padova, Italy; Department of Physics and Astronomy "Galileo Galilei", University of Padova, 35131 Padova, Italy
- Massimo Molteni
- Child Psychopathology Department, Scientific Institute IRCCS Eugenio Medea, 23842 Bosisio Parini, Italy
- Luca Casartelli
- Theoretical and Cognitive Neuroscience Unit, Child Psychopathology Department, Scientific Institute IRCCS Eugenio Medea, 23842 Bosisio Parini, Italy
33
Cognition Assessment Technologies on Deaf People. J Cogn 2023; 6:18. [PMID: 36910582 PMCID: PMC10000328 DOI: 10.5334/joc.262] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2022] [Accepted: 01/17/2023] [Indexed: 03/12/2023] Open
Abstract
In recent years there has been growing interest in research on the different ways deaf people process and consolidate cognition. It is known that hearing loss can lead to differences in some executive functions, such as inhibitory control or working memory. This literature review describes executive functions in deaf people and how they could be evaluated through technological devices that complement traditional assessments, such as neuropsychological batteries. We identified biometric devices, digital and physical interfaces, and software from the literature whose goal is to design or adapt technology to assess some cognition domains in several ways. The results of the review suggest the need to understand the cognitive phenomena that significantly impact the context of deaf people; moreover, this becomes relevant as a line of research in the Cognitive Science of Hearing. Using technologies to measure these domains and gain a better understanding of cognition in deaf people may provide possibilities for designing or adapting targeted educational or therapeutic strategies.
34
Feldman JI, Tu A, Conrad JG, Kuang W, Santapuram P, Woynaroski TG. The Impact of Singing on Visual and Multisensory Speech Perception in Children on the Autism Spectrum. Multisens Res 2022; 36:57-74. [PMID: 36731528 PMCID: PMC9924934 DOI: 10.1163/22134808-bja10087] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Accepted: 11/22/2022] [Indexed: 12/31/2022]
Abstract
Autistic children show reduced multisensory integration of audiovisual speech stimuli in response to the McGurk illusion. Previously, it has been shown that adults can integrate sung McGurk tokens. These sung speech tokens offer more salient visual and auditory cues, in comparison to the spoken tokens, which may increase the identification and integration of visual speech cues in autistic children. Forty participants (20 autism, 20 non-autistic peers) aged 7-14 completed the study. Participants were presented with speech tokens in four modalities: auditory-only, visual-only, congruent audiovisual, and incongruent audiovisual (i.e., McGurk; auditory 'ba' and visual 'ga'). Tokens were also presented in two formats: spoken and sung. Participants indicated what they perceived via a four-button response box (i.e., 'ba', 'ga', 'da', or 'tha'). Accuracies and perception of the McGurk illusion were calculated for each modality and format. Analysis of visual-only identification indicated a significant main effect of format, whereby participants were more accurate in sung versus spoken trials, but no significant main effect of group or interaction effect. Analysis of the McGurk trials indicated no significant main effect of format or group and no significant interaction effect. Sung speech tokens improved identification of visual speech cues, but did not boost the integration of visual cues with heard speech across groups. Additional work is needed to determine what properties of spoken speech contributed to the observed improvement in visual accuracy and to evaluate whether more prolonged exposure to sung speech may yield effects on multisensory integration.
Affiliation(s)
- Jacob I. Feldman
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Frist Center for Autism and Innovation, Vanderbilt University, Nashville, TN, USA
| | - Alexander Tu
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Present Address: Department of Otolaryngology and Communication Sciences, Medical College of Wisconsin, Milwaukee, WI, USA
| | - Julie G. Conrad
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Present Address: Department of Pediatrics, University of Illinois, Chicago, IL, USA
| | - Wayne Kuang
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Present Address: Department of Pediatrics, Los Angeles County and University of Southern California (LAC+USC) Medical Center, University of Southern California, Los Angeles, CA, USA
| | - Pooja Santapuram
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Present Address: Department of Anesthesiology, Columbia University Irving Medical Center, New York, NY, USA
| | - Tiffany G. Woynaroski
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Frist Center for Autism and Innovation, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
|
35
|
The development of audio-visual temporal precision precedes its rapid recalibration. Sci Rep 2022; 12:21591. [PMID: 36517503 PMCID: PMC9751280 DOI: 10.1038/s41598-022-25392-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2022] [Accepted: 11/29/2022] [Indexed: 12/15/2022] Open
Abstract
Through development, multisensory systems reach a balance between stability and flexibility: the systems integrate optimally cross-modal signals from the same events, while remaining adaptive to environmental changes. Is continuous intersensory recalibration required to shape optimal integration mechanisms, or does multisensory integration develop prior to recalibration? Here, we examined the development of multisensory integration and rapid recalibration in the temporal domain by re-analyzing published datasets for audio-visual, audio-tactile, and visual-tactile combinations. Results showed that children reach an adult level of precision in audio-visual simultaneity perception and show the first sign of rapid recalibration at 9 years of age. In contrast, there was very weak rapid recalibration for other cross-modal combinations at all ages, even when adult levels of temporal precision had developed. Thus, the development of audio-visual rapid recalibration appears to require the maturation of temporal precision. It may serve to accommodate distance-dependent travel time differences between light and sound.
|
36
|
Leisman G. On the Application of Developmental Cognitive Neuroscience in Educational Environments. Brain Sci 2022; 12:1501. [PMID: 36358427 PMCID: PMC9688360 DOI: 10.3390/brainsci12111501] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2022] [Revised: 10/25/2022] [Accepted: 11/01/2022] [Indexed: 09/29/2023] Open
Abstract
The paper overviews components of neurologic processing efficiency in order to bring innovative methodologies and thinking to school-based applications, and to changes in educational leadership, based on sound findings in the cognitive neurosciences applied to schools and learners. Systems science can allow us to better manage classroom-based learning and instruction on the basis of relatively easily evaluated efficiencies, inefficiencies, and optimization, instead of simply examining achievement. "Medicalizing" the learning process with concepts such as "learning disability", or employing grading methods such as pass-fail, does little to aid in understanding the processes that learners employ to acquire, integrate, remember, and apply information learned. The paper endeavors to overview, and provide references to, tools that allow a better focus on nervous-system-based strategic approaches to classroom learning.
Affiliation(s)
- Gerry Leisman
- Movement and Cognition Laboratory, Department of Physical Therapy, University of Haifa, Haifa 3498838, Israel
- Department of Neurology, Universidad de Ciencias Médicas de la Habana, Havana 11300, Cuba
|
37
|
Villwock A, Grin K. Somatosensory processing in deaf and deafblind individuals: How does the brain adapt as a function of sensory and linguistic experience? A critical review. Front Psychol 2022; 13:938842. [PMID: 36324786 PMCID: PMC9618853 DOI: 10.3389/fpsyg.2022.938842] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2022] [Accepted: 09/22/2022] [Indexed: 11/17/2022] Open
Abstract
How do deaf and deafblind individuals process touch? This question offers a unique model to understand the prospects and constraints of neural plasticity. Our brain constantly receives and processes signals from the environment and combines them into the most reliable information content. The nervous system adapts its functional and structural organization according to the input, and perceptual processing develops as a function of individual experience. However, there are still many unresolved questions regarding the deciding factors for these changes in deaf and deafblind individuals, and so far, findings are not consistent. To date, most studies have not taken the sensory and linguistic experiences of the included participants into account. As a result, the impact of sensory deprivation vs. language experience on somatosensory processing remains inconclusive. Even less is known about the impact of deafblindness on brain development. The resulting neural adaptations could be even more substantial, but no clear patterns have yet been identified. How do deafblind individuals process sensory input? Studies on deafblindness have mostly focused on single cases or groups of late-blind individuals. Importantly, the language backgrounds of deafblind communities are highly variable and include the usage of tactile languages. So far, this kind of linguistic experience and its consequences have not been considered in studies on basic perceptual functions. Here, we will provide a critical review of the literature, aiming at identifying determinants for neuroplasticity and gaps in our current knowledge of somatosensory processing in deaf and deafblind individuals.
Affiliation(s)
- Agnes Villwock
- Sign Languages, Department of Rehabilitation Sciences, Humboldt-Universität zu Berlin, Berlin, Germany
|
38
|
Wang MB, Halassa MM. Thalamocortical contribution to flexible learning in neural systems. Netw Neurosci 2022; 6:980-997. [PMID: 36875011 PMCID: PMC9976647 DOI: 10.1162/netn_a_00235] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2021] [Accepted: 01/19/2022] [Indexed: 11/04/2022] Open
Abstract
Animal brains evolved to optimize behavior in dynamic environments, flexibly selecting actions that maximize future rewards in different contexts. A large body of experimental work indicates that such optimization changes the wiring of neural circuits, appropriately mapping environmental input onto behavioral outputs. A major unsolved scientific question is how optimal wiring adjustments, which must target the connections responsible for rewards, can be accomplished when the relation between sensory inputs, action taken, and environmental context with rewards is ambiguous. The credit assignment problem can be categorized into context-independent structural credit assignment and context-dependent continual learning. In this perspective, we survey prior approaches to these two problems and advance the notion that the brain's specialized neural architectures provide efficient solutions. Within this framework, the thalamus with its cortical and basal ganglia interactions serves as a systems-level solution to credit assignment. Specifically, we propose that thalamocortical interaction is the locus of meta-learning where the thalamus provides cortical control functions that parametrize the cortical activity association space. By selecting among these control functions, the basal ganglia hierarchically guide thalamocortical plasticity across two timescales to enable meta-learning. The faster timescale establishes contextual associations to enable behavioral flexibility, while the slower one enables generalization to new contexts.
Affiliation(s)
- Mien Brabeeba Wang
- Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Michael M. Halassa
- Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, MA, USA
|
39
|
Pei C, Qiu Y, Li F, Huang X, Si Y, Li Y, Zhang X, Chen C, Liu Q, Cao Z, Ding N, Gao S, Alho K, Yao D, Xu P. The different brain areas occupied for integrating information of hierarchical linguistic units: a study based on EEG and TMS. Cereb Cortex 2022; 33:4740-4751. [PMID: 36178127 DOI: 10.1093/cercor/bhac376] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2022] [Revised: 08/29/2022] [Accepted: 08/30/2022] [Indexed: 11/13/2022] Open
Abstract
Human language units are hierarchical, and reading acquisition involves integrating multisensory information (typically from auditory and visual modalities) to access meaning. However, it is unclear how the brain processes and integrates language information at different linguistic units (words, phrases, and sentences) provided simultaneously in auditory and visual modalities. To address the issue, we presented participants with sequences of short Chinese sentences through auditory, visual, or combined audio-visual modalities while electroencephalographic responses were recorded. With a frequency tagging approach, we analyzed the neural representations of basic linguistic units (i.e. characters/monosyllabic words) and higher-level linguistic structures (i.e. phrases and sentences) across the 3 modalities separately. We found that audio-visual integration occurs in all linguistic units, and the brain areas involved in the integration varied across different linguistic levels. In particular, the integration of sentences activated the local left prefrontal area. Therefore, we used continuous theta-burst stimulation to verify that the left prefrontal cortex plays a vital role in the audio-visual integration of sentence information. Our findings suggest the advantage of bimodal language comprehension at hierarchical stages in language-related information processing and provide evidence for the causal role of the left prefrontal regions in processing information of audio-visual sentences.
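The frequency-tagging approach mentioned in the abstract identifies neural tracking of each linguistic level as a spectral peak at that level's presentation rate (e.g., if syllables arrive at a fixed rate, phrases and sentences recur at integer fractions of it). A minimal sketch of that logic under those assumptions (function and parameter names are my own, not from the study):

```python
import numpy as np

def tagged_power(eeg, fs, tag_freqs):
    """Spectral power of a 1-D EEG trace at each tagging frequency.

    Sketch of the frequency-tagging idea: neural tracking of a
    linguistic unit presented at rate f shows up as elevated power
    in the frequency bin closest to f.
    """
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)     # bin frequencies in Hz
    return {f: float(spectrum[np.argmin(np.abs(freqs - f))]) for f in tag_freqs}
```

With a 10 s trace sampled at 100 Hz containing sinusoids at 1 Hz and 4 Hz, the returned power at 1 Hz and 4 Hz clearly exceeds that at an untagged frequency such as 3 Hz.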
Affiliation(s)
- Changfu Pei
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China
- School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
| | - Yuan Qiu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China
- School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
| | - Fali Li
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China
- School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Research Unit of Neuroscience, Chinese Academy of Medical Science, 2019RU035, Chengdu, China
| | - Xunan Huang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China
- School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, Sichuan, 611731, China
| | - Yajing Si
- School of Psychology, Xinxiang Medical University, Xinxiang, 453003, China
| | - Yuqin Li
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China
- School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
| | - Xiabing Zhang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China
- School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
| | - Chunli Chen
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China
- School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
| | - Qiang Liu
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, Sichuan, 610066, China
| | - Zehong Cao
- STEM, Mawson Lakes Campus, University of South Australia, Adelaide, SA 5095, Australia
| | - Nai Ding
- College of Biomedical Engineering and Instrument Sciences, Key Laboratory for Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, 310007, China
| | - Shan Gao
- School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, Sichuan, 611731, China
| | - Kimmo Alho
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, FI 00014, Finland
| | - Dezhong Yao
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China
- School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Research Unit of Neuroscience, Chinese Academy of Medical Science, 2019RU035, Chengdu, China
| | - Peng Xu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China
- School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Research Unit of Neuroscience, Chinese Academy of Medical Science, 2019RU035, Chengdu, China
- Radiation Oncology Key Laboratory of Sichuan Province, Chengdu, 610041, China
|
40
|
Gori M, Bertonati G, Campus C, Amadeo MB. Multisensory representations of space and time in sensory cortices. Hum Brain Mapp 2022; 44:656-667. [PMID: 36169038 PMCID: PMC9842891 DOI: 10.1002/hbm.26090] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 08/05/2022] [Accepted: 09/07/2022] [Indexed: 01/25/2023] Open
Abstract
Clear evidence has demonstrated a supramodal organization of sensory cortices, with multisensory processing occurring even at early stages of information encoding. Within this context, early recruitment of sensory areas is necessary for the development of fine domain-specific (i.e., spatial or temporal) skills regardless of the sensory modality involved, with auditory areas playing a crucial role in temporal processing and visual areas in spatial processing. Given the domain specificity and the multisensory nature of sensory areas, in this study we hypothesized that the preferential domains of representation (i.e., space and time) of visual and auditory cortices are also evident in the early processing of multisensory information. Thus, we measured the event-related potential (ERP) responses of 16 participants while they performed multisensory spatial and temporal bisection tasks. Audiovisual stimuli occurred at three different spatial positions and time lags, and participants had to evaluate whether the second stimulus was spatially (spatial bisection task) or temporally (temporal bisection task) farther from the first or the third audiovisual stimulus. As predicted, the second audiovisual stimulus of both tasks elicited an early ERP response (time window 50-90 ms) in visual and auditory regions. However, this early ERP component was more substantial in occipital areas during the spatial bisection task and in temporal regions during the temporal bisection task. Overall, these results confirm the domain specificity of visual and auditory cortices and reveal that it also selectively modulates cortical activity in response to multisensory stimuli.
Affiliation(s)
- Monica Gori
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
| | - Giorgia Bertonati
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), Università degli Studi di Genova, Genoa, Italy
| | - Claudio Campus
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
| | - Maria Bianca Amadeo
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
|
41
|
Roth KC, Clayton KRH, Reynolds GD. Infant selective attention to native and non-native audiovisual speech. Sci Rep 2022; 12:15781. [PMID: 36138107 PMCID: PMC9500058 DOI: 10.1038/s41598-022-19704-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2021] [Accepted: 09/02/2022] [Indexed: 11/24/2022] Open
Abstract
The current study utilized eye-tracking to investigate the effects of intersensory redundancy and language on infant visual attention and detection of a change in prosody in audiovisual speech. Twelve-month-old monolingual English-learning infants viewed either synchronous (redundant) or asynchronous (non-redundant) presentations of a woman speaking in native or non-native speech. Halfway through each trial, the speaker changed prosody from infant-directed speech (IDS) to adult-directed speech (ADS) or vice versa. Infants focused more on the mouth of the speaker on IDS trials compared to ADS trials regardless of language or intersensory redundancy. Additionally, infants demonstrated greater detection of prosody changes from IDS speech to ADS speech in native speech. Planned comparisons indicated that infants detected prosody changes across a broader range of conditions during redundant stimulus presentations. These findings shed light on the influence of language and prosody on infant attention and highlight the complexity of audiovisual speech processing in infancy.
Affiliation(s)
- Kelly C Roth
- Developmental Cognitive Neuroscience Laboratory, Department of Psychology, University of Tennessee, Knoxville, TN, 37996, USA
- Data Scientist at 84.51°, Cincinnati, OH, 45202, USA
| | - Kenna R H Clayton
- Developmental Cognitive Neuroscience Laboratory, Department of Psychology, University of Tennessee, Knoxville, TN, 37996, USA
| | - Greg D Reynolds
- Developmental Cognitive Neuroscience Laboratory, Department of Psychology, University of Tennessee, Knoxville, TN, 37996, USA.
|
42
|
The multisensory cocktail party problem in children: Synchrony-based segregation of multiple talking faces improves in early childhood. Cognition 2022; 228:105226. [PMID: 35882100 DOI: 10.1016/j.cognition.2022.105226] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2021] [Revised: 07/09/2022] [Accepted: 07/11/2022] [Indexed: 11/23/2022]
Abstract
Extraction of meaningful information from multiple talkers relies on perceptual segregation. The temporal synchrony statistics inherent in everyday audiovisual (AV) speech offer a powerful basis for perceptual segregation. We investigated the developmental emergence of synchrony-based perceptual segregation of multiple talkers in 3-7-year-old children. Children either saw four identical or four different faces articulating temporally jittered versions of the same utterance and heard the audible version of the same utterance either synchronized with one of the talkers or desynchronized with all of them. Eye tracking revealed that selective attention to the temporally synchronized talking face increased while attention to the desynchronized faces decreased with age and that attention to the talkers' mouth primarily drove responsiveness. These findings demonstrate that the temporal synchrony statistics inherent in fluent AV speech assume an increasingly greater role in perceptual segregation of the multisensory clutter created by multiple talking faces in early childhood.
|
43
|
The relationship between multisensory associative learning and multisensory integration. Neuropsychologia 2022; 174:108336. [PMID: 35872233 DOI: 10.1016/j.neuropsychologia.2022.108336] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2020] [Revised: 07/15/2022] [Accepted: 07/16/2022] [Indexed: 11/23/2022]
Abstract
Integrating sensory information from multiple modalities leads to more precise and efficient perception and behaviour. The process of determining which sensory information should be perceptually bound relies on low-level stimulus features as well as on multisensory associations learned throughout development from the statistics of our environment. Here, we explored the relationship between multisensory associative learning and multisensory integration using electroencephalography (EEG) and behavioural measures. Sixty-one participants completed a three-phase study. First, participants were exposed to novel audiovisual shape-tone pairings, with frequent and infrequent stimulus pairings, and completed a target detection task. EEG recordings of the mismatch negativity (MMN) and P3 were calculated as neural indices of multisensory associative learning. Next, the same learned stimulus pairs were presented in audiovisual as well as unisensory auditory and visual modalities while both early (<120 ms) and late neural indices of multisensory integration were recorded. Finally, participants completed an analogous behavioural speeded-response task, with behavioural indices of multisensory gain calculated using the race model. Significant relationships were found between neural measures of associative learning in fronto-central and occipital areas and early and late indices of multisensory integration in frontal and centro-parietal areas, respectively. Participants who showed stronger indices of associative learning also exhibited stronger indices of multisensory integration of the stimuli they had learned to associate. Furthermore, a significant relationship was found between the neural index of early multisensory integration and behavioural indices of multisensory gain. These results provide insight into the neural underpinnings of how higher-order processes such as associative learning guide multisensory integration.
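The race model referenced in this abstract is commonly tested via Miller's race model inequality, which bounds the audiovisual reaction-time distribution by the sum of the unisensory distributions. A minimal sketch of one way such a violation test can be computed (illustrative only; function and variable names are my own, not taken from the study):

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Evaluate F_AV(t) - min(F_A(t) + F_V(t), 1) on a grid of times t.

    Miller's race model inequality states F_AV(t) <= F_A(t) + F_V(t);
    positive values anywhere indicate a violation, i.e. multisensory
    gain beyond mere statistical facilitation of two racing channels.
    """
    def ecdf(rts, t):
        # Fraction of reaction times less than or equal to t.
        rts = np.sort(np.asarray(rts, dtype=float))
        return np.searchsorted(rts, t, side="right") / len(rts)

    f_a = np.array([ecdf(rt_a, t) for t in t_grid])
    f_v = np.array([ecdf(rt_v, t) for t in t_grid])
    f_av = np.array([ecdf(rt_av, t) for t in t_grid])
    bound = np.minimum(f_a + f_v, 1.0)   # race model upper bound
    return f_av - bound                  # > 0 anywhere -> violation
```

For instance, if audiovisual responses are uniformly faster than either unisensory distribution, the difference is positive at early time points, signalling integration beyond the race model's prediction.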
|
44
|
Montenegro JTP, Seguin D, Duerden EG. Joint attention in infants at high familial risk for autism spectrum disorder and the association with thalamic and hippocampal macrostructure. Cereb Cortex Commun 2022; 3:tgac029. [PMID: 36072708 PMCID: PMC9441013 DOI: 10.1093/texcom/tgac029] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2022] [Revised: 06/30/2022] [Accepted: 07/05/2022] [Indexed: 11/12/2022] Open
Abstract
Autism spectrum disorder (ASD) is a heritable neurodevelopmental disorder. Infants diagnosed with ASD can show impairments in spontaneous gaze-following and seldom engage in joint attention (JA). The ability to initiate JA (IJA) can be more significantly impaired than the ability to respond to JA (RJA). In a longitudinal study, 101 infants with a familial risk for ASD were enrolled (62% males). Participants completed magnetic resonance imaging scans at 4 or 6 months of age. Subcortical volumes (thalamus, hippocampus, amygdala, basal ganglia, ventral diencephalon, and cerebellum) were automatically extracted, and early gaze and JA behaviors were assessed with standardized measures. The majority of infants were IJA nonresponders (n = 93, 92%), and over half were RJA nonresponders (n = 50, 52%). In the nonresponder groups, regression models testing the association of subcortical volumes with later ASD diagnosis accounted for age, sex, and cerebral volumes. In the IJA nonresponder group, the left hippocampus (B = −0.009, aOR = 0.991, P = 0.025), the right thalamus (B = −0.016, aOR = 0.984, P = 0.026), and the left thalamus (B = 0.015, aOR = 1.015, P = 0.019) predicted later ASD diagnosis. Alterations in thalamic and hippocampal macrostructure in at-risk infants who do not engage in IJA may reflect an enhanced vulnerability and may be key predictors of later ASD development.
Affiliation(s)
- Julia T P Montenegro
- Applied Psychology, Faculty of Education, Western University, Faculty of Education Building, 1137 Western Road, London, Ontario N6G1G7, Canada
| | - Diane Seguin
- Applied Psychology, Faculty of Education, Western University, Faculty of Education Building, 1137 Western Road, London, Ontario N6G1G7, Canada
- Physiology & Pharmacology, Schulich School of Medicine and Dentistry, Western University, Medical Science Building, Room 216, 1151 Richmond St, London, Ontario N6A5C1, Canada
| | - Emma G Duerden
- Applied Psychology, Faculty of Education, Western University, Faculty of Education Building, 1137 Western Road, London, Ontario N6G1G7, Canada
- Western Institute for Neuroscience, Western University, The Brain and Mind Institute, Western Interdisciplinary Research Building, Room 3190, 1151 Richmond St, London, Ontario N6A3K7, Canada
- Biomedical Engineering, Faculty of Engineering, Western University, Amit Chakma Engineering Building, Room 2405, 1151 Richmond St, London, Ontario N6A3K7, Canada
- Psychiatry, Schulich School of Medicine and Dentistry, University of Western Ontario, Parkwood Institute Mental Health Care Building, F4-430, London, Ontario N6C0A7, Canada
|
45
|
Digital haptics improve speed of visual search performance in a dual-task setting. Sci Rep 2022; 12:9728. [PMID: 35710569 PMCID: PMC9203452 DOI: 10.1038/s41598-022-13827-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Accepted: 05/27/2022] [Indexed: 11/16/2022] Open
Abstract
Dashboard-mounted touchscreen tablets are now common in vehicles. Screen and phone use in cars likely shifts drivers' attention away from the road and contributes to the risk of accidents. Nevertheless, vision is subject to multisensory influences from the other senses, and haptics may help maintain or even increase visual attention to the road while still allowing reliable dashboard control. Here, we provide a proof of concept for the effectiveness of digital haptic technologies (hereafter digital haptics), which use ultrasonic vibrations on a tablet screen to render haptic perceptions. Healthy human participants (N = 25) completed a divided-attention paradigm. The primary task was a centrally presented visual conjunction search task (VST), and the secondary task entailed control of laterally presented sliders on the tablet. Sliders were presented visually, haptically, or visuo-haptically and were vertical, horizontal, or circular. We reasoned that the primary task would be performed best when the secondary task was haptic-only. Indeed, reaction times (RTs) on the VST were fastest when the tablet task was haptic-only. This was not due to a speed-accuracy trade-off; there was no evidence that VST accuracy varied with the modality of the tablet task. These results provide the first quantitative support for introducing digital haptics into vehicles and similar contexts.
|
46
|
Bowsher-Murray C, Gerson S, von dem Hagen E, Jones CRG. The Components of Interpersonal Synchrony in the Typical Population and in Autism: A Conceptual Analysis. Front Psychol 2022; 13:897015. [PMID: 35734455 PMCID: PMC9208202 DOI: 10.3389/fpsyg.2022.897015] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2022] [Accepted: 05/16/2022] [Indexed: 01/18/2023] Open
Abstract
Interpersonal synchrony - the tendency for social partners to temporally co-ordinate their behaviour when interacting - is a ubiquitous feature of social interactions. Synchronous interactions play a key role in development, and promote social bonding and a range of pro-social behavioural outcomes across the lifespan. The process of achieving and maintaining interpersonal synchrony is highly complex, with inputs required from across perceptual, temporal, motor, and socio-cognitive domains. In this conceptual analysis, we synthesise evidence from across these domains to establish the key components underpinning successful non-verbal interpersonal synchrony, how such processes interact, and factors that may moderate their operation. We also consider emerging evidence that interpersonal synchrony is reduced in autistic populations. We use our account of the components contributing to interpersonal synchrony in the typical population to identify potential points of divergence in interpersonal synchrony in autism. The relationship between interpersonal synchrony and broader aspects of social communication in autism are also considered, together with implications for future research.
Affiliation(s)
- Claire Bowsher-Murray
- Wales Autism Research Centre, School of Psychology, Cardiff University, Cardiff, United Kingdom
- Cardiff University Centre for Human Developmental Science, School of Psychology, Cardiff University, Cardiff, United Kingdom
- Sarah Gerson
- Cardiff University Centre for Human Developmental Science, School of Psychology, Cardiff University, Cardiff, United Kingdom
- Elisabeth von dem Hagen
- Wales Autism Research Centre, School of Psychology, Cardiff University, Cardiff, United Kingdom
- Cardiff University Centre for Human Developmental Science, School of Psychology, Cardiff University, Cardiff, United Kingdom
- Cardiff University Brain Imaging Research Centre, School of Psychology, Cardiff University, Cardiff, United Kingdom
- Catherine R. G. Jones
- Wales Autism Research Centre, School of Psychology, Cardiff University, Cardiff, United Kingdom
- Cardiff University Centre for Human Developmental Science, School of Psychology, Cardiff University, Cardiff, United Kingdom

47
Zhou HY, Yang HX, Wei Z, Wan GB, Lui SSY, Chan RCK. Audiovisual synchrony detection for fluent speech in early childhood: An eye-tracking study. Psych J 2022; 11:409-418. [PMID: 35350086 DOI: 10.1002/pchj.538] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Revised: 01/09/2022] [Accepted: 02/17/2022] [Indexed: 11/05/2022]
Abstract
During childhood, the ability to detect audiovisual synchrony gradually sharpens for simple stimuli such as flash-beeps and single syllables. However, little is known about how children perceive synchrony in natural, continuous speech. This study investigated young children's gaze patterns while they watched movies of two identical speakers telling stories side by side. Only one speaker's lip movements matched the voices; the other either led or lagged behind the soundtrack by 600 ms. Children aged 3-6 years (n = 94, 52.13% male) showed an overall preference for the synchronous speaker, with no age-related changes in synchrony-detection sensitivity, as indicated by similar gaze patterns across ages. However, viewing time for the synchronous speech was significantly longer in the auditory-leading (AL) condition than in the visual-leading (VL) condition, suggesting that asymmetric sensitivities to AL versus VL asynchrony are already established in early childhood. When further examining gaze patterns on the dynamic faces, we found that greater attention to the mouth region was an adaptive strategy for reading visual speech signals and was thus associated with longer viewing of the synchronous videos. Attention to detail, a dimension of autistic traits characterized by local processing, was correlated with worse performance in speech synchrony processing. These findings extend previous research by showing the development of speech synchrony perception in young children, and may have implications for clinical populations (e.g., autism) with impaired multisensory integration.
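The preference measure described above reduces to a proportion of looking time toward the synchronous speaker. A minimal sketch, with a hypothetical function name and invented looking times (not data from the study), illustrating the reported AL/VL asymmetry:

```python
def synchrony_preference(t_sync_ms, t_async_ms):
    """Proportion of total looking time spent on the synchronous speaker.

    Values above 0.5 indicate a preference for audiovisual synchrony.
    """
    total = t_sync_ms + t_async_ms
    return t_sync_ms / total if total else float("nan")

# Hypothetical per-condition looking times (ms) for one child, showing a
# stronger synchrony preference when the audio leads than when the visual leads.
al = synchrony_preference(t_sync_ms=6200, t_async_ms=3800)  # audio-leading
vl = synchrony_preference(t_sync_ms=5300, t_async_ms=4700)  # visual-leading
print(f"AL preference: {al:.2f}, VL preference: {vl:.2f}")
```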
Affiliation(s)
- Han-Yu Zhou
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Han-Xue Yang
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Zhen Wei
- Affiliated Shenzhen Maternity and Child Healthcare Hospital, Shenzhen, China
- Guo-Bin Wan
- Affiliated Shenzhen Maternity and Child Healthcare Hospital, Shenzhen, China
- Simon S Y Lui
- Department of Psychiatry, The University of Hong Kong, Hong Kong Special Administrative Region, China
- Raymond C K Chan
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China

48
Marsicano G, Cerpelloni F, Melcher D, Ronconi L. Lower multisensory temporal acuity in individuals with high schizotypal traits: a web-based study. Sci Rep 2022; 12:2782. [PMID: 35177673 PMCID: PMC8854550 DOI: 10.1038/s41598-022-06503-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2021] [Accepted: 01/25/2022] [Indexed: 12/02/2022] Open
Abstract
Natural events are often multisensory, requiring the brain to combine information from the same spatial location and time across different senses. The importance of temporal coincidence has led to the introduction of the temporal binding window (TBW) construct, defined as the time range within which multisensory inputs are highly likely to be perceptually bound into a single entity. Anomalies in the TBW have been linked to confused perceptual experiences and inaccurate filtering of sensory inputs coming from different environmental sources. Indeed, larger TBWs have been associated with disorders such as schizophrenia and autism and are also correlated with higher levels of subclinical traits of these conditions in the general population. Here, we tested the feasibility of using a web-based version of a classic audio-visual simultaneity judgment (SJ) task with simple flash-beep stimuli to measure multisensory temporal acuity and its relationship with schizotypal traits in the general population. Results show that: (i) the response distributions obtained in the web-based SJ task were highly similar to those reported by studies carried out in controlled laboratory settings, and (ii) lower multisensory temporal acuity was associated with higher schizotypal traits in the “cognitive-perceptual” domain. Our findings reveal the possibility of adequately administering a web-based audio-visual SJ task outside a controlled laboratory setting, making it available to a more diverse and representative pool of participants. These results provide additional evidence for a close relationship between lower multisensory acuity and the expression of schizotypal traits in the general population.
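As a rough illustration of how a TBW can be estimated from SJ data, the sketch below fits a Gaussian to the proportion of "simultaneous" responses across stimulus onset asynchronies (SOAs) and takes the full width at half maximum as the TBW. The synthetic data and the coarse grid-search fit are invented for illustration; the study's actual psychometric model and fitting procedure may differ.

```python
import math

def gaussian(soa, amp, mu, sigma):
    """Predicted proportion of 'simultaneous' responses at a given SOA (ms)."""
    return amp * math.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

def fit_tbw(soas, p_simult):
    """Coarse grid-search Gaussian fit to SJ data; returns (mu, sigma, tbw).

    mu approximates the point of subjective simultaneity (PSS); the TBW is
    taken as the full width at half maximum, 2 * sqrt(2 * ln 2) * sigma.
    """
    amp = max(p_simult)
    best_err, best_mu, best_sigma = float("inf"), 0.0, 1.0
    for mu in range(-150, 151, 5):        # candidate PSS values (ms)
        for sigma in range(20, 401, 5):   # candidate widths (ms)
            err = sum((gaussian(s, amp, mu, sigma) - p) ** 2
                      for s, p in zip(soas, p_simult))
            if err < best_err:
                best_err, best_mu, best_sigma = err, mu, sigma
    tbw = 2 * math.sqrt(2 * math.log(2)) * best_sigma
    return best_mu, best_sigma, tbw

# Synthetic SJ data: proportion of "simultaneous" responses per SOA (ms);
# negative SOAs = auditory-leading, positive = visual-leading.
soas = [-400, -300, -200, -100, 0, 100, 200, 300, 400]
p = [0.05, 0.15, 0.45, 0.80, 0.95, 0.85, 0.55, 0.20, 0.05]
mu, sigma, tbw = fit_tbw(soas, p)
print(f"PSS ~ {mu} ms, sigma ~ {sigma} ms, TBW ~ {tbw:.0f} ms")
```

A wider fitted Gaussian (larger sigma, hence larger TBW) corresponds to lower multisensory temporal acuity, the quantity the abstract relates to schizotypal traits.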
Affiliation(s)
- Gianluca Marsicano
- School of Psychology, Vita-Salute San Raffaele University, Milan, Italy; Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Filippo Cerpelloni
- Center for Mind/Brain Sciences and Department of Psychology and Cognitive Science, University of Trento, Rovereto, Italy; Laboratory of Biological Psychology, Department of Brain and Cognition, Leuven Brain Institute, KU Leuven, Leuven, Belgium; Institute of Research in Psychology (IPSY) & Institute of Neuroscience (IoNS), University of Louvain (UCLouvain), Leuven, Belgium
- David Melcher
- Center for Mind/Brain Sciences and Department of Psychology and Cognitive Science, University of Trento, Rovereto, Italy; Psychology Program, Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Luca Ronconi
- School of Psychology, Vita-Salute San Raffaele University, Milan, Italy; Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy

49
Della Longa L, Valori I, Farroni T. Interpersonal Affective Touch in a Virtual World: Feeling the Social Presence of Others to Overcome Loneliness. Front Psychol 2022; 12:795283. [PMID: 35087455 PMCID: PMC8787079 DOI: 10.3389/fpsyg.2021.795283] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Accepted: 12/10/2021] [Indexed: 12/25/2022] Open
Abstract
Humans are by nature social beings, tuned to communicate and interact from the very beginning of their lives. The sense of touch represents the most direct and intimate channel of communication and a powerful means of connection between the self and others. In our digital age, the development and diffusion of internet-based technologies and virtual environments offer new opportunities for communication that overcome physical distance. However, these social interactions are mediated, and the tactile aspects of communication are often overlooked, diminishing the feeling of social presence and potentially contributing to an increased sense of social disconnection and loneliness. The current manuscript reviews the extant literature on the socio-affective dimension of touch and current advancements in interactive virtual environments in order to provide a new perspective on multisensory virtual communication. Specifically, we suggest that interpersonal affective touch might critically impact virtual social exchanges, promoting a sense of co-presence and social connection between individuals and possibly overcoming feelings of sensory loneliness. This topic of investigation is of crucial relevance from a theoretical perspective, aiming to understand how we integrate multisensory signals in processing and making sense of interpersonal exchanges, in both typical and atypical populations. Moreover, it paves the way for promising applications, exploring the possibility of using technical innovations to help people who suffer from social isolation and disconnection communicate more interactively.
Affiliation(s)
- Letizia Della Longa
- Department of Developmental Psychology and Socialization, University of Padova, Padua, Italy
- Irene Valori
- Department of Developmental Psychology and Socialization, University of Padova, Padua, Italy
- Teresa Farroni
- Department of Developmental Psychology and Socialization, University of Padova, Padua, Italy

50
Esplendori GF, Kobayashi RM, Püschel VADA. Multisensory integration approach, cognitive domains, meaningful learning: reflections for undergraduate nursing education. Rev Esc Enferm USP 2022; 56:e20210381. [PMID: 35421209 PMCID: PMC10101150 DOI: 10.1590/1980-220x-reeusp-2021-0381] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2021] [Accepted: 02/07/2022] [Indexed: 11/22/2022] Open
Abstract
Teaching with a multisensory approach helps students link new information to prior knowledge and understand relationships between concepts. This study aimed to reflect on the convergences between the Multisensory Integration Approach Model, the Learning Assimilation Theory and Meaningful Retention, and Bloom's Cognitive Process Domain, and to propose a taxonomic table for lesson planning in teaching Acute Coronary Syndrome, considering the confluence of these frameworks. All three frameworks consider the importance of students' prior knowledge, the process of abstraction and generalization of knowledge, and the relationship between working and long-term memory. Given these convergences and the taxonomic table produced, teaching topics of interest to nursing undergraduate students with the Multisensory Integration Approach Model as a component of the taxonomic table (pre-organizing or recall activities that arouse different sensory perceptions, aligned with instructional objectives and forms of assessment), in the light of the Learning Assimilation Theory and Meaningful Retention, has the potential to favor the reception and processing of instructional content.