51
Scurry AN, Lovelady Z, Lemus DM, Jiang F. Impoverished Inhibitory Control Exacerbates Multisensory Impairments in Older Fallers. Front Aging Neurosci 2021; 13:700787. PMID: 34630067; PMCID: PMC8500399; DOI: 10.3389/fnagi.2021.700787. Received 04/26/2021; accepted 08/27/2021; indexed 11/24/2022. Open access.
Abstract
Impaired temporal perception of multisensory cues is common in older adults and can lead to unreliable percepts of the external world. For instance, the sound-induced flash illusion (SIFI) can induce an illusory percept of a second flash when a beep is presented close in time to an initial flash-beep pair. Older adults who have enhanced susceptibility to falls demonstrate significantly stronger illusory percepts during the SIFI task than older adults without any history of falling. We hypothesize that a global inhibitory deficit may drive the impairments in both postural stability and multisensory function in older adults with a fall history (FH). We investigated oscillatory activity and perceptual performance during the SIFI task to understand how active sensory processing, indexed by gamma (30–80 Hz) power, was regulated by alpha (8–13 Hz) activity, oscillations that reflect inhibitory control. Compared to young adults (YA), the FH and non-faller (NF) groups demonstrated enhanced susceptibility to the SIFI. Further, the FH group showed significantly greater illusion strength than the NF group. The FH group also performed significantly worse than YA during congruent trials (two flash-beep pairs resulting in veridical perception of two flashes). In illusion compared to non-illusion trials, the NF group demonstrated reduced alpha power (i.e., diminished inhibitory control). Relative to YA and NF, the FH group showed reduced phase-amplitude coupling between alpha and gamma activity in non-illusion trials. This loss of inhibitory capacity over sensory processing in FH compared to NF suggests a change more severe than that expected from natural aging alone.
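The alpha-gamma phase-amplitude coupling measured here is often quantified with a mean-vector-length (modulation) index. The sketch below illustrates the idea on synthetic signals with numpy; the sampling rate, coupling strength, and variable names are illustrative assumptions, not taken from this study's EEG pipeline:

```python
import numpy as np

def modulation_index(phase, amplitude):
    """Mean-vector-length PAC index: |mean(A * exp(i*phi))| / mean(A).
    Near 0 when gamma amplitude is unrelated to alpha phase; larger when
    amplitude systematically waxes at a preferred phase."""
    return np.abs(np.mean(amplitude * np.exp(1j * phase))) / np.mean(amplitude)

fs = 500.0                                # sampling rate (Hz), hypothetical
t = np.arange(0, 10, 1 / fs)              # 10 s of synthetic data
alpha_phase = 2 * np.pi * 10 * t          # phase of a 10 Hz alpha rhythm

# Coupled case: gamma envelope waxes and wanes with alpha phase
coupled_amp = 1.0 + 0.8 * np.cos(alpha_phase)
# Uncoupled control: flat gamma envelope
flat_amp = np.ones_like(t)

mi_coupled = modulation_index(alpha_phase, coupled_amp)   # ~0.4
mi_flat = modulation_index(alpha_phase, flat_amp)         # ~0
```

A phase-locked envelope yields a clearly nonzero index while the flat control stays near zero; a real pipeline would first band-pass filter the EEG and extract phase and amplitude with a Hilbert transform.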
Affiliation(s)
- Alexandra N Scurry
- Department of Psychology, University of Nevada, Reno, Reno, NV, United States
- Zachary Lovelady
- Department of Psychology, University of Nevada, Reno, Reno, NV, United States
- Daniela M Lemus
- Department of Psychology, University of Nevada, Reno, Reno, NV, United States
- Fang Jiang
- Department of Psychology, University of Nevada, Reno, Reno, NV, United States
52
Perquin MN, Taylor M, Lorusso J, Kolasinski J. Directional biases in whole hand motion perception revealed by mid-air tactile stimulation. Cortex 2021; 142:221-236. PMID: 34280867; PMCID: PMC8422163; DOI: 10.1016/j.cortex.2021.03.033. Received 06/15/2020; revised 12/31/2020; accepted 03/30/2021; indexed 11/22/2022.
Abstract
Many emerging technologies are attempting to leverage the tactile domain to convey complex spatiotemporal information translated directly from the visual domain, such as shape and motion. Despite the intuitive appeal of touch for communication, we do not know to what extent the hand can substitute for the retina in this way. Here we ask whether the tactile system can be used to perceive complex whole hand motion stimuli, and whether it exhibits the same kind of established perceptual biases as reported in the visual domain. Using ultrasound stimulation, we were able to project complex moving dot percepts onto the palm in mid-air, over 30 cm above an emitter device. We generated dot kinetogram stimuli involving motion in three different directional axes ('Horizontal', 'Vertical', and 'Oblique') on the ventral surface of the hand. Using Bayesian statistics, we found clear evidence that participants were able to discriminate tactile motion direction. Furthermore, there was a marked directional bias in motion perception: participants were both better and more confident at discriminating motion in the vertical and horizontal axes of the hand, compared to stimuli moving obliquely. This pattern directly mirrors the perceptual biases that have been robustly reported in the visual domain, termed the 'Oblique Effect'. These data demonstrate the existence of biases in motion perception that transcend sensory modality. Furthermore, we extend the Oblique Effect to a whole hand scale, using motion stimuli presented on the broad and relatively low acuity surface of the palm, away from the densely innervated and much studied fingertips. These findings highlight targeted ultrasound stimulation as a versatile method to convey potentially complex spatial and temporal information without the need for a user to wear or touch a device.
Affiliation(s)
- Marlou N Perquin
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, UK; Biopsychology & Cognitive Neuroscience, Faculty of Psychology and Sports Science, Bielefeld University, Germany; Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Germany
- Mason Taylor
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, UK
- Jarred Lorusso
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, UK; School of Biological Sciences, University of Manchester, Manchester, UK
- James Kolasinski
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, UK
53
Pahor A, Collins C, Smith RN, Moon A, Stavropoulos T, Silva I, Peng E, Jaeggi SM, Seitz AR. Multisensory Facilitation of Working Memory Training. Journal of Cognitive Enhancement 2021; 5:386-395. PMID: 34485810; PMCID: PMC8415034; DOI: 10.1007/s41465-020-00196-y. Received 05/04/2020; accepted 10/16/2020; indexed 11/29/2022.
Abstract
Research suggests that memorization of multisensory stimuli benefits performance compared to memorization of unisensory stimuli; however, little is known about multisensory facilitation in the context of working memory (WM) training and transfer. To investigate this, 240 adults were randomly assigned to an N-back training task that consisted of visual-only stimuli, alternating visual and auditory blocks, or audio-visual (multisensory) stimuli, or to a passive control group. Participants in the active groups completed 13 sessions of N-back training (6.7 hours in total) and all groups completed a battery of WM tasks: untrained N-back tasks, Corsi Blocks, Sequencing, and Symmetry Span. The Multisensory group showed training N-level gains similar to those of the Visual Only group, and both of these groups outperformed the Alternating group on the training task. As expected, all three active groups significantly improved on untrained visual N-back tasks compared to the Control group. In contrast, the Multisensory group showed significantly greater gains on the Symmetry Span task, and to some extent on the Sequencing task, compared to the other groups. These results tentatively suggest that incorporating multisensory objects in a WM training protocol can benefit performance on the training task and potentially facilitate transfer to complex WM span tasks.
Affiliation(s)
- Anja Pahor
- University of California, Riverside, Department of Psychology, Riverside, California, USA
- University of California, Irvine, School of Education, Irvine, California, USA
- Cindy Collins
- University of California, Riverside, Department of Psychology, Riverside, California, USA
- Rachel N Smith
- University of California, Irvine, School of Education, Irvine, California, USA
- Austin Moon
- University of California, Riverside, Department of Psychology, Riverside, California, USA
- Trevor Stavropoulos
- University of California, Riverside, Department of Psychology, Riverside, California, USA
- Ilse Silva
- University of California, Riverside, Department of Psychology, Riverside, California, USA
- Elaine Peng
- University of California, Riverside, Department of Psychology, Riverside, California, USA
- Susanne M Jaeggi
- University of California, Irvine, School of Education, School of Social Sciences (Department of Cognitive Sciences), Irvine, California, USA
- Aaron R Seitz
- University of California, Riverside, Department of Psychology, Riverside, California, USA
54
Gao Q, Xiang Y, Zhang J, Luo N, Liang M, Gong L, Yu J, Cui Q, Sepulcre J, Chen H. A reachable probability approach for the analysis of spatio-temporal dynamics in the human functional network. Neuroimage 2021; 243:118497. PMID: 34428571; DOI: 10.1016/j.neuroimage.2021.118497. Received 01/30/2021; revised 08/06/2021; accepted 08/20/2021; indexed 12/25/2022. Open access.
Abstract
The dynamic architecture of the human brain has been consistently observed. However, there is still limited modeling work to elucidate how neuronal circuits are hierarchically and flexibly organized in functional systems. Here we proposed a reachable probability approach based on non-homogeneous Markov chains, to characterize all possible connectivity flows and the hierarchical structure of brain functional systems at the dynamic level. We proved the convergence of the functional brain network system at the theoretical level, and demonstrated that this approach is able to detect network steady states across connectivity structure, particularly in areas of the default mode network. We further explored the dynamically hierarchical functional organization centered at the primary sensory cortices. We observed smaller optimal reachable steps to their local functional regions, and differentiated patterns in larger optimal reachable steps for primary perceptual modalities. The reachable paths with the largest and second largest transition probabilities between primary sensory seeds via multisensory integration regions were also tracked to explore the flexibility and plasticity of multisensory integration. The present work provides a novel approach to depict both the stable and flexible hierarchical connectivity organization of the human brain.
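The core machinery, turning a connectivity matrix into row-stochastic transition probabilities and propagating occupancy probabilities step by step, can be sketched with a small homogeneous chain. Note the paper's approach uses non-homogeneous Markov chains; the 4-region weight matrix below is a hypothetical toy, not data from the study:

```python
import numpy as np

# Hypothetical symmetric "functional connectivity" among 4 regions
W = np.array([[0., 2., 1., 0.],
              [2., 0., 1., 1.],
              [1., 1., 0., 3.],
              [0., 1., 3., 0.]])

P = W / W.sum(axis=1, keepdims=True)   # row-stochastic transition matrix

def reach_probability(P, seed, k):
    """Probability of occupying each region after k steps starting at `seed`."""
    p = np.zeros(P.shape[0])
    p[seed] = 1.0
    for _ in range(k):
        p = p @ P                      # one step of the chain
    return p

def steady_state(P, iters=500):
    """Long-run occupancy via power iteration from a uniform start."""
    p = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        p = p @ P
    return p

pi = steady_state(P)                   # -> [0.1875, 0.25, 0.3125, 0.25]
```

For a random walk on a weighted graph the steady state is proportional to each region's total connection weight, which the power iteration above recovers; the paper's optimal reachable steps correspond to choosing the k at which reach probabilities meet a criterion.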
Affiliation(s)
- Qing Gao
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China; High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China
- Yu Xiang
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
- Jiabao Zhang
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
- Ning Luo
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
- Minfeng Liang
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
- Lisha Gong
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
- Jiali Yu
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
- Qian Cui
- School of Public Affairs and Administration, University of Electronic Science and Technology of China, Chengdu 611731, China
- Jorge Sepulcre
- Gordon Center for Medical Imaging, Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, United States
- Huafu Chen
- High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China; Department of Radiology, First Affiliated Hospital to Army Medical University, Chongqing 400038, China
55
Valori I, McKenna-Plumley PE, Bayramova R, Farroni T. Perception and Motion in Real and Virtual Environments: A Narrative Review of Autism Spectrum Disorders. Front Psychol 2021; 12:708229. PMID: 34322072; PMCID: PMC8311234; DOI: 10.3389/fpsyg.2021.708229. Received 05/11/2021; accepted 06/21/2021; indexed 11/13/2022. Open access.
Abstract
Atypical sensorimotor developmental trajectories greatly contribute to the profound heterogeneity that characterizes Autism Spectrum Disorders (ASD). Individuals with ASD manifest deviations in sensorimotor processing, with early markers in the use of sensory information coming from both the external world and the body, as well as motor difficulties. The cascading effect of these impairments on the later development of higher-order abilities (e.g., executive functions and social communication) underlines the need for interventions that focus on the remediation of sensorimotor integration skills. One of the promising technologies for such stimulation is Immersive Virtual Reality (IVR). In particular, head-mounted displays (HMDs) have unique features that fully immerse the user in virtual realities which disintegrate and otherwise manipulate multimodal information. The contribution of each individual sensory input and of multisensory integration to perception and motion can be evaluated and addressed according to a user's clinical needs. HMDs can therefore be used to create virtual environments aimed at improving people's sensorimotor functioning, with strong potential for individualization. Here we provide a narrative review of the sensorimotor atypicalities evidenced by children and adults with ASD, alongside some specific relevant features of IVR technology. We discuss how individuals with ASD may interact differently with IVR versus real environments on the basis of their specific atypical sensorimotor profiles, and describe the unique potential of HMD-delivered immersive virtual environments to this end.
Affiliation(s)
- Irene Valori
- Department of Developmental Psychology and Socialization, University of Padua, Padua, Italy
- Rena Bayramova
- Department of General Psychology, University of Padua, Padua, Italy
- Teresa Farroni
- Department of Developmental Psychology and Socialization, University of Padua, Padua, Italy
56
Wiring of higher-order cortical areas: Spatiotemporal development of cortical hierarchy. Semin Cell Dev Biol 2021; 118:35-49. PMID: 34034988; DOI: 10.1016/j.semcdb.2021.05.010. Received 02/15/2021; revised 04/27/2021; accepted 05/08/2021; indexed 01/04/2023.
Abstract
A hierarchical development of cortical areas was suggested over a century ago, but the diversity and complexity of cortical hierarchy properties have so far prevented a formal demonstration. The aim of this review is to clarify the similarities and differences in the developmental processes underlying cortical development of primary and higher-order areas. We start by recapitulating the historical and recent advances underlying the biological principle of cortical hierarchy in adults. We then revisit the arguments for a hierarchical maturation of cortical areas, and further integrate the principles of cortical areas specification during embryonic and postnatal development. We highlight how the dramatic expansion in cortical size might have contributed to the increased number of association areas sustaining cognitive complexification in evolution. Finally, we summarize the recent observations of an alteration of cortical hierarchy in neuropsychiatric disorders and discuss their potential developmental origins.
57
Anobile G, Morrone MC, Ricci D, Gallini F, Merusi I, Tinelli F. Typical Crossmodal Numerosity Perception in Preterm Newborns. Multisens Res 2021; 34:1-22. PMID: 33984832; DOI: 10.1163/22134808-bja10051. Received 07/14/2020; accepted 04/07/2021; indexed 11/19/2022.
Abstract
Premature birth is associated with a high risk of damage in the parietal cortex, a key area for numerical and non-numerical magnitude perception and mathematical reasoning. Children born preterm have higher rates of learning difficulties for school mathematics. In this study, we investigated how preterm newborns (born at 28-34 weeks of gestational age) and full-term newborns respond to visual numerosity after habituation to auditory stimuli of different numerosities. The results show that the two groups have a similar preferential looking response to visual numerosity, both preferring the incongruent set after crossmodal habituation. These results suggest that the numerosity system is resistant to prematurity.
Affiliation(s)
- Giovanni Anobile
- Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, 50135 Florence, Italy
- Maria C Morrone
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, 56123 Pisa, Italy
- Daniela Ricci
- National Centre of Services and Research for Prevention of Blindness and Rehabilitation of Visually Impaired, Rome, Italy
- Department of Pediatrics, Catholic University of the Sacred Heart, Rome, Italy
- Francesca Gallini
- Department of Pediatrics, Catholic University of the Sacred Heart, Rome, Italy
- Francesca Tinelli
- Department of Developmental Neuroscience, IRCCS Fondazione Stella Maris, 56128 Calambrone, Pisa, Italy
58
Lewkowicz DJ, Schmuckler M, Agrawal V. The multisensory cocktail party problem in adults: Perceptual segregation of talking faces on the basis of audiovisual temporal synchrony. Cognition 2021; 214:104743. PMID: 33940250; DOI: 10.1016/j.cognition.2021.104743. Received 09/11/2020; revised 04/16/2021; accepted 04/21/2021; indexed 10/21/2022.
Abstract
Social interactions often involve a cluttered multisensory scene consisting of multiple talking faces. We investigated whether audiovisual temporal synchrony can facilitate perceptual segregation of talking faces. Participants saw either four identical or four different talking faces producing temporally jittered versions of the same visible speech utterance, and heard the audible version of the same speech utterance. The audible utterance was either synchronized with the visible utterance produced by one of the talking faces or not synchronized with any of them. Eye tracking indicated that participants exhibited a marked preference for the synchronized talking face, that they gazed more at the mouth than the eyes overall, that they gazed more at the eyes of an audiovisually synchronized than a desynchronized talking face, and that they gazed more at the mouth when all talking faces were audiovisually desynchronized. These findings demonstrate that audiovisual temporal synchrony plays a major role in perceptual segregation of multisensory clutter and that adults rely on differential scanning strategies of a talker's eyes and mouth to discover sources of multisensory coherence.
Affiliation(s)
- David J Lewkowicz
- Haskins Laboratories, New Haven, CT, USA; Yale Child Study Center, New Haven, CT, USA
- Mark Schmuckler
- Department of Psychology, University of Toronto at Scarborough, Toronto, Canada
59
Fausto-Sterling A. A Dynamic Systems Framework for Gender/Sex Development: From Sensory Input in Infancy to Subjective Certainty in Toddlerhood. Front Hum Neurosci 2021; 15:613789. PMID: 33897391; PMCID: PMC8062721; DOI: 10.3389/fnhum.2021.613789. Received 10/30/2020; accepted 03/01/2021; indexed 11/13/2022. Open access.
Abstract
From birth to 15 months infants and caregivers form a fundamentally intersubjective, dyadic unit within which the infant's ability to recognize gender/sex in the world develops. Between about 18 and 36 months the infant accumulates an increasingly clear and subjective sense of self as female or male. We know little about how the precursors to gender/sex identity form during the intersubjective period, nor how they transform into an independent sense of self by 3 years of age. In this Theory and Hypothesis article I offer a general framework for thinking about this problem. I propose that through repetition and patterning, the dyadic interactions in which infants and caregivers engage imbue the infant with an embodied, i.e., sensorimotor, understanding of gender/sex. During this developmental period (which I label Phase 1) gender/sex is primarily an intersubjective project. From 15 to 18 months (which I label Phase 2) there are few reports of newly appearing gender/sex behavioral differences, and I hypothesize that this absence reflects a period of developmental instability during which there is a transition from gender/sex as primarily intersubjective to gender/sex as primarily subjective. Beginning at 18 months (i.e., the start of Phase 3), a toddler's subjective sense of self as having a gender/sex emerges, and it solidifies by 3 years of age. I propose a dynamic systems perspective to track how infants first assimilate gender/sex information during the intersubjective period (birth to 15 months); then explore what changes might occur during a hypothesized phase transition (15 to 18 months); and finally, review the emergence and initial stabilization of individual subjectivity, the period from 18 to 36 months. The critical questions explored focus on how to model and translate data from very different experimental disciplines, especially neuroscience, physiology, developmental psychology, and cognitive development. I close by proposing the formation of a research consortium on gender/sex development during the first 3 years after birth.
Affiliation(s)
- Anne Fausto-Sterling
- Department of Molecular Biology, Cell Biology, and Biochemistry, Brown University, Providence, RI, United States
60
The development of visuotactile congruency effects for sequences of events. J Exp Child Psychol 2021; 207:105094. PMID: 33714049; DOI: 10.1016/j.jecp.2021.105094. Received 05/27/2020; revised 12/11/2020; accepted 01/07/2021; indexed 11/23/2022.
Abstract
Sensitivity to the temporal coherence of visual and tactile signals increases perceptual reliability and is evident during infancy. However, it is not clear how, or whether, bidirectional visuotactile interactions change across childhood. Furthermore, no study has explored whether viewing a body modulates how children perceive visuotactile sequences of events. Here, children aged 5-7 years (n = 19), 8 and 9 years (n = 21), and 10-12 years (n = 24) and adults (n = 20) discriminated the number of target events (one or two) in a task-relevant modality (touch or vision) and ignored distractors (one or two) in the opposing modality. While participants performed the task, an image of either a hand or an object was presented. Children aged 5-7 years and 8 and 9 years showed larger crossmodal interference from visual distractors when discriminating tactile targets than the converse. Across age groups, this was strongest when two visual distractors were presented with one tactile target, implying a "fission-like" crossmodal effect (perceiving one event as two events). There was no influence of visual context (viewing a hand or non-hand image) on visuotactile interactions for any age group. Our results suggest robust interference from discontinuous visual information on tactile discrimination of sequences of events during early and middle childhood. These findings are discussed with respect to age-related changes in sensory dominance, selective attention, and multisensory processing.
61
Dorn K, Cauvet E, Weinert S. A cross-linguistic study of multisensory perceptual narrowing in German and Swedish infants during the first year of life. Infant and Child Development 2021. DOI: 10.1002/icd.2217. Indexed 11/10/2022.
Affiliation(s)
- Katharina Dorn
- Department of Developmental Psychology, Otto-Friedrich University, Bamberg, Germany
- Elodie Cauvet
- Department of Women's and Children's Health, Karolinska Institute of Neurodevelopmental Disorders (KIND), Stockholm, Sweden
- Sabine Weinert
- Department of Developmental Psychology, Otto-Friedrich University, Bamberg, Germany
62
Hearing Better with the Right Eye? The Lateralization of Multisensory Processing Affects Auditory Learning in Northern Bobwhite Quail (Colinus virginianus) Chicks. Appl Anim Behav Sci 2021; 236. PMID: 33776174; DOI: 10.1016/j.applanim.2021.105274. Indexed 11/21/2022.
Abstract
Precocial avian species exhibit a high degree of lateralization of perceptual and motor abilities, including preferential eye use for tasks such as social recognition and predator detection. Such lateralization has been related, in part, to differential experience prior to hatch: due to spatial and resulting postural constraints late in incubation, one eye and hemisphere (generally the right eye/left hemisphere) receive greater amounts of stimulation than the contralateral eye/hemisphere. This raises the possibility that the left hemisphere may specialize, or show relative advantages, in integrating information across visual and auditory modalities, given that it typically receives greater amounts of multimodal auditory and visual stimulation prior to hatch. The present study represents an initial investigation of this question in a precocial avian species, the Northern bobwhite quail (Colinus virginianus). Day-old bobwhite chicks received 5-min training sessions in which they vocalized to receive contingent playback of a bobwhite maternal call, presented with or without a light that flashed in synchrony with the notes of the call (i.e., bimodal versus unimodal exposure, respectively). Chicks were trained with or without eye patches that allowed them to experience the visual component of the bimodal stimulus with only the left eye (LE), right eye (RE), or both eyes (binocular; BIN). Finally, the light was placed in various positions relative to the speakers playing the maternal call across three experiments. Twenty-four hours later, chicks were given a simultaneous choice test between the familiarized call and a novel bobwhite maternal call. Given that the right eye and ear typically face outward and are thus unoccluded by the body during late prenatal development, we hypothesized that RE chicks would show facilitated learning under bimodal conditions compared to all other training conditions. This hypothesis was partially confirmed in Experiment 1, when the light was positioned 40 cm above the source of the maternal call. However, we also observed evidence of suppressed learning in chicks given BIN exposure to the bimodal audio-visual stimulus that was not present during auditory-only training. Experiments 2 and 3 demonstrated that this was likely related to activation of a left-hemisphere-dependent fear response when the left eye was exposed to a visual stimulus looming above the auditory stimulus. These results indicate that multisensory processing is lateralized in a precocial bird and that these species may thus provide a unique model for studying experience-dependent plasticity of intersensory perception.
63
Jones SA, Noppeney U. Ageing and multisensory integration: A review of the evidence, and a computational perspective. Cortex 2021; 138:1-23. PMID: 33676086; DOI: 10.1016/j.cortex.2021.02.001. Received 07/08/2020; revised 01/23/2021; accepted 02/02/2021; indexed 11/29/2022.
Abstract
The processing of multisensory signals is crucial for effective interaction with the environment, but our ability to perform this vital function changes as we age. In the first part of this review, we summarise existing research into the effects of healthy ageing on multisensory integration. We note that age differences vary substantially with the paradigms and stimuli used: older adults often receive at least as much benefit (to both accuracy and response times) as younger controls from congruent multisensory stimuli, but are also consistently more negatively impacted by the presence of intersensory conflict. In the second part, we outline a normative Bayesian framework that provides a principled and computationally informed perspective on the key ingredients involved in multisensory perception, and how these are affected by ageing. Applying this framework to the existing literature, we conclude that changes to sensory reliability, prior expectations (together with attentional control), and decisional strategies all contribute to the age differences observed. However, we find no compelling evidence of any age-related changes to the basic inference mechanisms involved in multisensory perception.
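A core ingredient of the normative Bayesian framework invoked here is reliability-weighted cue fusion: under independent Gaussian noise, the statistically optimal estimate weights each cue by its inverse variance, and the fused variance is lower than that of either cue alone. A minimal sketch (the numbers are illustrative, not drawn from the review):

```python
def fuse_cues(mu_a, var_a, mu_v, var_v):
    """Optimal (maximum-likelihood) fusion of two independent Gaussian cues:
    each cue is weighted by its reliability, i.e., inverse variance."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    mu = w_a * mu_a + (1.0 - w_a) * mu_v
    var = 1.0 / (1.0 / var_a + 1.0 / var_v)   # lower than either single-cue variance
    return mu, var

# Illustrative audiovisual localization: noisy auditory cue, sharp visual cue
mu, var = fuse_cues(mu_a=10.0, var_a=4.0, mu_v=12.0, var_v=1.0)
# -> estimate 11.6 (pulled toward the reliable visual cue), variance 0.8
```

In this framework, age-related increases in sensory noise (larger var_a or var_v) shift the weights toward the more reliable modality and inflate the fused variance, which is one way such models accommodate the reviewed age differences.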
Affiliation(s)
- Samuel A Jones
- The Staffordshire Centre for Psychological Research, Staffordshire University, Stoke-on-Trent, UK
- Uta Noppeney
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands
64
Murai T, Sukoff Rizzo SJ. The Importance of Complementary Collaboration of Researchers, Veterinarians, and Husbandry Staff in the Successful Training of Marmoset Behavioral Assays. ILAR J 2021; 61:230-247. PMID: 33501501; DOI: 10.1093/ilar/ilaa024. Received 03/27/2020; revised 08/31/2020; accepted 09/09/2020; indexed 12/30/2022. Open access.
Abstract
Interest in marmosets as research models has grown exponentially over the last decade, especially as the research community seeks to address the gaps left by historical animal models for behavioral and cognitive disorders. The spectrum of human disease traits that present naturally in marmosets, as well as the range of analogous human behaviors that can be assessed in marmosets, makes them ideally suited as translational models for behavioral and cognitive disorders. Regardless of a project's specific research aims, it would be impossible to meet its goals without close collaboration between researchers, veterinarians, and animal care staff. Behavior is inherently variable, as are marmosets, which are genetically and phenotypically diverse. Thus, to ensure rigor, reliability, and reproducibility of results, it is important that the animal's daily husbandry and veterinary needs are met in the research environment and align with the research goals, while keeping the welfare of the animal the highest priority. Much of the information described herein provides details on key components of successful behavioral testing, based on a compendium of methods from peer-reviewed publications and our own experiences. Specific areas highlighted include habituation procedures, selection of appropriate rewards, optimization of testing environments, and ways to integrate regular veterinary and husbandry procedures into the research program with minimal disruption to the behavioral testing plan. This article aims to provide a broad foundation for researchers new to establishing behavioral and cognitive testing paradigms in marmosets, and especially for the veterinary and husbandry colleagues who are indispensable collaborators on these research projects.
Affiliation(s)
- Takeshi Murai
- University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA

65
Kaya U, Kafaligonul H. Audiovisual interactions in speeded discrimination of a visual event. Psychophysiology 2021; 58:e13777. PMID: 33483971; DOI: 10.1111/psyp.13777.
Abstract
The integration of information from different senses is central to our perception of the external world. Audiovisual interactions have been particularly well studied in this context, and various illusions have been developed to demonstrate strong influences of these interactions on the final percept. Using audiovisual paradigms, previous studies have shown that even task-irrelevant information provided by a secondary modality can change the detection and discrimination of a primary target. These modulations have been found to depend significantly on the relative timing between auditory and visual stimuli. Although these interactions in time have been commonly reported, our understanding of the relationship between modulations of event-related potentials (ERPs) and final behavioral performance remains limited. Here, we aimed to shed light on this important issue by using a speeded discrimination paradigm combined with electroencephalography (EEG). During the experimental sessions, the timing between an auditory click and a visual flash was varied over a wide range of stimulus onset asynchronies, and observers were engaged in speeded discrimination of flash location. Behavioral reaction times were significantly changed by click timing. Furthermore, the modulations of evoked activities over medial parietal/parieto-occipital electrodes were associated with this effect. These modulations were within the 126-176 ms time range and, more importantly, they were also correlated with the changes in reaction times. These results provide an important functional link between audiovisual interactions at early stages of sensory processing and reaction times. Together with previous research, they further suggest that early crossmodal interactions play a critical role in perceptual performance.
Affiliation(s)
- Utku Kaya
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey; Informatics Institute, Middle East Technical University, Ankara, Turkey; Department of Anesthesiology, University of Michigan, Ann Arbor, MI, USA
- Hulusi Kafaligonul
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey; Interdisciplinary Neuroscience Program, Aysel Sabuncu Brain Research Center, Bilkent University, Ankara, Turkey

66
Weatherhead D, Arredondo MM, Nácar Garcia L, Werker JF. The Role of Audiovisual Speech in Fast-Mapping and Novel Word Retention in Monolingual and Bilingual 24-Month-Olds. Brain Sci 2021; 11:114. PMID: 33467100; PMCID: PMC7830540; DOI: 10.3390/brainsci11010114.
Abstract
Three experiments examined the role of audiovisual speech on 24-month-old monolingual and bilinguals’ performance in a fast-mapping task. In all three experiments, toddlers were exposed to familiar trials which tested their knowledge of known word–referent pairs, disambiguation trials in which novel word–referent pairs were indirectly learned, and retention trials which probed their recognition of the newly-learned word–referent pairs. In Experiment 1 (n = 48), lip movements were present during familiar and disambiguation trials, but not retention trials. In Experiment 2 (n = 48), lip movements were present during all three trial types. In Experiment 3 (bilinguals only, n = 24), a still face with no lip movements was present in all three trial types. While toddlers succeeded in the familiar and disambiguation trials of every experiment, success in the retention trials was only found in Experiment 2. This work suggests that the extra-linguistic support provided by lip movements improved the learning and recognition of the novel words.
Affiliation(s)
- Drew Weatherhead (corresponding author)
- Department of Psychology and Neuroscience, Dalhousie University, Halifax, NS B3H 4R2, Canada
- Maria M. Arredondo
- Department of Human Development and Family Sciences, University of Texas at Austin, Austin, TX 78705, USA
- Loreto Nácar Garcia
- Department of Psychology, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Janet F. Werker
- Department of Psychology, University of British Columbia, Vancouver, BC V6T 1Z4, Canada

67
Zheng M, Xu J, Keniston L, Wu J, Chang S, Yu L. Choice-dependent cross-modal interaction in the medial prefrontal cortex of rats. Mol Brain 2021; 14:13. PMID: 33446258; PMCID: PMC7809823; DOI: 10.1186/s13041-021-00732-7.
Abstract
Cross-modal interaction (CMI) can significantly influence perceptual and decision-making processes in many circumstances. However, it remains poorly understood what integrative strategies the brain employs in different task contexts. To explore this, we examined neural activity in the medial prefrontal cortex (mPFC) of rats performing cue-guided two-alternative forced-choice tasks. In a task requiring rats to discriminate stimuli based on an auditory cue, the simultaneous presentation of an uninformative visual cue substantially strengthened mPFC neurons' capability for auditory discrimination, mainly by enhancing the response to the preferred cue. It also increased the number of neurons exhibiting a cue preference. When the task was changed slightly so that a visual cue, like the auditory one, denoted a specific behavioral direction, mPFC neurons frequently showed a different CMI pattern, with a cross-modal enhancement effect best evoked in information-congruent multisensory trials. In a free-choice task, however, the majority of neurons failed to show a cross-modal enhancement effect or cue preference. These results indicate that CMI at the neuronal level is context-dependent, in a way that differs from what has been shown in previous studies.
Affiliation(s)
- Mengyao Zheng, Jinghong Xu, Jing Wu, Song Chang, Liping Yu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai, 200062, China
- Les Keniston
- Department of Physical Therapy, University of Maryland Eastern Shore, Princess Anne, MD 21853, USA

68
Paraskevopoulos E, Chalas N, Karagiorgis A, Karagianni M, Styliadis C, Papadelis G, Bamidis P. Aging Effects on the Neuroplastic Attributes of Multisensory Cortical Networks as Triggered by a Computerized Music Reading Training Intervention. Cereb Cortex 2021; 31:123-137. PMID: 32794571; DOI: 10.1093/cercor/bhaa213.
Abstract
The constant increase in the graying population is the result of a great expansion of life expectancy. A smaller expansion of healthy cognitive and brain functioning diminishes the gains achieved by longevity. Music training, as a special case of multisensory learning, may induce restorative neuroplasticity in older ages. The current study aimed to explore aging effects on the cortical network supporting multisensory cognition and to define aging effects on the network's neuroplastic attributes. A computer-based music reading protocol was developed and evaluated via electroencephalography measurements pre- and post-training on young and older adults. Results revealed that multisensory integration is performed via diverse strategies in the two groups: Older adults employ higher-order supramodal areas to a greater extent than lower level perceptual regions, in contrast to younger adults, indicating an age-related shift in the weight of each processing strategy. Restorative neuroplasticity was revealed in the left inferior frontal gyrus and right medial temporal gyrus, as a result of the training, while task-related reorganization of cortical connectivity was obstructed in the group of older adults, probably due to systemic maturation mechanisms. On the contrary, younger adults significantly increased functional connectivity among the regions supporting multisensory integration.
Affiliation(s)
- Evangelos Paraskevopoulos, Maria Karagianni, Charis Styliadis, Panagiotis Bamidis
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
- Nikolas Chalas
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece; Institute for Biomagnetism and Biosignal Analysis, University of Münster, D-48149 Münster, Germany
- Alexandros Karagiorgis, Georgios Papadelis
- School of Music Studies, Faculty of Fine Arts, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece

69
Improvements and Degradation to Spatial Tactile Acuity Among Blind and Deaf Individuals. Neuroscience 2020; 451:51-59. PMID: 33065233; DOI: 10.1016/j.neuroscience.2020.10.004.
Abstract
Cross-modal reorganization takes place in sensory cortices when primary input is lost. For instance, the visual cortex of blind individuals, which receives no visual input, starts responding to auditory and tactile stimuli. Reorganization may improve or degrade the processing of other modality inputs, via bottom-up compensatory processes and top-down updating. In two experiments, we measured spatial tactile acuity in a large sample of early- (N = 49) and late-blind (N = 51) individuals with varying levels of Braille proficiency, and early-deaf individuals (N = 69) with varying levels of hearing-device use, against separate hearing and sighted controls. Spatial tactile acuity was measured using a standard grating orientation task at two locations, the finger and the tongue. The experiments show limited to no advantage in passive tactile acuity for blind individuals, and degradation for deaf individuals at the finger. However, the use of hearing devices reduced the tactile impairment in early-deaf individuals. No differences in age-related decline were found in either sensory-impaired group. The results show smaller tactile acuity differences between blind and sighted individuals than previously reported, but support recent reports of tactile impairment among the early deaf.
70
O'Brien JM, Chan JS, Setti A. Audio-Visual Training in Older Adults: 2-Interval-Forced Choice Task Improves Performance. Front Neurosci 2020; 14:569212. PMID: 33304234; PMCID: PMC7693639; DOI: 10.3389/fnins.2020.569212.
Abstract
A growing interest in ameliorating multisensory perception deficits in older adults arises from recent evidence showing that impaired multisensory processing, particularly in the temporal domain, may be associated with cognitive and functional impairments. Perceptual training has proved successful in improving multisensory temporal processing in young adults, but few studies have investigated this training approach in older adults. In the present study we used a simultaneity (or synchronicity) judgement task with feedback to train the audio-visual abilities of community-dwelling, cognitively healthy older adults. We recruited 23 older adults (M = 74.17, SD = 6.23) and a comparison group of 20 young adults (M = 24.20, SD = 4.23). Participants were tested before and after perceptual training using a 2-Interval Forced Choice (2-IFC) task and the Sound-Induced Flash Illusion (SIFI). After 3 days of training, participants improved on the 2-IFC task, with a significant narrowing of the temporal window of integration (TWI) found for both groups. Training effects did not generalize: there were no post-training differences in perceptual sensitivity to the SIFI for either group. These findings provide evidence that perceptual narrowing can be achieved in older as well as younger adults after 3 days of perceptual training. These results provide useful information for future studies attempting to improve audio-visual temporal discrimination abilities in older people.
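A temporal window of integration like the one above is often quantified as the width of the SOA range over which "simultaneous" responses exceed some criterion. The sketch below is a generic illustration under that assumption, not the analysis used in this study; the 75% criterion, the linear interpolation, and all sample data are hypothetical:

```python
import numpy as np

def temporal_window(soas, p_simultaneous, criterion=0.75):
    """Width (ms) of the SOA range where the proportion of 'simultaneous'
    responses exceeds `criterion`, with linear interpolation on each flank."""
    soas = np.asarray(soas, dtype=float)
    p = np.asarray(p_simultaneous, dtype=float)
    above = p >= criterion
    if not above.any():
        return 0.0
    lo = int(np.argmax(above))                     # first sample above criterion
    hi = len(p) - 1 - int(np.argmax(above[::-1]))  # last sample above criterion

    def crossing(below, above_idx):
        # SOA at which p crosses the criterion between two adjacent samples
        frac = (criterion - p[below]) / (p[above_idx] - p[below])
        return soas[below] + frac * (soas[above_idx] - soas[below])

    left = soas[0] if lo == 0 else crossing(lo - 1, lo)
    right = soas[-1] if hi == len(p) - 1 else crossing(hi + 1, hi)
    return float(right - left)

# Hypothetical group data: proportion of "simultaneous" responses per SOA (ms).
soas = [-300, -200, -100, 0, 100, 200, 300]
p_sim = [0.10, 0.30, 0.80, 0.95, 0.80, 0.30, 0.10]
width = temporal_window(soas, p_sim)  # 220.0 ms for these made-up values
```

A post-training "narrowing" of the TWI would show up as a smaller `width` on the same grid of SOAs.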
Affiliation(s)
- Jessica M O'Brien, Jason S Chan, Annalisa Setti
- School of Applied Psychology, University College Cork, Cork, Ireland

71
Hirst RJ, McGovern DP, Setti A, Shams L, Newell FN. What you see is what you hear: Twenty years of research using the Sound-Induced Flash Illusion. Neurosci Biobehav Rev 2020; 118:759-774. DOI: 10.1016/j.neubiorev.2020.09.006.
72
Scurry AN, Chifamba K, Jiang F. Electrophysiological Dynamics of Visual-Tactile Temporal Order Perception in Early Deaf Adults. Front Neurosci 2020; 14:544472. PMID: 33071731; PMCID: PMC7539666; DOI: 10.3389/fnins.2020.544472.
Abstract
Studies of compensatory plasticity in early deaf (ED) individuals have mainly focused on unisensory processing, and on spatial rather than temporal coding. However, precise discrimination of the temporal relationship between stimuli is imperative for successful perception of and interaction with the complex, multimodal environment. Although the properties of cross-modal temporal processing have been extensively studied in neurotypical populations, remarkably little is known about how the loss of one sense impacts the integrity of temporal interactions among the remaining senses. To understand how auditory deprivation affects multisensory temporal interactions, ED and age-matched normal-hearing (NH) controls performed a visual-tactile temporal order judgment task in which visual and tactile stimuli were separated by varying stimulus onset asynchronies (SOAs) and subjects had to discern the leading stimulus. Participants performed the task while EEG data were recorded. Group-averaged event-related potential waveforms were compared between groups at occipital and fronto-central electrodes. Despite similar temporal order sensitivities and performance accuracy, ED adults had larger visual P100 amplitudes at all SOA levels and larger tactile N140 amplitudes at the shortest asynchronous (± 30 ms) and synchronous SOA levels. The enhanced signal strength reflected in these components in ED adults is discussed in terms of compensatory recruitment of cortical areas for visual-tactile processing. In addition, ED adults had tactile P200 amplitudes similar to NH but longer P200 latencies, suggesting reduced efficiency in later processing of tactile information. Overall, these results suggest that greater responses by ED adults in early processing of visual and tactile signals are likely critical for maintained performance in visual-tactile temporal order discrimination.
Affiliation(s)
- Alexandra N Scurry, Kudzai Chifamba, Fang Jiang
- Department of Psychology, University of Nevada, Reno, Reno, NV, United States

73
Linde J, Zimmer-Bensch G. DNA Methylation-Dependent Dysregulation of GABAergic Interneuron Functionality in Neuropsychiatric Diseases. Front Neurosci 2020; 14:586133. PMID: 33041771; PMCID: PMC7525021; DOI: 10.3389/fnins.2020.586133.
Abstract
Neuropsychiatric diseases, such as mood disorders, schizophrenia, and autism, are multifactorial disorders differing in causes, disease onset, severity, and symptoms. A common feature of numerous neuropsychiatric conditions is a defect in the cortical inhibitory GABAergic system. The balance of excitation and inhibition is fundamental for proper and efficient information processing in the cerebral cortex. Thus, altered inhibition is suggested to account for pathological symptoms such as cognitive impairments and dysfunctional multisensory integration. While it has become apparent that most of these diseases have a clear genetic component, environmental influences have emerged as factors affecting disease manifestation, onset, and severity. Epigenetic mechanisms of transcriptional control, such as DNA methylation, are known to be responsive to external stimuli and are suspected to be implicated in the functional impairments of GABAergic interneurons, and hence the pathophysiology of neuropsychiatric diseases. Here, we provide an overview of the multifaceted functional implications of DNA methylation and DNA methyltransferases in cortical interneuron development and function in health and disease. Apart from the regulation of gamma-aminobutyric acid-related genes and genes relevant for interneuron development, we discuss the role of DNA methylation-dependent regulation of synaptic transmission via the modulation of endocytosis-related genes as a potential pathophysiological mechanism underlying neuropsychiatric conditions. Deciphering the hierarchy and mechanisms of changes in epigenetic signatures is crucial to developing effective strategies for treatment and prevention.
Affiliation(s)
- Jenice Linde, Geraldine Zimmer-Bensch
- Division of Functional Epigenetics in the Animal Model, Institute for Biology II, RWTH Aachen University, Aachen, Germany; Research Training Group 2416 MultiSenses - MultiScales, RWTH Aachen University, Aachen, Germany

74
Maitre NL, Key AP, Slaughter JC, Yoder PJ, Neel ML, Richard C, Wallace MT, Murray MM. Neonatal Multisensory Processing in Preterm and Term Infants Predicts Sensory Reactivity and Internalizing Tendencies in Early Childhood. Brain Topogr 2020; 33:586-599. PMID: 32785800; PMCID: PMC7429553; DOI: 10.1007/s10548-020-00791-4.
Abstract
Multisensory processes include the capacity to combine information from the different senses, often improving stimulus representations and behavior. The extent to which multisensory processes are an innate capacity or instead require experience with environmental stimuli remains debated. We addressed this knowledge gap by studying multisensory processes in prematurely born and full-term infants. We recorded 128-channel event-related potentials (ERPs) from a cohort of 55 full-term and 61 preterm neonates (at an equivalent gestational age) in response to auditory, somatosensory, and combined auditory-somatosensory multisensory stimuli. Data were analyzed within an electrical neuroimaging framework, involving unsupervised topographic clustering of the ERP data. Multisensory processing in full-term infants was characterized by a simple linear summation of responses to auditory and somatosensory stimuli alone, which furthermore shared common ERP topographic features. We refer to the ERP topography observed in full-term infants as "typical infantile processing" (TIP). In stark contrast, preterm infants exhibited non-linear responses and topographies less often characterized by TIP; there were distinct patterns of ERP topographies to multisensory and summed unisensory conditions. We further observed that the better TIP characterized an infant's ERPs, independently of prematurity, the more typical the score on the Infant/Toddler Sensory Profile (ITSP) at 12 months of age and the less likely the child was to show internalizing tendencies at 24 months of age. Collectively, these results highlight striking differences in the brain's responses to multisensory stimuli in children born prematurely, differences that relate to later sensory and internalizing functions.
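The linear-summation criterion described above (does the multisensory ERP equal the sum of the two unisensory ERPs?) can be expressed as a simple residual computation. A minimal sketch, with a hypothetical array shape and random data standing in for real recordings:

```python
import numpy as np

def additivity_residual(erp_a, erp_s, erp_as):
    """Residual of the multisensory ERP relative to the additive model.

    Under linear summation, the response to the combined
    auditory-somatosensory stimulus equals the sum of the two unisensory
    responses (AS = A + S), so the residual is ~0 everywhere; systematic
    non-zero residuals indicate non-linear multisensory interaction.
    Arrays are (channels x time), e.g. in microvolts.
    """
    return erp_as - (erp_a + erp_s)

# Hypothetical shapes: 128 channels x 200 time samples of random data.
rng = np.random.default_rng(seed=0)
a = rng.normal(size=(128, 200))
s = rng.normal(size=(128, 200))
resid = additivity_residual(a, s, a + s)  # perfectly additive -> all zeros
```

In practice such residuals would be assessed statistically across infants rather than expected to vanish exactly, but the additive model is the reference point in either case.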
Affiliation(s)
- Nathalie L Maitre
- Center for Perinatal Research at the Abigail Wexner Research Institute, Nationwide Children's Hospital, Columbus, OH, USA; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Pediatrics, Nationwide Children's Hospital, 700 Children's Way, Columbus, OH, 43205, USA
- Alexandra P Key
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- James C Slaughter
- Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN, USA
- Paul J Yoder
- Department of Special Education, Peabody College of Education and Human Development, Vanderbilt University, Nashville, TN, USA
- Mary Lauren Neel
- Center for Perinatal Research at the Abigail Wexner Research Institute, Nationwide Children's Hospital, Columbus, OH, USA
- Céline Richard
- Center for Perinatal Research at the Abigail Wexner Research Institute, Nationwide Children's Hospital, Columbus, OH, USA
- Mark T Wallace
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA; Departments of Psychology and Pharmacology, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Micah M Murray
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Sensory, Perceptual, and Cognitive Neuroscience Section, Center for Biomedical Imaging (CIBM) of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des aveugles and University of Lausanne, Lausanne, Switzerland

75
Heimler B, Amedi A. Are critical periods reversible in the adult brain? Insights on cortical specializations based on sensory deprivation studies. Neurosci Biobehav Rev 2020; 116:494-507. DOI: 10.1016/j.neubiorev.2020.06.034.
76
The potential effects of NICU environment and multisensory stimulation in prematurity. Pediatr Res 2020; 88:161-162. PMID: 31901220; DOI: 10.1038/s41390-019-0738-4.
77
Battaglini L, Mena F, Ghiani A, Casco C, Melcher D, Ronconi L. The Effect of Alpha tACS on the Temporal Resolution of Visual Perception. Front Psychol 2020; 11:1765. PMID: 32849045; PMCID: PMC7412991; DOI: 10.3389/fpsyg.2020.01765.
Abstract
We experience the world around us as a smooth and continuous flow. However, there is growing evidence that the stream of sensory inputs is not processed in an analog fashion but is instead organized in discrete or quasi-discrete temporal processing windows. These discrete windows are suggested to depend on rhythmic neural activity in the alpha (and theta) frequency bands, which in turn reflect changes in neural activity within, and coupling between, cortical areas. In the present study, we investigated a possible causal link between oscillatory brain activity in the alpha range (8-12 Hz) and the temporal resolution of visual perception, which determines whether sequential stimuli are perceived as distinct entities or combined into a single representation. To this aim, we employed a two-flash fusion task while participants received focal transcranial alternating current stimulation (tACS) over extra-striate visual regions, including V5/MT of the right hemisphere. Our findings show that 10-Hz tACS, as opposed to a placebo (sham tACS), reduces the temporal resolution of perception, inducing participants to integrate the two stimuli into a unique percept more often. This pattern was observed only in the contralateral visual hemifield, providing further support for a specific effect of alpha tACS. The present findings corroborate the idea of a causal link between temporal windows of integration/segregation and oscillatory alpha activity in V5/MT and extra-striate visual regions. They also stimulate future research on possible ways to shape the temporal resolution of human vision in an individualized manner.
Affiliation(s)
- Luca Battaglini, Clara Casco
- Department of General Psychology, University of Padua, Padua, Italy; Neuro.Vis. U.S. Laboratory, University of Padua, Padua, Italy
- Federica Mena
- Department of General Psychology, University of Padua, Padua, Italy
- Andrea Ghiani
- Department of General Psychology, University of Padua, Padua, Italy
- David Melcher
- Center for Mind/Brain Sciences, Department of Psychology and Cognitive Science, University of Trento, Rovereto, Italy
- Luca Ronconi
- School of Psychology, Vita-Salute San Raffaele University, Milan, Italy; Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy

78
Wang J, Zhu Y, Chen Y, Mamat A, Yu M, Zhang J, Dang J. An Eye-Tracking Study on Audiovisual Speech Perception Strategies Adopted by Normal-Hearing and Deaf Adults Under Different Language Familiarities. J Speech Lang Hear Res 2020; 63:2245-2254. [PMID: 32579867 DOI: 10.1044/2020_jslhr-19-00223] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Purpose The primary purpose of this study was to explore the audiovisual speech perception strategies adopted by normal-hearing and deaf people in processing familiar and unfamiliar languages. Our primary hypothesis was that they would adopt different perception strategies due to different sensory experiences at an early age, limitations of the physical device, and the developmental gap in language, among other factors. Method Thirty normal-hearing adults and 33 prelingually deaf adults participated in the study. They were asked to perform judgment and listening tasks while watching videos of a Uygur-Mandarin bilingual speaker in a familiar language (Standard Chinese) or an unfamiliar language (Modern Uygur) while their eye movements were recorded with eye-tracking technology. Results Task had a slight influence on the distribution of selective attention, whereas subject group and language had significant influences. Specifically, the normal-hearing and the deaf participants mainly gazed at the speaker's eyes and mouth, respectively; moreover, while the normal-hearing participants stared longer at the speaker's mouth when confronted with the unfamiliar language Modern Uygur, the deaf participants did not change their attention allocation pattern when perceiving the two languages. Conclusions Normal-hearing and deaf adults adopt different audiovisual speech perception strategies: Normal-hearing adults mainly look at the eyes, and deaf adults mainly look at the mouth. Additionally, language and task can also modulate the speech perception strategy.
Affiliation(s)
- Jianrong Wang
- Tianjin Key Laboratory of Cognitive Computing and Application, China
- College of Intelligence and Computing, Tianjin University, China
| | - Yumeng Zhu
- Tianjin Key Laboratory of Cognitive Computing and Application, China
- College of Intelligence and Computing, Tianjin University, China
| | - Yu Chen
- Tianjin Key Laboratory of Cognitive Computing and Application, China
- Technical College for the Deaf, Tianjin University of Technology, China
| | - Abdilbar Mamat
- Institute of Physical Education, Hotan Teacher's College, China
| | - Mei Yu
- Tianjin Key Laboratory of Cognitive Computing and Application, China
- College of Intelligence and Computing, Tianjin University, China
| | - Ju Zhang
- Tianjin Key Laboratory of Cognitive Computing and Application, China
- College of Intelligence and Computing, Tianjin University, China
| | - Jianwu Dang
- Tianjin Key Laboratory of Cognitive Computing and Application, China
- College of Intelligence and Computing, Tianjin University, China
| |
|
79
|
Shapiro L, Bell K, Dhas K, Branson T, Louw G, Keenan ID. Focused Multisensory Anatomy Observation and Drawing for Enhancing Social Learning and Three-Dimensional Spatial Understanding. Anat Sci Educ 2020; 13:488-503. [PMID: 31705741 DOI: 10.1002/ase.1929] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/03/2019] [Revised: 10/08/2019] [Accepted: 11/03/2019] [Indexed: 06/10/2023]
Abstract
The concept that multisensory observation and drawing can be effective for enhancing anatomy learning is supported by pedagogic research and theory, as well as by theories of drawing. A haptico-visual observation and drawing (HVOD) process has been previously introduced to support understanding of the three-dimensional (3D) spatial form of anatomical structures. The HVOD process involves exploration of 3D anatomy with the combined use of touch and sight, and the simultaneous act of making graphite marks on paper which correspond to the anatomy under observation. Findings from a previous study suggest that HVOD can increase perceptual understanding of anatomy through memorization and recall of the 3D form of observed structures. Here, additional pedagogic and cognitive underpinnings are presented to further demonstrate how and why HVOD can be effective for anatomy learning. Delivery of an HVOD workshop is described as a detailed guide for instructors, and themes arising from a phenomenological study of educator experiences of the HVOD process are presented. Findings indicate that HVOD can provide an engaging approach for the spatial exploration of anatomy within a supportive social learning environment, but also requires modification for effective curricular integration. Consequently, based on the most effective research-informed, theoretical, and logistical elements of art-based approaches in anatomy learning, including the framework provided by the observe-reflect-draw-edit-repeat (ORDER) method, an optimized "ORDER Touch" observation and drawing process has been developed. This is with the aim of providing a widely accessible resource for supporting social learning and 3D spatial understanding of anatomy, in addition to improving specific anatomical knowledge.
Affiliation(s)
- Leonard Shapiro
- Department of Human Biology, University of Cape Town, Cape Town, Republic of South Africa
| | - Kathryn Bell
- School of Medical Education, Newcastle University, Newcastle upon Tyne, United Kingdom
- Acute Medical Unit, James Cook University Hospital, Middlesbrough, United Kingdom
| | - Kallpana Dhas
- School of Medical Education, Newcastle University, Newcastle upon Tyne, United Kingdom
| | - Toby Branson
- Department of Health and Medical Sciences, Adelaide Medical School, University of Adelaide, Adelaide, South Australia, Australia
| | - Graham Louw
- Department of Human Biology, University of Cape Town, Cape Town, Republic of South Africa
| | - Iain D Keenan
- School of Medical Education, Newcastle University, Newcastle upon Tyne, United Kingdom
| |
|
80
|
Tivadar RI, Gaglianese A, Murray MM. Auditory Enhancement of Illusory Contour Perception. Multisens Res 2020; 34:1-15. [PMID: 33706283 DOI: 10.1163/22134808-bja10018] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2020] [Accepted: 04/24/2020] [Indexed: 11/19/2022]
Abstract
Illusory contours (ICs) are borders that are perceived in the absence of contrast gradients. Until recently, IC processes were considered exclusively visual in nature and presumed to be unaffected by information from other senses. Electrophysiological data in humans indicate that sounds can enhance IC processes. Despite cross-modal enhancement being observed at the neurophysiological level, to date there is no evidence of direct amplification of behavioural performance in IC processing by sounds. We addressed this knowledge gap. Healthy adults (n = 15) discriminated instances when inducers were arranged to form an IC from instances when no IC was formed (NC). Inducers were low-contrast and masked, and there was continuous background acoustic noise throughout a block of trials. On half of the trials, i.e., independently of IC vs NC, a 1000-Hz tone was presented synchronously with the inducer stimuli. Sound presence improved the accuracy of indicating when an IC was presented, but had no impact on performance with NC stimuli (significant IC presence/absence × Sound presence/absence interaction). There was no evidence that this was due to general alerting or to a speed-accuracy trade-off (no main effect of sound presence on accuracy rates and no comparable significant interaction on reaction times). Moreover, sound presence increased sensitivity and reduced bias on the IC vs NC discrimination task. These results demonstrate that multisensory processes augment mid-level visual functions, exemplified by IC processes. Aside from the impact on neurobiological and computational models of vision, our findings may prove clinically beneficial for low-vision or sight-restored patients.
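The sensitivity and bias measures reported in this abstract follow standard signal-detection theory. A minimal sketch of that computation, assuming a yes/no IC-vs-NC discrimination; the function name, trial counts, and the log-linear correction are illustrative choices, not details taken from the paper:

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, fas, crs):
    """Sensitivity (d') and criterion (c) from a yes/no discrimination.
    Uses the log-linear correction so that hit/false-alarm rates of
    exactly 0 or 1 do not produce infinite z-scores."""
    h = (hits + 0.5) / (hits + misses + 1.0)   # corrected hit rate
    f = (fas + 0.5) / (fas + crs + 1.0)        # corrected false-alarm rate
    z = NormalDist().inv_cdf                   # probit (inverse normal CDF)
    d = z(h) - z(f)                            # sensitivity
    c = -0.5 * (z(h) + z(f))                   # response bias
    return d, c

# Hypothetical counts: 50 IC trials, 50 NC trials
d, c = dprime_criterion(hits=40, misses=10, fas=15, crs=35)
```

A sound-related increase in d' with a criterion moving toward zero would correspond to the "increased sensitivity and reduced bias" pattern the abstract describes.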
Affiliation(s)
- Ruxandra I Tivadar
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, University Hospital Center and University of Lausanne, 1011 Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne and Fondation Asile des aveugles, Lausanne, Switzerland
| | - Anna Gaglianese
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, University Hospital Center and University of Lausanne, 1011 Lausanne, Switzerland; Spinoza Centre for Neuroimaging, Amsterdam, The Netherlands
| | - Micah M Murray
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, University Hospital Center and University of Lausanne, 1011 Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne and Fondation Asile des aveugles, Lausanne, Switzerland; Sensory, Perceptual and Cognitive Neuroscience Section, Center for Biomedical Imaging (CIBM), University Hospital Center and University of Lausanne, 1011 Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
| |
|
81
|
Feldman JI, Dunham K, Conrad JG, Simon DM, Cassidy M, Liu Y, Tu A, Broderick N, Wallace MT, Woynaroski TG. Plasticity of Temporal Binding in Children with Autism Spectrum Disorder: A Single Case Experimental Design Perceptual Training Study. Res Autism Spectr Disord 2020; 74:101555. [PMID: 32440308 PMCID: PMC7241431 DOI: 10.1016/j.rasd.2020.101555] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
BACKGROUND Many children with autism spectrum disorder (ASD) demonstrate atypical responses to multisensory stimuli. These disruptions, which are frequently seen in response to audiovisual speech, may produce cascading effects on the broader development of children with ASD. Perceptual training has been shown to enhance multisensory speech perception in typically developed adults. This study was the first to examine the effects of perceptual training on audiovisual speech perception in children with ASD. METHOD A multiple baseline across participants design was utilized with four 7- to 13-year-old children with ASD. The dependent variable, which was probed outside the training task each day using a simultaneity judgment task in baseline, intervention, and maintenance conditions, was the audiovisual temporal binding window (TBW), an index of multisensory temporal acuity. During perceptual training, participants completed the same simultaneity judgment task with feedback on their accuracy after each trial in easy-, medium-, and hard-difficulty blocks. RESULTS A functional relation between the multisensory perceptual training program and TBW size was not observed. Of the three participants who were entered into training, one participant demonstrated a strong effect, characterized by a fairly immediate change in TBW trend. The two remaining participants demonstrated a less clear response (i.e., longer latency to effect, lack of functional independence). The first participant to enter the training condition demonstrated some maintenance of a narrower TBW post-training. CONCLUSIONS Results indicate TBWs in children with ASD may be malleable, but additional research is needed and may entail further adaptation to the multisensory perceptual training paradigm.
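The TBW index used here is derived from simultaneity judgments collected across a range of stimulus onset asynchronies (SOAs). A toy simulation of that measurement, assuming a Gaussian synchrony-response function and a simple above-50% criterion for window width; both are illustrative simplifications, not the authors' psychometric fitting procedure:

```python
import math
import random

def p_synchronous(soa_ms, width_ms=250.0):
    # Toy Gaussian model of the probability of reporting "synchronous"
    # as a function of audiovisual SOA (0 ms = physically simultaneous).
    return math.exp(-(soa_ms / width_ms) ** 2)

def run_block(soas, trials_per_soa=20, seed=1):
    """Simulate one simultaneity-judgment block; return the proportion
    of 'synchronous' responses at each SOA."""
    rng = random.Random(seed)
    props = {}
    for soa in soas:
        hits = sum(rng.random() < p_synchronous(soa)
                   for _ in range(trials_per_soa))
        props[soa] = hits / trials_per_soa
    return props

def tbw_width(props):
    """Crude TBW estimate: the span of SOAs at which 'synchronous'
    responses exceed 50% (assumes a symmetric SOA design)."""
    above = [soa for soa, p in props.items() if p >= 0.5]
    return (max(above) - min(above)) if above else 0.0

props = run_block(soas=range(-500, 501, 100))
```

A training effect in such a setup would appear as a shrinking `tbw_width` across intervention sessions relative to baseline.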
Affiliation(s)
- Jacob I. Feldman
- Department of Hearing and Speech Sciences, Vanderbilt University, MCE 8310 South Tower, 1215 21st Avenue South, Nashville, TN 37232
| | - Kacie Dunham
- Vanderbilt Brain Institute, Vanderbilt University, 465 21st Avenue South, Nashville, TN, USA
- Neuroscience Graduate Program, Vanderbilt University, Nashville, TN, USA
| | - Julie G. Conrad
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Present Address: College of Medicine, University of Illinois, Chicago, IL, USA
| | - David M. Simon
- Vanderbilt Brain Institute, Vanderbilt University, 465 21st Avenue South, Nashville, TN, USA
- Neuroscience Graduate Program, Vanderbilt University, Nashville, TN, USA
- Present Address: axialHealthcare, Nashville, TN, USA
| | - Margaret Cassidy
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
| | - Yupeng Liu
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
| | - Alexander Tu
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Present Address: College of Medicine, University of Nebraska Medical Center, Omaha, NE, USA
| | - Neill Broderick
- Department of Pediatrics, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Mark T. Wallace
- Vanderbilt Brain Institute, Vanderbilt University, 465 21st Avenue South, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Pharmacology, Vanderbilt University, Nashville, TN, USA
| | - Tiffany G. Woynaroski
- Vanderbilt Brain Institute, Vanderbilt University, 465 21st Avenue South, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
| |
|
82
|
Su X, Guo S, Tan T, Chen F. Generative Memory for Lifelong Learning. IEEE Trans Neural Netw Learn Syst 2020; 31:1884-1898. [PMID: 31395557 DOI: 10.1109/tnnls.2019.2927369] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Lifelong learning is a crucial issue in advanced artificial intelligence. It requires the learning system to learn and accumulate knowledge from sequential tasks, dealing with a growing number of domains and tasks. We argue that the key to an effective and efficient lifelong learning system is the ability to memorize and recall the learned knowledge using neural networks. Following this idea, we propose Generative Memory (GM) as a novel memory module; the resulting lifelong learning system is referred to as the GM network (GMNet). To make GMNet feasible, we propose a novel learning mechanism, referred to as the P-invariant learning method. It replaces the memory of the real data with a memory of the data distribution, which makes it possible for the learning system to accurately and continuously accumulate the learned experiences. We demonstrate that GMNet achieves state-of-the-art performance on lifelong learning tasks.
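The distribution-based memory idea can be illustrated with a deliberately simplified sketch: rather than storing raw examples from earlier tasks, store a per-class distribution and sample pseudo-examples from it (replay) when a new task arrives. This stand-in uses 1-D Gaussians and a nearest-mean classifier purely for illustration; it is not the authors' network or the P-invariant method:

```python
import random
import statistics

class GenerativeMemory:
    """Toy 'generative memory': keeps a (mean, stdev) Gaussian per class
    instead of raw examples, and samples pseudo-examples for replay."""
    def __init__(self):
        self.dists = {}

    def memorize(self, label, xs):
        self.dists[label] = (statistics.mean(xs), statistics.stdev(xs))

    def replay(self, n_per_class, rng):
        data = []
        for label, (mu, sd) in self.dists.items():
            data += [(rng.gauss(mu, sd), label) for _ in range(n_per_class)]
        return data

def nearest_mean_classifier(data):
    """Fit class means and return a predict function (1-D features)."""
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    means = {y: statistics.mean(xs) for y, xs in by_label.items()}
    return lambda x: min(means, key=lambda y: abs(x - means[y]))

rng = random.Random(0)
gm = GenerativeMemory()
# Task 1: class 'a' clustered near 0 is learned, then discarded.
gm.memorize('a', [rng.gauss(0, 1) for _ in range(100)])
# Task 2: class 'b' near 5 arrives; train on replayed 'a' plus new 'b'.
task2 = [(rng.gauss(5, 1), 'b') for _ in range(100)]
clf = nearest_mean_classifier(gm.replay(100, rng) + task2)
```

Because task 1 is represented only by its stored distribution, the classifier retains class 'a' without the system ever keeping the original data, which is the core trade of memory-of-data for memory-of-distribution.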
|
83
|
Selective attention to sound features mediates cross-modal activation of visual cortices. Neuropsychologia 2020; 144:107498. [PMID: 32442445 DOI: 10.1016/j.neuropsychologia.2020.107498] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2019] [Revised: 03/14/2020] [Accepted: 05/12/2020] [Indexed: 11/20/2022]
Abstract
Contemporary schemas of brain organization now include multisensory processes both in low-level cortices as well as at early stages of stimulus processing. Evidence has also accumulated showing that unisensory stimulus processing can result in cross-modal effects. For example, task-irrelevant and lateralised sounds can activate visual cortices, a phenomenon referred to as the auditory-evoked contralateral occipital positivity (ACOP). Some claim this is an example of automatic attentional capture in visual cortices. Other results, however, indicate that context may play a determining role. Here, we investigated whether selective attention to spatial features of sounds is a determining factor in eliciting the ACOP. We recorded high-density auditory evoked potentials (AEPs) while participants selectively attended and discriminated sounds according to four possible stimulus attributes: location, pitch, speaker identity or syllable. Sound acoustics were held constant, and their location was always equiprobable (50% left, 50% right). The only manipulation was to which sound dimension participants attended. We analysed the AEP data from healthy participants within an electrical neuroimaging framework. The presence of sound-elicited activations of visual cortices depended on the to-be-discriminated, goal-based dimension. The ACOP was elicited only when participants were required to discriminate sound location, but not when they attended to any of the non-spatial features. These results provide a further indication that the ACOP is not automatic. Moreover, our findings showcase the interplay between task-relevance and spatial (un)predictability in determining the presence of the cross-modal activation of visual cortices.
|
84
|
Ortiz T, Ortiz-Teran L, Turrero A, Poch-Broto J, de Erausquin GA. A N400 ERP Study in letter recognition after passive tactile stimulation training in blind children and sighted controls. Restor Neurol Neurosci 2020; 37:197-206. [PMID: 31227674 DOI: 10.3233/rnn-180838] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND We previously demonstrated that one week of tactile stimulation using a sensory substitution device (SSD) results in faster activation of the lateral occipital complex in blind children than in sighted controls. OBJECTIVE We used long-term haptic tactile stimulation training with an SSD to test whether it results in stable cross-modal reassignment of visual pathways after six months, providing high-level processing of tactile semantic content. METHODS We enrolled 12 blind and 12 sighted children. The SSD transforms images into a stimulation matrix in contact with the dominant hand. Subjects underwent twice-daily training sessions, 5 days/week, for six months. Children were asked to describe line orientation, name letters, and read words. ERP sessions were performed at baseline and at 6 months to analyze the N400 ERP component and reaction times (RTs). N400 sources were estimated with Low Resolution Electromagnetic Tomography (LORETA). SPM8 was used to make population-level inferences. RESULTS We found no group differences in RTs, accuracy of identifications, or N400 latencies or distributions on the line task at 1 week or at 6 months. RTs on the letter recognition task were also similar. After 6 months, behavioral training increased accurate letter identification in both sighted and blind children (χ² = 11906.934, p < 0.001), but the increase was larger in blind children (χ² = 8.272, p = 0.004). Behavioral training shifted peak N400 amplitude to left occipital and bilateral parietal cortices in blind children, but to left precentral and postcentral and bilateral occipital cortices in sighted controls. CONCLUSIONS Blind children learn to recognize SSD-delivered letters better than sighted controls and had greater N400 amplitude in the occipital region. To the best of our knowledge, our results provide the first published example of standard letter recognition (not Braille) by children with blindness using a tactile delivery system.
Affiliation(s)
- Tomas Ortiz
- Department of Psychiatry, Faculty of Medicine Universidad Complutense, Madrid, Spain
| | - Laura Ortiz-Teran
- Department of Radiology, Gordon Center for Medical Imaging, Massachusetts General Hospital Harvard University, Boston, USA
| | - Agustin Turrero
- Department of Biostatistics, Faculty of Medicine Universidad Complutense, Madrid, Spain
| | - Joaquin Poch-Broto
- Department of Ear, Nose and Throat, Hospital Clínico Universitario San Carlos, Madrid, Spain
| | - Gabriel A de Erausquin
- Department of Psychiatry and Neurology, Institute of Neuroscience, University of Texas Rio Grande Valley School of Medicine, Harlingen, USA
| |
|
85
|
Tivadar RI, Chappaz C, Anaflous F, Roche J, Murray MM. Mental Rotation of Digitally-Rendered Haptic Objects by the Visually-Impaired. Front Neurosci 2020; 14:197. [PMID: 32265628 PMCID: PMC7099598 DOI: 10.3389/fnins.2020.00197] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2019] [Accepted: 02/24/2020] [Indexed: 11/18/2022] Open
Abstract
In the event of visual impairment or blindness, information from other intact senses can be used as a substitute to retrain (and in extremis replace) visual functions. Abilities including reading, mental representation of objects and spatial navigation can be performed using tactile information. Current technologies can convey only a restricted library of stimuli, either because they depend on real objects or because their renderings have low-resolution layouts. Digital haptic technologies can overcome such limitations. The applicability of this technology was previously demonstrated in sighted participants. Here, we reasoned that visually-impaired and blind participants can create mental representations of letters presented haptically in normal and mirror-reversed form without the use of any visual information, and mentally manipulate such representations. Visually-impaired and blind volunteers were blindfolded and trained on the haptic tablet with two letters (either L and P or F and G). During testing, they haptically explored on any trial one of the four letters presented at 0°, 90°, 180°, or 270° rotation from upright and indicated if the letter was either in a normal or mirror-reversed form. Rotation angle impacted performance; greater deviation from 0° resulted in greater impairment for trained and untrained normal letters, consistent with mental rotation of these haptically-rendered objects. Performance was also generally less accurate with mirror-reversed stimuli, which was not affected by rotation angle. Our findings demonstrate, for the first time, the suitability of a digital haptic technology in the blind and visually-impaired. Classic devices remain limited in their accessibility and in the flexibility of their applications. We show that mental representations can be generated and manipulated using digital haptic technology.
This technology may thus offer an innovative solution to the mitigation of impairments in the visually-impaired, and to the training of skills dependent on mental representations and their spatial manipulation.
Affiliation(s)
- Ruxandra I Tivadar
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne and Fondation Asile des Aveugles, Lausanne, Switzerland
| | | | - Fatima Anaflous
- Department of Ophthalmology, University of Lausanne and Fondation Asile des Aveugles, Lausanne, Switzerland
| | - Jean Roche
- Department of Ophthalmology, University of Lausanne and Fondation Asile des Aveugles, Lausanne, Switzerland
| | - Micah M Murray
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne and Fondation Asile des Aveugles, Lausanne, Switzerland; Sensory, Perceptual and Cognitive Neuroscience Section, Center for Biomedical Imaging (CIBM), Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
| |
|
86
|
Keil J. Double Flash Illusions: Current Findings and Future Directions. Front Neurosci 2020; 14:298. [PMID: 32317920 PMCID: PMC7146460 DOI: 10.3389/fnins.2020.00298] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2019] [Accepted: 03/16/2020] [Indexed: 11/29/2022] Open
Abstract
Twenty years ago, the first report on the sound-induced double flash illusion, a visual illusion induced by sound, was published. In this paradigm, participants are presented with different numbers of auditory and visual stimuli. When the numbers of auditory and visual stimuli are incongruent, the influence of auditory information on visual perception can lead to the perception of the illusion. Thus, combining two auditory stimuli with one visual stimulus can induce the perception of two visual stimuli, the so-called fission illusion. Alternatively, combining one auditory stimulus with two visual stimuli can induce the perception of one visual stimulus, the so-called fusion illusion. Overall, current research shows that the illusion is a reliable indicator of multisensory integration. It has also been replicated using different stimulus combinations, such as visual and tactile stimuli. Importantly, the robustness of the illusion allows its widespread use for assessing multisensory integration across different groups of healthy participants and clinical populations, and in various task settings. This review gives an overview of the experimental evidence supporting the illusion, the current state of research concerning the influence of cognitive processes on the illusion, the neural mechanisms underlying the illusion, and future research directions. Moreover, an exemplary experimental setup is described with different options to examine perception, alongside code to test and replicate the illusion online or in the laboratory.
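The fission/fusion design described in this abstract reduces to a small set of flash-beep combinations. A hedged sketch of how such a trial list and per-condition illusion score might be assembled (condition names, trial counts, and the response format are illustrative, not taken from the review's accompanying code):

```python
import random

# Stimulus combinations (n_flashes, n_beeps) in the double-flash paradigm.
# Incongruent pairs can elicit the fission (1 flash + 2 beeps -> "two
# flashes") or fusion (2 flashes + 1 beep -> "one flash") illusion.
CONDITIONS = {
    "congruent_1": (1, 1),
    "congruent_2": (2, 2),
    "fission":     (1, 2),
    "fusion":      (2, 1),
}

def build_trials(n_per_condition=30, seed=7):
    """Return a shuffled trial list of (condition, n_flashes, n_beeps)."""
    rng = random.Random(seed)
    trials = [(name, f, b)
              for name, (f, b) in CONDITIONS.items()
              for _ in range(n_per_condition)]
    rng.shuffle(trials)
    return trials

def illusion_rate(responses, condition):
    """Proportion of trials in `condition` on which the reported number
    of flashes differed from the number physically presented.
    `responses` is a list of ((condition, n_flashes, n_beeps), reported)."""
    rel = [(f, seen) for (name, f, _b), seen in responses if name == condition]
    return sum(seen != f for f, seen in rel) / len(rel)

trials = build_trials()
```

Scoring real participant responses this way yields separate fission and fusion rates, alongside accuracy on the congruent control conditions.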
Affiliation(s)
- Julian Keil
- Biological Psychology, Christian-Albrechts-Universität zu Kiel, Kiel, Germany
| |
|
87
|
Zhou HY, Cheung EFC, Chan RCK. Audiovisual temporal integration: Cognitive processing, neural mechanisms, developmental trajectory and potential interventions. Neuropsychologia 2020; 140:107396. [PMID: 32087206 DOI: 10.1016/j.neuropsychologia.2020.107396] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2019] [Revised: 02/14/2020] [Accepted: 02/15/2020] [Indexed: 12/21/2022]
Abstract
To integrate auditory and visual signals into a unified percept, the paired stimuli must co-occur within a limited time window known as the Temporal Binding Window (TBW). The width of the TBW, a proxy of audiovisual temporal integration ability, has been found to be correlated with higher-order cognitive and social functions. A comprehensive review of studies investigating audiovisual TBW reveals several findings: (1) a wide range of top-down processes and bottom-up features can modulate the width of the TBW, facilitating adaptation to the changing and multisensory external environment; (2) a large-scale brain network works in coordination to ensure successful detection of audiovisual (a)synchrony; (3) developmentally, audiovisual TBW follows a U-shaped pattern across the lifespan, with a protracted developmental course into late adolescence and rebounding in size again in late life; (4) an enlarged TBW is characteristic of a number of neurodevelopmental disorders; and (5) the TBW is highly plastic and can be modified through perceptual and musical training. Interventions targeting the TBW may be able to improve multisensory function and ameliorate social communicative symptoms in clinical populations.
Affiliation(s)
- Han-Yu Zhou
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
| | | | - Raymond C K Chan
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China.
| |
|
88
|
Denervaud S, Gentaz E, Matusz PJ, Murray MM. Multisensory Gains in Simple Detection Predict Global Cognition in Schoolchildren. Sci Rep 2020; 10:1394. [PMID: 32019951 PMCID: PMC7000735 DOI: 10.1038/s41598-020-58329-4] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2019] [Accepted: 01/14/2020] [Indexed: 11/08/2022] Open
Abstract
The capacity to integrate information from different senses is central for coherent perception across the lifespan from infancy onwards. Later in life, multisensory processes are related to cognitive functions, such as speech or social communication. During learning, multisensory processes can in fact enhance subsequent recognition memory for unisensory objects. These benefits can even be predicted; adults' recognition memory performance is shaped by earlier responses in the same task to multisensory - but not unisensory - information. Everyday environments where learning occurs, such as classrooms, are inherently multisensory in nature. Multisensory processes may therefore scaffold healthy cognitive development. Here, we provide the first evidence of a predictive relationship between multisensory benefits in simple detection and higher-level cognition that is present already in schoolchildren. Multiple regression analyses indicated that the extent to which a child (N = 68; aged 4.5-15 years) exhibited multisensory benefits on a simple detection task not only predicted benefits on a continuous recognition task involving naturalistic objects (p = 0.009), even when controlling for age, but also predicted working memory scores (p = 0.023) and fluid intelligence scores (p = 0.033) as measured using age-standardised test batteries. By contrast, gains in unisensory detection did not significantly predict any of the above global cognition measures. Our findings show that low-level multisensory processes predict higher-order memory and cognition already during childhood, even if still subject to ongoing maturation. These results call for revision of traditional models of cognitive development (and likely also education) to account for the role of multisensory processing, while also opening exciting opportunities to facilitate early learning through multisensory programs.
More generally, these data suggest that a simple detection task could provide direct insights into the integrity of global cognition in schoolchildren and could be further developed as a readily-implemented and cost-effective screening tool for neurodevelopmental disorders, particularly in cases when standard neuropsychological tests are infeasible or unavailable.
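The control-for-age analysis above is ordinary multiple regression: the cognitive score is modeled jointly on age and multisensory gain, and the gain coefficient is read off with age held constant. A self-contained sketch using made-up, noiseless data (the variable names and values are hypothetical, not the study's dataset):

```python
def ols(X, y):
    """Ordinary least squares via the normal equations X'X b = X'y,
    solved with Gaussian elimination and partial pivoting.
    X: list of rows, each starting with 1.0 for the intercept."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] +
         [sum(r[i] * yi for r, yi in zip(X, y))] for i in range(k)]
    for i in range(k):                       # forward elimination
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
    b = [0.0] * k
    for i in reversed(range(k)):             # back substitution
        b[i] = (A[i][k] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

# Hypothetical data: does multisensory gain predict memory, controlling
# for age? Scores generated noiselessly as 1.0 + 0.5*age + 2.0*gain.
ages   = [5, 6, 7, 8, 9, 10, 11, 12]
gain   = [0.1, 0.3, 0.2, 0.5, 0.4, 0.6, 0.5, 0.8]
memory = [1.0 + 0.5 * a + 2.0 * g for a, g in zip(ages, gain)]
X = [[1.0, a, g] for a, g in zip(ages, gain)]
b0, b_age, b_gain = ols(X, memory)
```

With both predictors in the model, `b_gain` is the multisensory-gain effect adjusted for age, which is the quantity the reported p-values refer to.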
Affiliation(s)
- Solange Denervaud
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, Vaudois University Hospital Center and University of Lausanne, Lausanne, Switzerland
- The Center for Affective Sciences (CISA), University of Geneva, Geneva, Switzerland
| | - Edouard Gentaz
- The Center for Affective Sciences (CISA), University of Geneva, Geneva, Switzerland
- Faculty of Psychology and Educational Sciences (FAPSE), University of Geneva, Geneva, Switzerland
| | - Pawel J Matusz
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, Vaudois University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Information Systems Institute at the University of Applied Sciences Western Switzerland (HES-SO Valais), 3960, Sierre, Switzerland
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Micah M Murray
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, Vaudois University Hospital Center and University of Lausanne, Lausanne, Switzerland.
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA.
- Department of Ophthalmology, Fondation Asile des aveugles and University of Lausanne, Lausanne, Switzerland.
- Sensory, Cognitive and Perceptual Neuroscience Section, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Lausanne, Switzerland.
| |
Collapse
|
89
|
Wallace MT, Woynaroski TG, Stevenson RA. Multisensory Integration as a Window into Orderly and Disrupted Cognition and Communication. Annu Rev Psychol 2020; 71:193-219. [DOI: 10.1146/annurev-psych-010419-051112] [Citation(s) in RCA: 37] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
During our everyday lives, we are confronted with a vast amount of information from several sensory modalities. This multisensory information needs to be appropriately integrated for us to effectively engage with and learn from our world. Research carried out over the last half century has provided new insights into the way such multisensory processing improves human performance and perception; the neurophysiological foundations of multisensory function; the time course for its development; how multisensory abilities differ in clinical populations; and, most recently, the links between multisensory processing and cognitive abilities. This review summarizes the extant literature on multisensory function in typical and atypical circumstances, discusses the implications of the work carried out to date for theory and research, and points toward next steps for advancing the field.
Affiliation(s)
- Mark T. Wallace
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37232, USA
- Departments of Psychology and Pharmacology, Vanderbilt University, Nashville, Tennessee 37232, USA
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37232, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee 37232, USA
- Vanderbilt Kennedy Center, Nashville, Tennessee 37203, USA
- Tiffany G. Woynaroski
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37232, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee 37232, USA
- Vanderbilt Kennedy Center, Nashville, Tennessee 37203, USA
- Ryan A. Stevenson
- Departments of Psychology and Psychiatry and Program in Neuroscience, University of Western Ontario, London, Ontario N6A 3K7, Canada
- Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada
|
90
|
Zhou HY, Shi LJ, Yang HX, Cheung EFC, Chan RCK. Audiovisual temporal integration and rapid temporal recalibration in adolescents and adults: Age-related changes and its correlation with autistic traits. Autism Res 2019; 13:615-626. [PMID: 31808321 DOI: 10.1002/aur.2249] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2019] [Accepted: 11/19/2019] [Indexed: 12/26/2022]
Abstract
Temporal structure is a key factor in determining the relatedness of multisensory stimuli. Stimuli that are close in time are more likely to be integrated into a unified perceptual representation. To investigate age-related developmental differences in audiovisual temporal integration and rapid temporal recalibration, we administered simultaneity judgment (SJ) tasks to a group of adolescents (11-14 years) and young adults (18-28 years). No age-related changes were found in the width of the temporal binding window within which participants are highly likely to combine multisensory stimuli. The main distinction between adolescents and adults was audiovisual temporal recalibration. Although participants of both age groups could rapidly recalibrate based on the previous trial for speech stimuli (i.e., syllable utterances), only adults, but not adolescents, showed short-term recalibration for simple, non-speech stimuli. In both adolescents and adults, no significant correlation was found between audiovisual temporal integration ability and autistic or schizotypal traits. These findings provide new information on the developmental trajectory of basic multisensory function and may have implications for neurodevelopmental disorders (e.g., autism) with altered audiovisual temporal integration. Autism Res 2020, 13: 615-626. © 2019 International Society for Autism Research, Wiley Periodicals, Inc. LAY SUMMARY: Utilizing temporal cues to integrate and separate audiovisual information is a fundamental ability underlying higher-order social communicative functions. This study examines the developmental changes in the ability to detect audiovisual asynchrony and rapidly adjust sensory decisions based on previous sensory input. In healthy adolescents and young adults, the correlation between autistic traits and audiovisual integration ability did not reach significance.
Therefore, more research is needed to examine whether impairment in basic sensory functions is correlated with the broader autism phenotype in nonclinical populations. These results may help us understand altered multisensory integration in people with autism.
Affiliation(s)
- Han-Yu Zhou
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Li-Juan Shi
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China
- School of Education, Hunan University of Science and Technology, Xiangtan, China
- Han-Xue Yang
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Eric F C Cheung
- Castle Peak Hospital, Hong Kong Special Administrative Region, China
- Raymond C K Chan
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
|
91
|
Peter MG, Porada DK, Regenbogen C, Olsson MJ, Lundström JN. Sensory loss enhances multisensory integration performance. Cortex 2019; 120:116-130. [DOI: 10.1016/j.cortex.2019.06.003] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2018] [Revised: 04/25/2019] [Accepted: 06/04/2019] [Indexed: 10/26/2022]
|
92
|
Cortical processes underlying the effects of static sound timing on perceived visual speed. Neuroimage 2019; 199:194-205. [DOI: 10.1016/j.neuroimage.2019.05.062] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2019] [Revised: 04/09/2019] [Accepted: 05/24/2019] [Indexed: 01/10/2023] Open
|
93
|
Adamson LB, Bakeman R, Suma K, Robins DL. Sharing sounds: The development of auditory joint engagement during early parent-child interaction. Dev Psychol 2019; 55:2491-2504. [PMID: 31524417 DOI: 10.1037/dev0000822] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Joint engagement, the sharing of events during social interactions, is an important context for early learning. To date, the sharing of topics that are only heard has not been systematically documented. To describe the development of auditory joint engagement, 48 child-parent dyads were observed 5 times from 12 to 30 months during seminaturalistic play. Reactions to 4 types of sounds (overheard speech about the child, instrumental music, animal calls, and mechanical noises) were observed before and as parents scaffolded shared listening, and after the sound ceased. Before parents reacted, even 12-month-old infants readily alerted and oriented to the sounds; over time they increasingly tried to share new sounds with their parents. When parents then joined in sharing a sound, periods of auditory joint engagement often ensued, increasing from two thirds of 12-month observations to almost ceiling level at the 18- through 30-month observations. Overall, the developmental course and structure of auditory joint engagement and of joint engagement with multimodal objects and events are remarkably similar. Symbol-infused auditory joint engagement occurred rarely at first but increased steadily. Children's labeling of the sound and parents' language scaffolding also increased linearly, while child pointing toward the sound rose until 18 months and then declined. Future studies should address variations in the development of auditory joint engagement, whether autism spectrum disorder affects how toddlers share sounds, and the role auditory joint engagement may play in gestural and language development. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
|
94
|
Abstract
When repeatedly exposed to simultaneously presented stimuli, associations between these stimuli are nearly always established, both within as well as between sensory modalities. Such associations guide our subsequent actions and may also play a role in multisensory selection. Thus, crossmodal associations (i.e., associations between stimuli from different modalities) learned in a multisensory interference task might affect subsequent information processing. The aim of this study was to investigate the processing level of multisensory stimuli in multisensory selection by means of crossmodal aftereffects. Either feature or response associations were induced in a multisensory flanker task while the amount of interference in a subsequent crossmodal flanker task was measured. The results of Experiment 1 revealed the existence of crossmodal interference after multisensory selection. Experiments 2 and 3 then went on to demonstrate the dependence of this effect on the perceptual associations between features themselves, rather than on the associations between feature and response. Establishing response associations did not lead to a subsequent crossmodal interference effect (Experiment 2), while stimulus feature associations without response associations (obtained by changing the response effectors) did (Experiment 3). Taken together, this pattern of results suggests that associations in multisensory selection, and the interference of (crossmodal) distractors, predominantly work at the perceptual, rather than at the response, level.
|
95
|
Quercia P, Pozzo T, Marino A, Guillemant AL, Cappe C, Gueugneau N. Alteration in binocular fusion modifies audiovisual integration in children. Clin Ophthalmol 2019; 13:1137-1145. [PMID: 31308621 PMCID: PMC6613607 DOI: 10.2147/opth.s201747] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2019] [Accepted: 05/08/2019] [Indexed: 11/27/2022] Open
Abstract
Background: In the field of multisensory integration, vision is generally thought to dominate audiovisual interactions, at least in spatial tasks, but the role of binocular fusion in audiovisual integration has not yet been studied. Methods: Using the Maddox test, a classical ophthalmological test used to subjectively detect a latent unilateral eye deviation, we checked whether an alteration in binocular vision in young patients could change audiovisual integration. The study was performed on a group of ten children (five males and five females, aged 11.3±1.6 years) with normal binocular vision, and revealed a visual phenomenon consisting of the stochastic disappearance of part of a visual scene caused by auditory stimulation. Results: Indeed, during the Maddox test, brief sounds induced transient visual scotomas (VSs) in the visual field of the eye in front of which the Maddox rod was placed. We found a significant correlation between the modification of binocular vision and VS occurrence. No significant difference was detected in the percentage or location of VS occurrence between the right and left eye using the Maddox rod test, or between sound frequencies. Conclusion: The results indicate a specific role of the oculomotor system in audiovisual integration in children. This convenient protocol may also be of significant interest for clinical investigations of developmental pathologies in which the relationship between vision and hearing is specifically affected.
Affiliation(s)
- P Quercia
- INSERM Unit 1093, Cognition-Action-Plasticité Sensorimotrice, University of Burgundy-Franche Comté, Dijon 21078, France
- T Pozzo
- IIT@UniFe Center for Translational Neurophysiology, Istituto Italiano di Tecnologia, Ferrara, Italy
- A Marino
- Private office, Vicenza 36100, Italy
- A L Guillemant
- INSERM Unit 1093, Cognition-Action-Plasticité Sensorimotrice, University of Burgundy-Franche Comté, Dijon 21078, France
- C Cappe
- Brain and Cognition Research Center, CerCo, Toulouse, France
- N Gueugneau
- INSERM Unit 1093, Cognition-Action-Plasticité Sensorimotrice, University of Burgundy-Franche Comté, Dijon 21078, France
|
96
|
Freschl J, Melcher D, Kaldy Z, Blaser E. Visual temporal integration windows are adult-like in 5- to 7-year-old children. J Vis 2019; 19:5. [PMID: 31287859 PMCID: PMC6892607 DOI: 10.1167/19.7.5] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2018] [Accepted: 06/02/2019] [Indexed: 11/24/2022] Open
Abstract
The visual system must organize dynamic input into useful percepts across time, balancing between stability and sensitivity to change. The temporal integration window (TIW) has been hypothesized to underlie this balance: if two or more stimuli fall within the same TIW, they are integrated into a single percept; those that fall in different windows are segmented (Arnett & Di Lollo, 1979; Wutz, Muschter, van Koningsbruggen, Weisz, & Melcher, 2016). Visual TIWs have been studied in adults, showing average windows of 65 ms (Wutz et al., 2016); however, it is unclear how these windows develop through early childhood. Here we measured TIWs in 5- to 7-year-old children and adults, using a variant of the missing dot task (Di Lollo, 1980; Wutz et al., 2016), in which integration and segmentation thresholds were measured within the same participant, using the same stimuli. Participants saw a sequence of two displays separated by an interstimulus interval (ISI) that determined the visibility of a visual search target. Longer ISIs increased the likelihood of detecting a segmentation target (but decreased detection of the integration target), whereas shorter ISIs increased the likelihood of detecting the integration target (but decreased detection of the segmentation target). We could then estimate the TIW as the point at which these two functions intersect. Children's TIWs (M = 68 ms) were comparable to adults' (M = 73 ms), with no appreciable age trend within our sample, indicating that TIWs reach adult levels by approximately 5 years of age.
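The intersection estimate described in this abstract can be illustrated with a toy computation. The ISI values and detection rates below are invented for illustration only; the study fit its own psychometric functions to real data:

```python
import numpy as np

# Hypothetical detection rates as a function of inter-stimulus interval (ISI, ms)
isi = np.array([0, 20, 40, 60, 80, 100, 120], dtype=float)
p_integration = np.array([0.95, 0.90, 0.75, 0.55, 0.35, 0.20, 0.10])   # falls as ISI grows
p_segmentation = np.array([0.05, 0.15, 0.30, 0.50, 0.70, 0.85, 0.92])  # rises as ISI grows

# Fit a straight line to each performance function and solve for the crossing
# point, which serves as the temporal integration window (TIW) estimate here
slope_i, icpt_i = np.polyfit(isi, p_integration, 1)
slope_s, icpt_s = np.polyfit(isi, p_segmentation, 1)
tiw = (icpt_s - icpt_i) / (slope_i - slope_s)
print(round(tiw, 1))  # crossing point in ms, on the order of the ~65 ms windows cited
```

With these made-up values the crossing point lands in the low 60s of milliseconds, comparable in scale to the windows the abstract reports; real analyses would typically fit sigmoid rather than linear functions.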
Affiliation(s)
- Julie Freschl
- Department of Psychology, University of Massachusetts Boston, Boston, MA, USA
- David Melcher
- Department of Psychology, University of Massachusetts Boston, Boston, MA, USA
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, Italy
- Zsuzsa Kaldy
- Department of Psychology, University of Massachusetts Boston, Boston, MA, USA
- Erik Blaser
- Department of Psychology, University of Massachusetts Boston, Boston, MA, USA
|
97
|
Continual lifelong learning with neural networks: A review. Neural Netw 2019; 113:54-71. [DOI: 10.1016/j.neunet.2019.01.012] [Citation(s) in RCA: 322] [Impact Index Per Article: 64.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2018] [Revised: 01/18/2019] [Accepted: 01/22/2019] [Indexed: 10/27/2022]
|
98
|
Casartelli L. Stability and flexibility in multisensory sampling: insights from perceptual illusions. J Neurophysiol 2019; 121:1588-1590. [PMID: 30840541 DOI: 10.1152/jn.00060.2019] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023] Open
Abstract
Neural, oscillatory, and computational counterparts of multisensory processing remain a crucial challenge for neuroscientists. Converging evidence underlines a certain efficiency in balancing stability and flexibility of sensory sampling, supporting the general idea that multiple parallel and hierarchically organized processing stages in the brain contribute to our understanding of the (sensory/perceptual) world. Intriguingly, how temporal dynamics impact and modulate multisensory processes in the brain can be investigated by drawing on studies of perceptual illusions.
Affiliation(s)
- Luca Casartelli
- Scientific Institute IRCCS E. Medea, Child Psychopathology Unit, Bosisio Parini, Italy
|
99
|
Abstract
Real-world environments are typically dynamic, complex, and multisensory in nature and require the support of top-down attention and memory mechanisms for us to be able to drive a car, make a shopping list, or pour a cup of coffee. Fundamental principles of perception and functional brain organization have been established by research utilizing well-controlled but simplified paradigms with basic stimuli. The last 30 years ushered in a revolution in computational power, brain mapping, and signal processing techniques. Drawing on those theoretical and methodological advances, over the years, research has departed more and more from traditional, rigorous, and well-understood paradigms to directly investigate cognitive functions and their underlying brain mechanisms in real-world environments. These investigations typically address the role of one or, more recently, multiple attributes of real-world environments. Fundamental assumptions about perception, attention, or brain functional organization have been challenged by studies adapting the traditional paradigms to emulate, for example, the multisensory nature or varying relevance of stimulation or dynamically changing task demands. Here, we present the state of the field within the emerging heterogeneous domain of real-world neuroscience. To be precise, the aim of this Special Focus is to bring together a variety of the emerging "real-world neuroscientific" approaches. These approaches differ in their principal aims, assumptions, or even definitions of "real-world neuroscience" research. Here, we showcase the commonalities and distinctive features of the different "real-world neuroscience" approaches. To do so, four early-career researchers and the speakers of the Cognitive Neuroscience Society 2017 Meeting symposium under the same title answer questions pertaining to the added value of such approaches in bringing us closer to accurate models of functional brain organization and cognitive functions.
Affiliation(s)
- Pawel J Matusz
- University Hospital Center and University of Lausanne
- University of Applied Sciences Western Switzerland (HES SO Valais)
|
100
|
Matusz PJ, Turoman N, Tivadar RI, Retsa C, Murray MM. Brain and Cognitive Mechanisms of Top–Down Attentional Control in a Multisensory World: Benefits of Electrical Neuroimaging. J Cogn Neurosci 2019; 31:412-430. [DOI: 10.1162/jocn_a_01360] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
In real-world environments, information is typically multisensory, and objects are a primary unit of information processing. Object recognition and action necessitate attentional selection of task-relevant from among task-irrelevant objects. However, the brain and cognitive mechanisms governing these processes remain poorly understood. Here, we demonstrate that attentional selection of visual objects is controlled by integrated top–down audiovisual object representations ("attentional templates") while revealing a new brain mechanism through which they can operate. In multistimulus (visual) arrays, attentional selection of objects in humans and animal models is traditionally quantified via "the N2pc component": spatially selective enhancements of neural processing of objects within ventral visual cortices at approximately 150–300 msec poststimulus. In our adaptation of Folk et al.'s [Folk, C. L., Remington, R. W., & Johnston, J. C. Involuntary covert orienting is contingent on attentional control settings. Journal of Experimental Psychology: Human Perception and Performance, 18, 1030–1044, 1992] spatial cueing paradigm, visual cues elicited weaker behavioral attention capture and an attenuated N2pc during audiovisual versus visual search. To provide direct evidence for the brain, and thus cognitive, mechanisms underlying top–down control in multisensory search, we analyzed global features of the electrical field at the scalp across our N2pcs. In the N2pc time window (170–270 msec), color cues elicited brain responses differing in strength and topography. This latter finding is indicative of changes in active brain sources. Thus, in multisensory environments, attentional selection is controlled via integrated top–down object representations, and not only by separate sensory-specific top–down feature templates (as suggested by traditional N2pc analyses).
We discuss how the electrical neuroimaging approach can aid research on top–down attentional control in naturalistic, multisensory settings and on other neurocognitive functions in the growing area of real-world neuroscience.
Affiliation(s)
- Pawel J. Matusz
- University of Applied Sciences Western Switzerland (HES-SO Valais)
- University Hospital Centre and University of Lausanne
- Vanderbilt University, Nashville, TN
- Nora Turoman
- University Hospital Centre and University of Lausanne
- Ruxandra I. Tivadar
- University Hospital Centre and University of Lausanne
- University of Lausanne and Fondation Asile des Aveugles
- Chrysa Retsa
- University Hospital Centre and University of Lausanne
- Micah M. Murray
- University Hospital Centre and University of Lausanne
- Vanderbilt University, Nashville, TN
- University of Lausanne and Fondation Asile des Aveugles
|