1. Dwyer P, Takarae Y, Zadeh I, Rivera SM, Saron CD. Multisensory integration and interactions across vision, hearing, and somatosensation in autism spectrum development and typical development. Neuropsychologia 2022; 175:108340. PMID: 36028085. DOI: 10.1016/j.neuropsychologia.2022.108340.
Abstract
Most prior studies of multisensory integration (MSI) in autism have measured MSI in only a single combination of modalities, typically audiovisual integration. The present study used onset reaction times (RTs) and 125-channel electroencephalography (EEG) to examine different forms of bimodal and trimodal MSI based on combinations of auditory (noise burst), somatosensory (finger tap), and visual (flash) stimuli presented in a spatially aligned manner using a custom desktop apparatus. A total of 36 autistic and 19 non-autistic adolescents aged 11 to 14 years participated. Significant RT multisensory facilitation relative to summed unisensory RT was observed in both groups, as were significant differences between summed unisensory and multisensory ERPs. Although the present study's statistical approach was not intended to test effect latencies, these interactions may have begun as early as ∼45 ms, constituting "early" (<100 ms) MSI. RT and ERP measurements of MSI appeared independent of one another. The groups did not differ significantly in multisensory RT facilitation, but we found exploratory evidence of group differences in the magnitude of audiovisual interactions in ERPs. Future research should make greater efforts to explore MSI in under-represented populations, especially autistic people with intellectual disabilities and nonspeaking/minimally verbal autistic people.
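The ERP analysis described above compares multisensory responses against the sum of unisensory responses (the additive model). Below is a minimal synthetic sketch of that logic (AV vs. A + V); the channel count, time window, and effect size are invented for illustration and are not taken from the study.

```python
import numpy as np

# Additive model: MSI is inferred where the multisensory ERP differs from the
# sum of the unisensory ERPs. Synthetic single-channel data (trials x timepoints).
rng = np.random.default_rng(0)
n_trials, n_times = 60, 300
erp_a = rng.normal(0, 1, (n_trials, n_times))   # auditory-only trials
erp_v = rng.normal(0, 1, (n_trials, n_times))   # visual-only trials
erp_av = rng.normal(0, 1, (n_trials, n_times))  # audiovisual trials
erp_av[:, 100:140] += 1.5  # build in a super-additive interaction window

summed = erp_a.mean(axis=0) + erp_v.mean(axis=0)  # summed unisensory ERP
interaction = erp_av.mean(axis=0) - summed        # AV - (A + V)
peak = int(np.abs(interaction).argmax())
print(100 <= peak < 140)  # interaction peaks inside the built-in window
```

In real data the difference wave would be tested statistically across participants and electrodes rather than by locating a single peak.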
Affiliation(s)
- Patrick Dwyer: Department of Psychology, UC Davis, USA; Center for Mind and Brain, UC Davis, USA.
- Yukari Takarae: Department of Neurosciences, UC San Diego, USA; Department of Psychology, San Diego State University, USA.
- Susan M Rivera: Department of Psychology, UC Davis, USA; Center for Mind and Brain, UC Davis, USA; MIND Institute, UC Davis, USA.
- Clifford D Saron: Center for Mind and Brain, UC Davis, USA; MIND Institute, UC Davis, USA.
2. Melo M, Goncalves G, Monteiro P, Coelho H, Vasconcelos-Raposo J, Bessa M. Do Multisensory Stimuli Benefit the Virtual Reality Experience? A Systematic Review. IEEE Transactions on Visualization and Computer Graphics 2022; 28:1428-1442. PMID: 32746276. DOI: 10.1109/tvcg.2020.3010088.
Abstract
The majority of virtual reality (VR) applications rely on audiovisual stimuli and do not exploit the addition of other sensory cues that could increase the potential of VR. This systematic review surveys the existing literature on multisensory VR and the impact of haptic, olfactory, and taste cues over audiovisual VR. The goal is to identify the extent to which multisensory stimuli affect the VR experience, which stimuli are used in multisensory VR, the type of VR setups used, and the application fields covered. An analysis of the 105 studies that met the eligibility criteria revealed that 84.8 percent of the studies show a positive impact of multisensory VR experiences. Haptics is the most commonly used stimulus in multisensory VR systems (86.6 percent). Non-immersive and immersive VR setups are preferred over semi-immersive setups. Regarding application fields, a considerable share of the studies came from health professionals and from science and engineering professionals. We further conclude that smell and taste are still underexplored and can bring significant value to VR applications. More research is recommended on how to synthesize and deliver these stimuli, which still require complex and costly apparatus, so that they can be integrated into the VR experience in a controlled and straightforward manner.
3. Ashmaig O, Hamilton LS, Modur P, Buchanan RJ, Preston AR, Watrous AJ. A Platform for Cognitive Monitoring of Neurosurgical Patients During Hospitalization. Front Hum Neurosci 2021; 15:726998. PMID: 34880738. PMCID: PMC8645698. DOI: 10.3389/fnhum.2021.726998.
Abstract
Intracranial recordings in epilepsy patients are increasingly utilized to gain insight into the electrophysiological mechanisms of human cognition. There are currently several practical limitations to conducting research with these patients, including patient and researcher availability and the cognitive abilities of patients, which limit the amount of task-related data that can be collected. Prior studies have synchronized clinical audio, video, and neural recordings to understand naturalistic behaviors, but these recordings are centered on the patient to understand their seizure semiology and thus do not capture and synchronize audiovisual stimuli experienced by patients. Here, we describe a platform for cognitive monitoring of neurosurgical patients during their hospitalization that benefits both patients and researchers. We provide the full specifications for this system and describe some example use cases in perception, memory, and sleep research. We provide results obtained from a patient passively watching TV as proof-of-principle for the naturalistic study of cognition. Our system opens up new avenues to collect more data per patient using real-world behaviors, affording new possibilities to conduct longitudinal studies of the electrophysiological basis of human cognition under naturalistic conditions.
Affiliation(s)
- Omer Ashmaig: Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, TX, United States; Center for Learning and Memory, The University of Texas at Austin, Austin, TX, United States.
- Liberty S. Hamilton: Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, TX, United States; Institute for Neuroscience, The University of Texas at Austin, Austin, TX, United States; Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, TX, United States.
- Pradeep Modur: Seton Brain and Spine Institute, Division of Neurosurgery, Austin, TX, United States.
- Robert J. Buchanan: Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, TX, United States; Seton Brain and Spine Institute, Division of Neurosurgery, Austin, TX, United States; Department of Neurosurgery, Dell Medical School, The University of Texas at Austin, Austin, TX, United States; Department of Psychiatry and Behavioral Sciences, Dell Medical School, The University of Texas at Austin, Austin, TX, United States.
- Alison R. Preston: Center for Learning and Memory, The University of Texas at Austin, Austin, TX, United States; Institute for Neuroscience, The University of Texas at Austin, Austin, TX, United States; Department of Psychology, The University of Texas at Austin, Austin, TX, United States; Department of Neuroscience, The University of Texas at Austin, Austin, TX, United States.
- Andrew J. Watrous: Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, TX, United States; Center for Learning and Memory, The University of Texas at Austin, Austin, TX, United States; Institute for Neuroscience, The University of Texas at Austin, Austin, TX, United States; Department of Psychology, The University of Texas at Austin, Austin, TX, United States; Department of Neurosurgery, Baylor College of Medicine, Houston, TX, United States.
4. Rau PLP, Zheng J, Wang L, Zhao J, Wang D. Haptic and Auditory-Haptic Attentional Blink in Spatial and Object-Based Tasks. Multisens Res 2020; 33:295-312. PMID: 31883506. DOI: 10.1163/22134808-20191483.
Abstract
Dual-task performance depends on the modalities involved (e.g., vision, audition, haptics), on the task types (spatial or object-based), and on the order in which different task types are arranged. Previous studies on haptic, and especially auditory-haptic, attentional blink (AB) are scarce, and the effects of task type and task order have not been fully explored. In this study, 96 participants, divided into four groups of task type combinations, identified an auditory or haptic Target 1 (T1) and a haptic Target 2 (T2) in rapid series of sounds and forces. We observed a haptic AB (i.e., the accuracy of identifying T2 increased with increasing stimulus onset asynchrony between T1 and T2) in the spatial, object-based, and object-spatial tasks, but not in the spatial-object task. Changing the modality of an object-based T1 from haptics to audition eliminated the AB, but a similar haptic-to-auditory change of the modality of a spatial T1 had no effect on the AB (if it exists). Our findings fill a gap in the literature regarding the auditory-haptic AB, and substantiate the importance of modalities, task types and their order, and the interactions among them. These findings were explained by how the cerebral cortex is organized for processing spatial and object-based information in different modalities.
Affiliation(s)
- Jian Zheng: Department of Industrial Engineering, Tsinghua University, Beijing, China.
- Lijun Wang: State Key Lab of Virtual Reality Technology and Systems, Beihang University, Beijing, China.
- Jingyu Zhao: Department of Industrial Engineering, Tsinghua University, Beijing, China.
- Dangxiao Wang: State Key Lab of Virtual Reality Technology and Systems, Beihang University, Beijing, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, China; Peng Cheng Laboratory (PCL), Shenzhen, Guangdong Province, China.
5. Sosso FAE, Kuss DJ, Vandelanotte C, Jasso-Medrano JL, Husain ME, Curcio G, Papadopoulos D, Aseem A, Bhati P, Lopez-Rosales F, Becerra JR, D'Aurizio G, Mansouri H, Khoury T, Campbell M, Toth AJ. Insomnia, sleepiness, anxiety and depression among different types of gamers in African countries. Sci Rep 2020; 10:1937. PMID: 32029773. PMCID: PMC7005289. DOI: 10.1038/s41598-020-58462-0.
Abstract
Gaming has increasingly become a part of life in Africa. Currently, no data on gaming disorders or their association with mental disorders exist for African countries. This study for the first time investigated (1) the prevalence of insomnia, excessive daytime sleepiness, anxiety and depression among African gamers, (2) the association between these conditions and gamer types (i.e., non-problematic, engaged, problematic and addicted) and (3) the predictive power of socioeconomic markers (education, age, income, marital status, employment status) on these conditions. A total of 10,566 people from two low-income (Rwanda, Gabon), six lower-middle-income (Cameroon, Nigeria, Morocco, Tunisia, Senegal, Ivory Coast) and one upper-middle-income country (South Africa) completed online questionnaires containing validated measures on insomnia, sleepiness, anxiety, depression and gaming addiction. Results showed that, in our sample of gamers (24 ± 2.8 yrs; 88.64% male), 30% were addicted, 30% were problematic, 8% were engaged and 32% were non-problematic. Gaming significantly contributed to 86.9% of the variance in insomnia, 82.7% of the variance in daytime sleepiness and 82.3% of the variance in anxiety [p < 0.001]. This study establishes the prevalence of gaming, mood and sleep disorders in a large African sample. Our results corroborate previous studies in reporting that problematic and addicted gamers show poorer health outcomes than non-problematic gamers.
Affiliation(s)
- F A Etindele Sosso: Center for Advanced Studies in Sleep Medicine, Hopital du Sacré-Coeur de Montreal, Research Center of Cognitive Neurosciences, Institut Santé et Société, Université du Québec à Montreal, Québec, Canada.
- D J Kuss: School of Social Sciences, Department of Psychology, International Gaming Research Unit and the Cyberpsychology Group, Nottingham Trent University, Nottingham, UK.
- C Vandelanotte: School of Health, Medical and Applied Sciences, Physical Activity Research Group, Central Queensland University, Rockhampton, Australia.
- J L Jasso-Medrano: Center for Research in Nutrition and Public Health, Autonomous University of Nuevo Leon, Monterrey, N.L., Mexico.
- M E Husain: Centre for Physiotherapy and Rehabilitation Sciences, Jamia Millia Islamia, New Delhi, India.
- G Curcio: Department of Biotechnological and Applied Clinical Sciences, University of L'Aquila, L'Aquila, Italy.
- D Papadopoulos: Department of Pulmonology, Army Share Fund Hospital, Athens, Greece.
- A Aseem: Centre for Physiotherapy and Rehabilitation Sciences, Jamia Millia Islamia, New Delhi, India.
- P Bhati: Centre for Physiotherapy and Rehabilitation Sciences, Jamia Millia Islamia, New Delhi, India.
- F Lopez-Rosales: Innovation and Evaluation in Health Psychology, Faculty of Psychology, Autonomous University of Nuevo Leon, Monterrey, Nuevo Leon, Mexico.
- J Ramon Becerra: Innovation and Evaluation in Health Psychology, Faculty of Psychology, Autonomous University of Nuevo Leon, Monterrey, Nuevo Leon, Mexico.
- G D'Aurizio: Department of Biotechnological and Applied Clinical Sciences, University of L'Aquila, L'Aquila, Italy.
- H Mansouri: Faculty of Medicine, Université de Montréal, Montréal, Québec, Canada.
- T Khoury: Department of Biomedical Sciences, Faculty of Arts and Sciences, University of Montréal, Montréal, Québec, Canada.
- M Campbell: Department of Physical Education and Sport Sciences, University of Limerick, Limerick, Ireland.
- A J Toth: Lero Irish Software Research Centre, University of Limerick, Limerick, Ireland.
6. Bailey HD, Mullaney AB, Gibney KD, Kwakye LD. Audiovisual Integration Varies With Target and Environment Richness in Immersive Virtual Reality. Multisens Res 2018; 31:689-713. PMID: 31264608. DOI: 10.1163/22134808-20181301.
Abstract
We are continually bombarded by information arriving at each of our senses; however, the brain seems to effortlessly integrate this separate information into a unified percept. Although multisensory integration has been researched extensively using simple computer tasks and stimuli, much less is known about how multisensory integration functions in real-world contexts. Additionally, several recent studies have demonstrated that multisensory integration varies tremendously across naturalistic stimuli. Virtual reality can be used to study multisensory integration in realistic settings because it combines realism with precise control over the environment and stimulus presentation. In the current study, we investigated whether multisensory integration, as measured by the redundant signals effect (RSE), is observable in naturalistic environments using virtual reality, and whether it differs as a function of target and/or environment cue-richness. Participants detected auditory, visual, and audiovisual targets which varied in cue-richness within three distinct virtual worlds that also varied in cue-richness. We demonstrated integrative effects in each environment-by-target pairing, and further showed a modest effect on multisensory integration as a function of target cue-richness, but only in the cue-rich environment. Our study is the first to definitively show that minimal and more naturalistic tasks elicit comparable redundant signals effects. Our results also suggest that multisensory integration may function differently depending on the features of the environment. The results of this study have important implications for the design of virtual multisensory environments that are currently being used for training, educational, and entertainment purposes.
Affiliation(s)
- Kyla D Gibney: Department of Neuroscience, Oberlin College, Oberlin, OH, USA.
8. I act, therefore I err: EEG correlates of success and failure in a virtual throwing game. Int J Psychophysiol 2017; 122:32-41. DOI: 10.1016/j.ijpsycho.2017.02.007.
9.
Abstract
Integration of sensory information across modalities can confer behavioral advantages by decreasing perceptual ambiguity, speeding reaction times, and increasing detection accuracy relative to unisensory stimuli. We asked how combinations of auditory, visual, and somatosensory events alter response time. Participants detected stimulation on one side of space (right or left) while ignoring stimulation on the other side. There were seven types of suprathreshold stimuli: auditory (tones from speakers), visual (sinusoidal contrast gratings), somatosensory (fingertip vibrations), audio-visual, somato-visual, audio-somatosensory, and audio-somato-visual. Response enhancement and race model analysis confirmed that bisensory and trisensory trials speeded response times relative to unisensory trials. Exploratory analysis of individual differences in intersensory facilitation revealed that participants fit into one of two groups: those who benefitted from trisensory information and those who did not.
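The race-model analysis used in this abstract (and in several other entries in this list) tests whether multisensory response times beat the probability summation of unisensory ones, per Miller's inequality. Below is a minimal sketch of that test; the function and the synthetic RT distributions are illustrative, not drawn from the study.

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, probs=np.arange(0.05, 1.0, 0.05)):
    """Miller's race-model inequality: P(RT_av <= t) <= P(RT_a <= t) + P(RT_v <= t).
    Returns the per-quantile difference between the multisensory CDF and the
    race-model bound; positive values indicate a violation (genuine integration)."""
    t = np.quantile(rt_av, probs)  # probe times from the multisensory RT distribution
    def ecdf(rts, ts):
        return np.searchsorted(np.sort(rts), ts, side="right") / len(rts)
    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)  # race-model upper bound
    return ecdf(rt_av, t) - bound

# Synthetic example: multisensory RTs faster than the race model allows
rng = np.random.default_rng(1)
rt_a = rng.normal(420, 50, 2000)   # unisensory auditory RTs (ms)
rt_v = rng.normal(440, 50, 2000)   # unisensory visual RTs (ms)
rt_av = rng.normal(260, 40, 2000)  # strongly facilitated multisensory RTs (ms)
violation = race_model_violation(rt_a, rt_v, rt_av)
print(violation.max() > 0)  # the bound is exceeded at the fast quantiles
```

In practice the violation is tested at the fast quantiles across participants; a positive difference there rules out a simple race between independent unisensory channels.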
10. Dickerson K, Gerhardstein P, Moser A. The Role of the Human Mirror Neuron System in Supporting Communication in a Digital World. Front Psychol 2017; 8:698. PMID: 28553240. PMCID: PMC5427119. DOI: 10.3389/fpsyg.2017.00698.
Abstract
Humans use both verbal and non-verbal communication to interact with others and their environment, and increasingly these interactions occur in a digital medium. Whether live or digital, learning to communicate requires overcoming the correspondence problem: there is no direct mapping, or correspondence, between perceived and self-produced signals. Reconciliation of the differences between perceived and produced actions, including linguistic actions, is difficult and requires integration across multiple modalities and neuro-cognitive networks. Recent work on the neural substrates of social learning suggests that there may be a common mechanism underlying the perception-production cycle for verbal and non-verbal communication. The purpose of this paper is to review evidence supporting the link between verbal and non-verbal communication, and to extend the human mirror neuron system (hMNS) literature by proposing that recent advances in communication technology, which at times have had deleterious effects on behavioral and perceptual performance, may disrupt the success of the hMNS in supporting social interactions because these technologies are virtual and spatiotemporally distributed in nature.
Affiliation(s)
- Kelly Dickerson: U.S. Army Research Laboratory, Human Research and Engineering, Aberdeen, MD, USA.
- Alecia Moser: Department of Psychology, Binghamton University, Binghamton, NY, USA.
11. Bayesian-based integration of multisensory naturalistic perithreshold stimuli. Neuropsychologia 2016; 88:123-130. DOI: 10.1016/j.neuropsychologia.2015.12.017.
12. Balkenius A, Balkenius C. Multimodal interaction in the insect brain. BMC Neurosci 2016; 17:29. PMID: 27246183. PMCID: PMC4888552. DOI: 10.1186/s12868-016-0258-7.
Abstract
Background: The magnitude of multimodal enhancement in the brain is believed to depend on stimulus intensity and timing. Such an effect has been found in many species, but had not previously been investigated in insects. Results: We investigated the responses to multimodal stimuli consisting of an odour and a colour in the antennal lobe and mushroom body of the moth Manduca sexta. The mushroom body shows enhanced responses to multimodal stimuli consisting of a general flower odour and a blue colour. No such effect was seen for a bergamot odour. The enhancement shows an inverse effectiveness, whereby the responses to weaker multimodal stimuli are amplified more than those to stronger stimuli. Furthermore, the enhancement depends on the precise timing of the two stimulus components. Conclusions: Insect multimodal processing shows both the principle of inverse effectiveness and the existence of an optimal temporal window.
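The inverse effectiveness reported here is commonly quantified with the multisensory enhancement index of Meredith and Stein; a minimal sketch, with made-up response values chosen only to illustrate the pattern:

```python
def enhancement_index(multi, best_uni):
    """Multisensory enhancement (%) relative to the best unisensory response:
    ME = 100 * (CM - max_uni) / max_uni."""
    return 100.0 * (multi - best_uni) / best_uni

# Inverse effectiveness: weaker unisensory responses yield larger enhancement.
weak = enhancement_index(multi=8.0, best_uni=4.0)      # weak stimuli
strong = enhancement_index(multi=55.0, best_uni=50.0)  # strong stimuli
print(weak, strong)  # 100.0 10.0
```

The same absolute gain produces a much larger proportional enhancement when the unisensory responses are weak, which is the signature the abstract describes.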
Affiliation(s)
- Anna Balkenius: Department of Plant Protection Biology, Swedish University of Agricultural Sciences, Box 102, 230 53, Alnarp, Sweden.
13. Buchs G, Maidenbaum S, Levy-Tzedek S, Amedi A. Integration and binding in rehabilitative sensory substitution: Increasing resolution using a new Zooming-in approach. Restor Neurol Neurosci 2016; 34:97-105. PMID: 26518671. PMCID: PMC4927841. DOI: 10.3233/rnn-150592.
Abstract
PURPOSE: To visually perceive our surroundings we constantly move our eyes, focus on particular details, and then integrate them into a combined whole. Current visual rehabilitation methods, both invasive (e.g., bionic eyes) and non-invasive (e.g., Sensory Substitution Devices, SSDs), down-sample visual stimuli into low-resolution images. Zooming in to sub-parts of the scene could potentially improve detail perception. Can congenitally blind individuals integrate a 'visual' scene when offered this information via a different sensory modality, such as audition? Can they integrate visual information, perceived in parts, into larger percepts despite never having had any visual experience? METHODS: We explored these questions using a zooming-in functionality embedded in the EyeMusic visual-to-auditory SSD. Eight blind participants were tasked with identifying cartoon faces by integrating their individual components recognized via the EyeMusic's zooming mechanism. RESULTS: After specialized training of just 6-10 hours, blind participants successfully and actively integrated facial features into cartooned identities in 79 ± 18% of the trials, a highly significant result (chance level 10%; rank-sum P < 1.55E-04). CONCLUSIONS: These findings show that even users who lack any previous visual experience can indeed integrate such visual information at increased resolution. This has potentially important practical implications for both invasive and non-invasive visual rehabilitation methods.
Affiliation(s)
- Galit Buchs: Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel.
- Shachar Maidenbaum: The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel.
- Shelly Levy-Tzedek: Recanati School for Community Health Professions, Department of Physical Therapy, Ben Gurion University of the Negev, Beer-Sheva, Israel; Zlotowski Center for Neuroscience, Ben Gurion University of the Negev, Beer-Sheva, Israel.
- Amir Amedi: Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; Sorbonne Universités, UPMC Univ Paris 06, Institut de la Vision, Paris, France.
14. Pratt H, Bleich N, Mittelman N. Spatio-temporal distribution of brain activity associated with audio-visually congruent and incongruent speech and the McGurk Effect. Brain Behav 2015; 5:e00407. PMID: 26664791. PMCID: PMC4667754. DOI: 10.1002/brb3.407.
Abstract
INTRODUCTION: Spatio-temporal distributions of cortical activity to audio-visual presentations of meaningless vowel-consonant-vowels, and the effects of audio-visual congruence/incongruence, were studied with emphasis on the McGurk effect. The McGurk effect occurs when a clearly audible syllable with one consonant is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant, and the resulting percept is a syllable with a consonant other than the auditorily presented one. METHODS: Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated, and effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. RESULTS: Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early (<200 msec) processing of the consonant was overall prominent in the left hemisphere, except for right hemisphere prominence in superior parietal cortex and secondary visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, followed by increased activity around 100 msec, which decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and was only increased to congruent audio-visual stimuli 30-70 msec following consonant onset. CONCLUSIONS: The results suggest left hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration. Initially (30-70 msec), subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex occur. During 100-140 msec, peristriate visual influences and Wernicke's area join in the processing. Resolution of incongruent audio-visual inputs is then attempted, and if successful, McGurk perception occurs and cortical activity in the left hemisphere further increases between 170 and 260 msec.
Affiliation(s)
- Hillel Pratt: Evoked Potentials Laboratory, Technion - Israel Institute of Technology, Haifa 32000, Israel.
- Naomi Bleich: Evoked Potentials Laboratory, Technion - Israel Institute of Technology, Haifa 32000, Israel.
- Nomi Mittelman: Evoked Potentials Laboratory, Technion - Israel Institute of Technology, Haifa 32000, Israel.
15. Pieszek M, Schröger E, Widmann A. Separate and concurrent symbolic predictions of sound features are processed differently. Front Psychol 2014; 5:1295. PMID: 25477832. PMCID: PMC4235414. DOI: 10.3389/fpsyg.2014.01295.
Abstract
These studies investigated the impact of predictive visual information about the pitch and location of a forthcoming sound on sound processing. In symbol-to-sound matching paradigms, symbols induced predictions of particular sounds. The brain's error signals (the IR and N2b components of the event-related potential) were measured in response to occasional violations of the prediction, i.e., when a sound was incongruent with the corresponding symbol. IR and N2b index the detection of prediction violations at different levels: IR at a sensory level and N2b at a cognitive level. Participants evaluated the congruency between prediction and actual sound by button press. When the prediction referred to only the pitch or only the location feature (Experiment 1), the violation of each feature elicited IR and N2b. The IRs to pitch and location violations differed in time course and topography, suggesting that they were generated in feature-specific sensory areas. When the prediction referred to both features concurrently (Experiment 2), that is, when the symbol predicted both the sound's pitch and its location, either one or both predictions were violated. Unexpectedly, no significant effects in the IR range were obtained. However, N2b was elicited in response to all violations, and N2b in response to concurrent violations of pitch and location had a shorter latency. We conclude that associative predictions can be established by arbitrary rule-based symbols and for different sound features, and that concurrent violations are processed in parallel. In complex situations such as Experiment 2, capacity limitations appear to affect processing in a hierarchical manner. While predictions were presumably not reliably established at sensory levels (absence of IR), they were established at more cognitive levels, where sounds are represented categorially (presence of N2b).
Affiliation(s)
- Marika Pieszek: Cognitive incl. Biological Psychology, Institute of Psychology, University of Leipzig, Leipzig, Germany.
- Erich Schröger: Cognitive incl. Biological Psychology, Institute of Psychology, University of Leipzig, Leipzig, Germany.
- Andreas Widmann: Cognitive incl. Biological Psychology, Institute of Psychology, University of Leipzig, Leipzig, Germany.
16. Pomper U, Brincker J, Harwood J, Prikhodko I, Senkowski D. Taking a call is facilitated by the multisensory processing of smartphone vibrations, sounds, and flashes. PLoS One 2014; 9:e103238. PMID: 25116195. PMCID: PMC4130528. DOI: 10.1371/journal.pone.0103238.
Abstract
Many electronic devices that we use in our daily lives provide inputs that need to be processed and integrated by our senses. For instance, ringing, vibrating, and flashing indicate incoming calls and messages in smartphones. Whether the presentation of multiple smartphone stimuli simultaneously provides an advantage over the processing of the same stimuli presented in isolation has not yet been investigated. In this behavioral study we examined multisensory processing between visual (V), tactile (T), and auditory (A) stimuli produced by a smartphone. Unisensory V, T, and A stimuli as well as VA, AT, VT, and trisensory VAT stimuli were presented in random order. Participants responded to any stimulus appearance by touching the smartphone screen using the stimulated hand (Experiment 1), or the non-stimulated hand (Experiment 2). We examined violations of the race model to test whether shorter response times to multisensory stimuli exceed probability summations of unisensory stimuli. Significant violations of the race model, indicative of multisensory processing, were found for VA stimuli in both experiments and for VT stimuli in Experiment 1. Across participants, the strength of this effect was not associated with prior learning experience and daily use of smartphones. This indicates that this integration effect, similar to what has been previously reported for the integration of semantically meaningless stimuli, could involve bottom-up driven multisensory processes. Our study demonstrates for the first time that multisensory processing of smartphone stimuli facilitates taking a call. Thus, research on multisensory integration should be taken into consideration when designing electronic devices such as smartphones.
Affiliation(s)
- Ulrich Pomper, Jana Brincker, James Harwood, Ivan Prikhodko, and Daniel Senkowski: Department of Psychiatry and Psychotherapy, St. Hedwig Hospital, Charité-Universitätsmedizin Berlin, Berlin, Germany.
17. Gruzelier J, Bamidis P, Pagani L, Reiner M, Ros T. Applied Neuroscience: Functional enhancement, prevention, characterisation and methodology (hosting the Society of Applied Neuroscience). Int J Psychophysiol 2014; 93:ix-xii. DOI: 10.1016/s0167-8760(14)00129-9.