26. Chai Y, Liu TT, Marrett S, Li L, Khojandi A, Handwerker DA, Alink A, Muckli L, Bandettini PA. Topographical and laminar distribution of audiovisual processing within human planum temporale. Prog Neurobiol 2021; 205:102121. PMID: 34273456. DOI: 10.1016/j.pneurobio.2021.102121.
Abstract
The brain is capable of integrating signals from multiple sensory modalities. Such multisensory integration can occur in areas that are commonly considered unisensory, such as planum temporale (PT), the auditory association cortex. However, the roles of different afferents (feedforward vs. feedback) to PT in multisensory processing are not well understood. Our study aims to address this question by examining laminar activity patterns in different topographical subfields of human PT under unimodal and multisensory stimuli. To this end, we adopted an advanced mesoscopic (sub-millimeter) fMRI methodology at 7 T, acquiring BOLD (blood-oxygen-level-dependent contrast, which has higher sensitivity) and VAPER (integrated blood volume and perfusion contrast, which has superior laminar specificity) signals concurrently, and performed all analyses in native fMRI space, benefiting from identical acquisition of functional and anatomical images. We found a division of function between visual and auditory processing in PT and distinct feedback mechanisms in different subareas. Specifically, anterior PT was activated more by auditory inputs and received feedback modulation in superficial layers. This feedback depended on task performance and likely arose from top-down influences from higher-order multimodal areas. In contrast, posterior PT was preferentially activated by visual inputs and received visual feedback in both superficial and deep layers, likely projected directly from the early visual cortex. Together, these findings provide novel insights into the mechanisms of multisensory interaction in human PT at the mesoscopic spatial scale.
27. Muller AM, Dalal TC, Stevenson RA. Schizotypal personality traits and multisensory integration: An investigation using the McGurk effect. Acta Psychol (Amst) 2021; 218:103354. PMID: 34174491. DOI: 10.1016/j.actpsy.2021.103354.
Abstract
Multisensory integration, the process by which sensory information from different sensory modalities is bound together, is hypothesized to contribute to perceptual symptomatology in individuals with schizophrenia, in whom multisensory integration differences have consistently been found. Evidence is emerging that these differences extend across the schizophrenia spectrum, including individuals in the general population with higher levels of schizotypal traits. In the current study, we used the McGurk task as a measure of multisensory integration. We measured schizotypal traits using the Schizotypal Personality Questionnaire (SPQ), hypothesizing that higher levels of schizotypal traits, specifically on the Unusual Perceptual Experiences and Odd Speech subscales, would be associated with decreased multisensory integration of speech. Surprisingly, Unusual Perceptual Experiences were not associated with multisensory integration. However, Odd Speech was associated with multisensory integration, and this association extended more broadly across the Disorganized factor of the SPQ, including Odd or Eccentric Behaviour. Individuals with higher Odd or Eccentric Behaviour scores also demonstrated poorer lip-reading abilities, which partially explained performance on the McGurk task. This suggests that aberrant perceptual processes affecting individuals across the schizophrenia spectrum may relate to disorganized symptomatology.
28. Tierney A, Gomez JC, Fedele O, Kirkham NZ. Reading ability in children relates to rhythm perception across modalities. J Exp Child Psychol 2021; 210:105196. PMID: 34090237. DOI: 10.1016/j.jecp.2021.105196.
Abstract
The onset of reading ability is rife with individual differences, with some children termed "early readers" and others falling behind from the very beginning. Reading skill in children has been linked to the ability to remember nonverbal rhythms, specifically in the auditory modality. It has been hypothesized that the link between rhythm skills and reading reflects a shared reliance on the ability to extract temporal structure from sound. Here we tested this hypothesis by investigating whether the link between rhythm memory and reading depends on the modality in which rhythms are presented. We tested 75 primary school children aged 7-11 years on a within-participants battery of reading and rhythm tasks. Participants received a reading efficiency task followed by three rhythm tasks (auditory, visual, and audiovisual). Results showed that children who performed poorly on the reading task also performed poorly on the tasks that required them to remember and repeat back nonverbal rhythms. In addition, these children showed a rhythmic deficit not just in the auditory domain but also in the visual domain. However, auditory rhythm memory explained additional variance in reading ability even after controlling for visual memory. These results suggest that reading ability and rhythm memory rely both on shared modality-general cognitive processes and on the ability to perceive the temporal structure of sound.
29. Manfredi M, Cohn N, Ribeiro B, Sanchez Pinho P, Fernandes Rodrigues Pereira E, Boggio PS. The electrophysiology of audiovisual processing in visual narratives in adolescents with autism spectrum disorder. Brain Cogn 2021; 151:105730. PMID: 33892434. DOI: 10.1016/j.bandc.2021.105730.
Abstract
We investigated the semantic processing of multimodal audiovisual combinations of visual narratives with auditory descriptive words and auditory sounds in individuals with ASD. To this aim, we recorded ERPs to critical auditory words and sounds associated with events in a visual narrative that were either semantically congruent or incongruent with the climactic visual event. A similar N400 effect was found in both adolescents with ASD and neurotypical adolescents (ages 9-16) when integrating different types of auditory information (i.e., words and sounds) into a visual narrative. This result might suggest that verbal information processing in adolescents with ASD could be facilitated by direct association with meaningful visual information. In addition, we observed differences in the scalp distribution of later brain responses between ASD and neurotypical adolescents. This finding might suggest that adolescents with ASD differ from neurotypical adolescents at later stages of processing the multimodal combination of visual narratives with auditory information. In conclusion, the semantic processing of verbal information, typically impaired in individuals with ASD, can be facilitated when that information is embedded in meaningful visual information.
30. Neto LP, Godoy IRB, Yamada AF, Carrete H, Jasinowodolinski D, Skaf A. Evaluation of Audiovisual Reports to Enhance Traditional Emergency Musculoskeletal Radiology Reports. J Digit Imaging 2021; 32:1081-1088. PMID: 31432299. DOI: 10.1007/s10278-019-00261-9.
Abstract
Traditional radiology reports are narrative texts that include a description of imaging findings. Recent implementation of advanced reporting software allows for incorporation of annotated key images and hyperlinks directly into text reports, but these tools usually do not substitute for in-person consultations with radiologists, especially in challenging cases. Use of on-demand audiovisual reports created with screen-capture software is an emerging technology, providing a more engaged imaging service. Our study evaluates a video reporting tool that utilizes PACS-integrated screen-capture software for musculoskeletal imaging studies in the emergency department. Our hypothesis was that referring orthopedic surgeons would find that recorded audiovisual reports add value to conventional reports, may increase engagement with radiology staff, and facilitate understanding of imaging findings in urgent musculoskeletal cases. Seven radiologists prepared a total of 47 audiovisual reports for 9 attending orthopedic surgeons from the emergency department. We administered two surveys to evaluate the experience of the referring physicians using audiovisual reports as complementary material to the conventional text report. Positive responses were statistically significant for most questions, including: whether the clinical suspicion was addressed in the video; willingness to use such technology in other cases; whether the audiovisual report made the imaging findings more understandable than the traditional report; and whether the audiovisual report was faster to understand than the traditional text report. Use of audiovisual reports in emergency musculoskeletal cases is a new approach to evaluating potentially challenging cases. These results support the potential of this technology to re-establish the radiologist's role as an essential member of patient care and to provide more engaging, precise, and personalized reports. Further studies could streamline these methods to minimize work redundancy with traditional text reporting or even evaluate acceptance of audiovisual-only radiology reports. Additionally, widespread adoption would require integration with the entire radiology workflow, including non-urgent cases and other medical specialties.
31.
Abstract
The present study examined the relationship between multisensory integration and the temporal binding window (TBW) for multisensory processing in adults with Autism spectrum disorder (ASD). The ASD group was less likely than the typically developing group to perceive an illusory flash induced by multisensory integration during a sound-induced flash illusion (SIFI) task. Although both groups showed comparable TBWs during the multisensory temporal order judgment task, correlation analyses and Bayes factors provided moderate evidence that the reduced SIFI susceptibility was associated with the narrow TBW in the ASD group. These results suggest that the individuals with ASD exhibited atypical multisensory integration and that individual differences in the efficacy of this process might be affected by the temporal processing of multisensory information.
32. de Boer MJ, Jürgens T, Cornelissen FW, Başkent D. Degraded visual and auditory input individually impair audiovisual emotion recognition from speech-like stimuli, but no evidence for an exacerbated effect from combined degradation. Vision Res 2020; 180:51-62. PMID: 33360918. DOI: 10.1016/j.visres.2020.12.002.
Abstract
Emotion recognition requires optimal integration of the multisensory signals from vision and hearing. A sensory loss in either or both modalities can lead to changes in integration and related perceptual strategies. To investigate potential acute effects of combined impairments due to sensory information loss alone, we degraded the visual and auditory information in audiovisual video recordings and presented these to a group of healthy young volunteers. These degradations were intended to approximate some aspects of vision and hearing impairment in simulation. Other aspects, related to advanced age, potential health issues, but also long-term adaptation and cognitive compensation strategies, were not included in the simulations. Besides accuracy of emotion recognition, eye movements were recorded to capture perceptual strategies. Our data show that emotion recognition performance decreases when degraded visual or auditory information is presented in isolation, but simultaneously degrading both modalities does not exacerbate these isolated effects. Moreover, degrading the visual information strongly impacts both recognition performance and viewing behavior. In contrast, degrading auditory information alongside normal or degraded video had little (additional) effect on performance or gaze. Nevertheless, our results hold promise for visually impaired individuals, because the addition of any audio to any video greatly facilitates performance, even though adding audio does not completely compensate for the negative effects of video degradation. Additionally, observers modified their viewing behavior for degraded video in order to maximize their performance. Therefore, optimizing the hearing of visually impaired individuals and teaching them such optimized viewing behavior could be worthwhile endeavors for improving emotion recognition.
33. Ainsworth K, Ostrolenk A, Irion C, Bertone A. Reduced multisensory facilitation exists at different periods of development in autism. Cortex 2020; 134:195-206. PMID: 33291045. DOI: 10.1016/j.cortex.2020.09.031.
Abstract
Atypical sensory processing is now recognised as a key component of an autism diagnosis. The integration of multiple sensory inputs (multisensory integration (MSI)) is thought to be idiosyncratic in autistic individuals and may have cascading effects on the development of higher-level skills such as social communication. Multisensory facilitation was assessed using a target detection paradigm in 45 autistic and 111 neurotypical individuals, matched on age and IQ. Target stimuli were: auditory (A; 3500 Hz tone), visual (V; white disk 'flash') or audiovisual (AV; simultaneous tone and flash), and were presented on a dark background in a randomized order with varying stimulus onset delays. Reaction time (RT) was recorded via button press. In order to assess possible developmental effects, participants were divided into younger (age 14 or younger) and older (age 15 and older) groups. Redundancy gain (RG) was significantly greater in neurotypical compared to autistic individuals. No significant effect of age or interaction was found. Race model analysis was used to compute a bound value that represented the facilitation effect provided by MSI. Our results revealed that MSI facilitation occurred (violation of the race model) in neurotypical individuals, with more efficient MSI in older participants. In both the younger and older autistic groups, we found reduced MSI facilitation (no or limited violation of the race model). Autistic participants showed reduced multisensory facilitation compared to neurotypical participants in a simple target detection task, devoid of social context. This remained consistent across age. Our results support evidence that autistic individuals may not integrate low-level, non-social information in a typical fashion, adding to the growing discussion around the influential effect that basic perceptual atypicalities may have on the development of higher-level, core aspects of autism.
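The race model analysis referred to here is conventionally Miller's (1982) inequality test. A minimal sketch under that assumption (empirical-CDF bound; simulated reaction times, not the authors' code or data):

```python
import numpy as np

def ecdf(rts, t):
    """Empirical probability of having responded by each time in t."""
    return np.mean(np.asarray(rts)[:, None] <= t, axis=0)

def race_model_test(rt_a, rt_v, rt_av, n_points=20):
    """Miller's race-model inequality: P(AV <= t) <= P(A <= t) + P(V <= t).
    Positive violation values indicate multisensory facilitation beyond
    what two independent unisensory 'racers' can produce."""
    lo = min(np.min(rt_a), np.min(rt_v), np.min(rt_av))
    hi = max(np.max(rt_a), np.max(rt_v), np.max(rt_av))
    t = np.linspace(lo, hi, n_points)
    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)  # Miller bound
    return t, ecdf(rt_av, t) - bound

# Hypothetical reaction times (ms): AV responses beat the race-model bound
rng = np.random.default_rng(0)
rt_a, rt_v = rng.normal(320, 40, 200), rng.normal(340, 40, 200)
rt_av = rng.normal(270, 35, 200)
t, violation = race_model_test(rt_a, rt_v, rt_av)
print(f"max violation: {violation.max():.3f}")  # > 0 suggests MSI facilitation
```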
34. Muller AM, Dalal TC, Stevenson RA. Schizotypal traits are not related to multisensory integration or audiovisual speech perception. Conscious Cogn 2020; 86:103030. PMID: 33120291. DOI: 10.1016/j.concog.2020.103030.
Abstract
Multisensory integration, the binding of sensory information from different sensory modalities, may contribute to perceptual symptomatology in schizophrenia, including hallucinations and aberrant speech perception. Differences in multisensory integration and temporal processing, an important component of multisensory integration, are consistently found in schizophrenia. Evidence is emerging that these differences extend across the schizophrenia spectrum, including individuals in the general population with higher schizotypal traits. In the current study, we investigated the relationship between schizotypal traits and perceptual functioning, using audiovisual speech-in-noise, McGurk, and ternary synchrony judgment tasks. We measured schizotypal traits using the Schizotypal Personality Questionnaire (SPQ), hypothesizing that higher scores on Unusual Perceptual Experiences and Odd Speech subscales would be associated with decreased multisensory integration, increased susceptibility to distracting auditory speech, and less precise temporal processing. Surprisingly, these measures were not associated with the predicted subscales, suggesting that these perceptual differences may not be present across the schizophrenia spectrum.
35. Magnotti JF, Dzeda KB, Wegner-Clemens K, Rennig J, Beauchamp MS. Weak observer-level correlation and strong stimulus-level correlation between the McGurk effect and audiovisual speech-in-noise: A causal inference explanation. Cortex 2020; 133:371-383. PMID: 33221701. DOI: 10.1016/j.cortex.2020.10.002.
Abstract
The McGurk effect is a widely used measure of multisensory integration during speech perception. Two observations have raised questions about the validity of the effect as a tool for understanding speech perception. First, there is high variability in perception of the McGurk effect across different stimuli and observers. Second, across observers there is low correlation between McGurk susceptibility and recognition of visual speech paired with auditory speech-in-noise, another common measure of multisensory integration. Using the framework of the causal inference of multisensory speech (CIMS) model, we explored the relationship between the McGurk effect, syllable perception, and sentence perception in seven experiments with a total of 296 different participants. Perceptual reports revealed a relationship between the efficacy of different McGurk stimuli created from the same talker and perception of the auditory component of the McGurk stimuli presented in isolation, both with and without added noise. The CIMS model explained this strong stimulus-level correlation using the principles of noisy sensory encoding followed by optimal cue combination within a common representational space across speech types. Because the McGurk effect (but not speech-in-noise) requires the resolution of conflicting cues between modalities, there is an additional source of individual variability that can explain the weak observer-level correlation between McGurk and noisy speech. Power calculations show that detecting this weak correlation requires studies with many more participants than those conducted to date. Perception of the McGurk effect and other types of speech can be explained by a common theoretical framework that includes causal inference, suggesting that the McGurk effect is a valid and useful experimental tool.
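Causal-inference models such as CIMS build on a Bayesian computation over whether two noisy cues share a common cause. A generic sketch of that computation (Körding et al., 2007 style; parameter names are assumptions and this is not the CIMS implementation):

```python
import numpy as np

def causal_inference(x_a, x_v, sig_a, sig_v, sig_p, p_common):
    """Combine noisy auditory (x_a) and visual (x_v) measurements under a
    Bayesian causal-inference model. Prior over stimuli is N(0, sig_p**2);
    p_common is the prior probability of a single shared cause."""
    va, vv, vp = sig_a**2, sig_v**2, sig_p**2
    # Likelihood of the measurement pair under one vs. two causes
    d1 = va * vv + va * vp + vv * vp
    like1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va) / d1) \
            / (2 * np.pi * np.sqrt(d1))
    like2 = np.exp(-0.5 * (x_a**2 / (va + vp) + x_v**2 / (vv + vp))) \
            / (2 * np.pi * np.sqrt((va + vp) * (vv + vp)))
    p_c1 = like1 * p_common / (like1 * p_common + like2 * (1 - p_common))
    # Reliability-weighted fusion (one cause) vs. segregated estimate (two causes)
    fused = (x_a / va + x_v / vv) / (1 / va + 1 / vv + 1 / vp)
    seg_a = (x_a / va) / (1 / va + 1 / vp)
    return p_c1 * fused + (1 - p_c1) * seg_a  # model-averaged auditory estimate

# Conflicting cues (as in McGurk stimuli) pull the estimate toward fusion
# only when the posterior probability of a common cause is high
print(causal_inference(x_a=1.0, x_v=-1.0, sig_a=0.5, sig_v=0.3, sig_p=2.0, p_common=0.5))
```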
36. Correlations Between Audiovisual Temporal Processing and Sensory Responsiveness in Adolescents with Autistic Traits. J Autism Dev Disord 2020; 51:2450-2460. PMID: 32978707. DOI: 10.1007/s10803-020-04724-9.
Abstract
Atypical sensory processing has recently gained much research interest as a key domain of autistic symptoms. Individuals with autism spectrum disorder (ASD) exhibit difficulties in processing the temporal aspects of sensory inputs, and show altered behavioural responses to sensory stimuli (i.e., sensory responsiveness). The present study examined the relation between sensory responsiveness (assessed by the Adult/Adolescent Sensory Profile) and audiovisual temporal integration (measured by unisensory temporal order judgement (TOJ) tasks and audiovisual simultaneity judgement (SJ) tasks) in typically developing adolescents (n = 94). We found that adolescents with higher levels of autistic traits exhibited more difficulties in separating visual stimuli in time (i.e., larger visual TOJ thresholds) and showed a stronger bias to perceive sound-leading audiovisual pairings as simultaneous. Regarding the associations between different measures of sensory function, reduced visual temporal acuity, but not auditory or multisensory temporal processing, was significantly correlated with more atypical patterns of sensory responsiveness. Furthermore, the positive correlation between visual TOJ thresholds and sensory avoidance was only found in adolescents with relatively high levels of autistic traits, but not in those with relatively low levels of autistic traits. These findings suggest that reduced visual temporal acuity may contribute to altered sensory experiences and may be linked to broader behavioural characteristics of ASD.
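TOJ thresholds and points of subjective simultaneity of the kind reported here are typically obtained by fitting a cumulative Gaussian to response proportions. A minimal sketch with hypothetical data (the study's exact fitting procedure is not specified in the abstract):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(soa, pss, sigma):
    """P('second stimulus first' response) as a function of SOA (ms)."""
    return norm.cdf(soa, loc=pss, scale=sigma)

# Hypothetical SOAs (ms) and response proportions from a visual TOJ task
soa = np.array([-200, -100, -50, 0, 50, 100, 200])
p_resp = np.array([0.04, 0.16, 0.34, 0.52, 0.71, 0.88, 0.97])

(pss, sigma), _ = curve_fit(cum_gauss, soa, p_resp, p0=[0.0, 50.0])
jnd = sigma * norm.ppf(0.75)  # 75%-correct threshold: temporal acuity
print(f"PSS = {pss:.1f} ms, TOJ threshold = {jnd:.1f} ms")
```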
37.
Abstract
Visual information from the face of an interlocutor complements auditory information from their voice, enhancing intelligibility. However, there are large individual differences in the ability to comprehend noisy audiovisual speech. Another axis of individual variability is the extent to which humans fixate the mouth or the eyes of a viewed face. We speculated that across a lifetime of face viewing, individuals who prefer to fixate the mouth of a viewed face might accumulate stronger associations between visual and auditory speech, resulting in improved comprehension of noisy audiovisual speech. To test this idea, we assessed interindividual variability in two tasks. Participants (n = 102) varied greatly in their ability to understand noisy audiovisual sentences (accuracy from 2-58%) and in the time they spent fixating the mouth of a talker enunciating clear audiovisual syllables (3-98% of total time). These two variables were positively correlated: a 10% increase in time spent fixating the mouth equated to a 5.6% increase in multisensory gain. This finding demonstrates an unexpected link, mediated by histories of visual exposure, between two fundamental human abilities: processing faces and understanding speech.
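A sketch of the fixation/comprehension relationship quantified above, using simulated values in place of the study data (the 0.56 slope is chosen only to mirror the reported 10%-to-5.6% relationship):

```python
import numpy as np

# Hypothetical per-participant values: % of time fixating the mouth and
# multisensory gain (%), loosely matching the ranges in the abstract
rng = np.random.default_rng(7)
mouth_pct = rng.uniform(3, 98, 102)
gain = 0.56 * mouth_pct + rng.normal(0, 10, 102)

slope, intercept = np.polyfit(mouth_pct, gain, 1)   # ordinary least squares
r = np.corrcoef(mouth_pct, gain)[0, 1]
print(f"+10% mouth fixation ~ {10 * slope:.1f}% gain (r = {r:.2f})")
```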
38. Ullas S, Formisano E, Eisner F, Cutler A. Audiovisual and lexical cues do not additively enhance perceptual adaptation. Psychon Bull Rev 2020; 27:707-715. PMID: 32319002. PMCID: PMC7398951. DOI: 10.3758/s13423-020-01728-5.
Abstract
When listeners experience difficulty in understanding a speaker, lexical and audiovisual (or lipreading) information can be a helpful source of guidance. These two types of information embedded in speech can also guide perceptual adjustment, also known as recalibration or perceptual retuning. With retuning or recalibration, listeners can use these contextual cues to temporarily or permanently reconfigure internal representations of phoneme categories to adjust to and understand novel interlocutors more easily. These two types of perceptual learning, previously investigated in large part separately, are highly similar in allowing listeners to use speech-external information to make phoneme boundary adjustments. This study explored whether the two sources may work in conjunction to induce adaptation, thus emulating real life, in which listeners are indeed likely to encounter both types of cue together. Listeners who received combined audiovisual and lexical cues showed perceptual learning effects similar to listeners who only received audiovisual cues, while listeners who received only lexical cues showed weaker effects compared with the two other groups. The combination of cues did not lead to additive retuning or recalibration effects, suggesting that lexical and audiovisual cues operate differently with regard to how listeners use them for reshaping perceptual categories. Reaction times did not significantly differ across the three conditions, so none of the forms of adjustment were either aided or hindered by processing time differences. Mechanisms underlying these forms of perceptual learning may diverge in numerous ways despite similarities in experimental applications.
39. Xu W, Kolozsvari OB, Oostenveld R, Hämäläinen JA. Rapid changes in brain activity during learning of grapheme-phoneme associations in adults. Neuroimage 2020; 220:117058. PMID: 32561476. DOI: 10.1016/j.neuroimage.2020.117058.
Abstract
Learning to associate written letters with speech sounds is crucial for the initial phase of acquiring reading skills. However, little is known about the cortical reorganization supporting letter-speech sound learning, particularly the brain dynamics during the learning of grapheme-phoneme associations. In the present study, we trained 30 Finnish participants (mean age: 24.33 years, SD: 3.50 years) to associate novel foreign letters with familiar Finnish speech sounds on two consecutive days (first day ~50 min; second day ~25 min), while neural activity was measured using magnetoencephalography (MEG). Two sets of audiovisual stimuli were used for the training, in which the grapheme-phoneme association in one set (Learnable) could be learned based on the different learning cues provided, but not in the other set (Control). The learning progress was tracked on a trial-by-trial basis and used to segment different learning stages for the MEG source analysis. The learning-related changes were examined by comparing the brain responses to Learnable and Control uni/multi-sensory stimuli, as well as the brain responses to learning cues at different learning stages over the two days. We found dynamic changes in brain responses related to multi-sensory processing when grapheme-phoneme associations were learned. Further, changes were observed in the brain responses to the novel letters during the learning process. We also found that some of these learning effects were observed only after memory consolidation the following day. Overall, the learning process modulated the activity in a large network of brain regions, including the superior temporal cortex and the dorsal (parietal) pathway. Most interestingly, middle- and inferior-temporal regions were engaged during multi-sensory memory encoding after the cross-modal relationship was extracted from the learning cues. Our findings highlight the brain dynamics and plasticity related to the learning of letter-speech sound associations and provide a more refined model of grapheme-phoneme learning in reading acquisition.
40. Proulx MJ, Brown DJ, Lloyd-Esenkaya T, Leveson JB, Todorov OS, Watson SH, de Sousa AA. Visual-to-auditory sensory substitution alters language asymmetry in both sighted novices and experienced visually impaired users. Appl Ergon 2020; 85:103072. PMID: 32174360. DOI: 10.1016/j.apergo.2020.103072.
Abstract
Visual-to-auditory sensory substitution devices (SSDs) provide improved access to the visual environment for the visually impaired by converting images into auditory information. Research is lacking on the mechanisms involved in processing data that is perceived through one sensory modality, but directly associated with a source in a different sensory modality. This is important because SSDs that use auditory displays could involve binaural presentation requiring both ear canals, or monaural presentation requiring only one - but which ear would be ideal? SSDs may be similar to reading, as an image (printed word) is converted into sound (when read aloud). Reading, and language more generally, are typically lateralised to the left cerebral hemisphere. Yet, unlike symbolic written language, SSDs convert images to sound based on visuospatial properties, with the right cerebral hemisphere potentially having a role in processing such visuospatial data. Here we investigated whether there is a hemispheric bias in the processing of visual-to-auditory sensory substitution information and whether that varies as a function of experience and visual ability. We assessed the lateralization of auditory processing with two tests: a standard dichotic listening test and a novel dichotic listening test created using the auditory information produced by an SSD, The vOICe. Participants were tested either in the lab or online with the same stimuli. We did not find a hemispheric bias in the processing of visual-to-auditory information in visually impaired, experienced vOICe users. Further, we did not find any difference between visually impaired, experienced vOICe users and sighted novices in the hemispheric lateralization of visual-to-auditory information processing. Although standard dichotic listening is lateralised to the left hemisphere, the auditory processing of images in SSDs is bilateral, possibly due to the increased influence of right hemisphere processing. Auditory SSDs might therefore be equally effective with presentation to either ear if a monaural, rather than binaural, presentation were necessary.
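Dichotic-listening lateralization is conventionally summarized with a laterality index. A minimal sketch of the standard formula (the paper's exact scoring is not given in the abstract, and the example counts are hypothetical):

```python
def laterality_index(right_ear_correct, left_ear_correct):
    """(R - L) / (R + L): positive values indicate a right-ear
    (left-hemisphere) advantage; values near zero indicate bilateral
    processing; negative values indicate a left-ear advantage."""
    total = right_ear_correct + left_ear_correct
    return (right_ear_correct - left_ear_correct) / total

print(laterality_index(62, 38))  # 0.24: left-lateralized pattern (speech-like)
print(laterality_index(51, 49))  # 0.02: bilateral pattern (vOICe-like)
```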
41. Broadbent H, Osborne T, Mareschal D, Kirkham N. Are two cues always better than one? The role of multiple intra-sensory cues compared to multi-cross-sensory cues in children's incidental category learning. Cognition 2020; 199:104202. PMID: 32087397. DOI: 10.1016/j.cognition.2020.104202.
Abstract
Simultaneous presentation of multisensory cues has been found to facilitate children's learning to a greater extent than unisensory cues (e.g., Broadbent, White, Mareschal, & Kirkham, 2017). Current research into children's multisensory learning, however, does not address whether these findings reflect multiple cross-sensory cues that enhance stimulus perception, or simply the presence of multiple cues, regardless of modality, that are informative about category membership. The current study examined the role of multiple cross-sensory cues (e.g., audio-visual) compared to multiple intra-sensory cues (e.g., two visual cues) in children's incidental category learning. On a computerized incidental category learning task, children aged six to ten years (N = 454) were allocated to either a visual-only (V: unisensory), auditory-only (A: unisensory), audio-visual (AV: multisensory), visual-visual (VV: multi-cue), or auditory-auditory (AA: multi-cue) condition. In children over eight years of age, the availability of two informative cues, regardless of whether they were presented across two different modalities or within the same modality, was found to be more beneficial to incidental learning than unisensory cues. In six-year-olds, however, the presence of multiple auditory cues (AA) did not facilitate learning to the same extent as multiple visual cues (VV) or cues presented across two different modalities (AV). The findings suggest that multiple sensory cues presented across or within modalities may have differential effects on children's incidental learning across middle childhood, depending on the sensory domain in which they are presented. Implications for the use of multi-cross-sensory and multiple-intra-sensory cues for children's learning across this age range are discussed.
42. La Rocca D, Ciuciu P, Engemann DA, van Wassenhove V. Emergence of β and γ networks following multisensory training. Neuroimage 2020; 206:116313. PMID: 31676416. PMCID: PMC7355235. DOI: 10.1016/j.neuroimage.2019.116313.
Abstract
Our perceptual reality relies on inferences about the causal structure of the world given by multiple sensory inputs. In ecological settings, multisensory events that cohere in time and space benefit inferential processes: hearing and seeing a speaker enhances speech comprehension, and the acoustic changes of flapping wings naturally pace the motion of a flock of birds. Here, we asked how a few minutes of (multi)sensory training could shape cortical interactions in a subsequent unisensory perceptual task. For this, we investigated oscillatory activity and functional connectivity as a function of individuals' sensory history during training. Human participants performed a visual motion coherence discrimination task while being recorded with magnetoencephalography. Three groups of participants performed the same task with visual stimuli only, while listening to acoustic textures temporally comodulated with the strength of visual motion coherence, or with auditory noise uncorrelated with visual motion. The functional connectivity patterns before and after training were contrasted to resting-state networks to assess the variability of common task-relevant networks, and the emergence of new functional interactions as a function of sensory history. One major finding is the emergence of large-scale synchronization in the high γ (gamma: 60-120 Hz) and β (beta: 15-30 Hz) bands for individuals who underwent comodulated multisensory training. The post-training network involved prefrontal, parietal, and visual cortices. Our results suggest that the integration of evidence and decision-making strategies become more efficient following congruent multisensory training through plasticity in network routing and oscillatory regimes.
43. Koticha P, Katge F, Shetty S, Patil DP. Effectiveness of Virtual Reality Eyeglasses as a Distraction Aid to Reduce Anxiety among 6-10-year-old Children Undergoing Dental Extraction Procedure. Int J Clin Pediatr Dent 2019; 12:297-302. PMID: 31866714. PMCID: PMC6898869. DOI: 10.5005/jp-journals-10005-1640.
Abstract
Introduction Distraction is a nonpharmacologic pain and anxiety management technique commonly used by pedodontists. Newer techniques use audiovisual stimulation to distract the patient by exposing him or her to three-dimensional videos; these are referred to as virtual reality audiovisual systems.
Objective To evaluate the effectiveness of virtual reality eyeglasses as a distraction aid to reduce anxiety in children undergoing a dental extraction procedure.
Materials and methods Thirty children aged 6–10 years with bilateral carious primary molars indicated for extraction were randomly selected, yielding 60 extractions (n = 60) divided into two groups of 30 each: group I (VR group; n = 30) and group II (control group; n = 30). Anxiety was measured using Venham's picture test, pulse rate, and oxygen saturation, and anxiety levels in groups I and II were compared using a paired "t" test.
Results The mean pulse rates after the extraction procedure were 107.833 ± 1.356 in group I and 108.4 ± 0.927 in group II; the intergroup difference was statistically significant (p = 0.03).
Conclusion Virtual reality used as a distraction technique improves the physiologic parameters of children aged 6–10 years but does not reduce the patients' self-reported anxiety according to the Venham's picture test used.
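A sketch of the paired comparison reported above, using simulated pulse-rate values (the means and SDs loosely follow the abstract; these are not the study data):

```python
import numpy as np
from scipy import stats

# Hypothetical paired readings: one control and one VR extraction session
# per child (n = 30 children), as implied by the bilateral/paired design
rng = np.random.default_rng(1)
control = rng.normal(108.4, 0.927, 30)        # pulse rate, control session
vr = control - rng.normal(0.57, 1.2, 30)      # slightly lower with VR glasses

t_stat, p_val = stats.ttest_rel(vr, control)  # paired "t" test
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")
```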
44. Jessen S, Fiedler L, Münte TF, Obleser J. Quantifying the individual auditory and visual brain response in 7-month-old infants watching a brief cartoon movie. Neuroimage 2019; 202:116060. PMID: 31362048. DOI: 10.1016/j.neuroimage.2019.116060.
Abstract
Electroencephalography (EEG) continues to be the most popular method to investigate cognitive brain mechanisms in young children and infants. Most infant studies rely on the well-established and easy-to-use event-related brain potential (ERP). As a severe disadvantage, ERP computation requires a large number of repetitions of items from the same stimulus category, compromising both ERPs' reliability and their ecological validity in infant research. Here we explore a way to investigate infants' continuous EEG responses to an ongoing, engaging signal (i.e., "neural tracking") using multivariate temporal response functions (mTRFs), an approach increasingly popular in adult EEG research. N = 52 infants watched a 5-min episode of an age-appropriate cartoon while the EEG signal was recorded. We estimated and validated forward encoding models of auditory-envelope and visual-motion features. We compared individual and group-based ('generic') models of the infant brain response to comparison data from N = 28 adults. The generic model yielded clearly defined response functions for both the auditory and the motion regressors. Importantly, this response profile was present also at the individual level, albeit with lower precision of the estimate but above-chance predictive accuracy for the modelled individual brain responses. In sum, we demonstrate that mTRFs are a feasible way of analyzing continuous EEG responses in infants. We observe robust response estimates both across and within participants from only 5 min of recorded EEG signal. Our results open ways for incorporating more engaging and more ecologically valid stimulus materials when probing cognitive, perceptual, and affective processes in infants and young children.
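A TRF is a linear filter mapping a stimulus feature to the EEG via time-lagged regression. A minimal single-feature sketch of this approach (the ridge parameter, lag range, and toy data are assumptions, not the authors' settings):

```python
import numpy as np

def lagged_design(stim, lags):
    """Design matrix whose column j is the stimulus shifted by lags[j] samples."""
    X = np.zeros((len(stim), len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stim[:len(stim) - lag]
        else:
            X[:lag, j] = stim[-lag:]
    return X

def fit_trf(stim, eeg, lags, lam=100.0):
    """Ridge-regression temporal response function for one feature and channel."""
    X = lagged_design(stim, lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)

# Hypothetical: auditory envelope -> one EEG channel, lags 0-490 ms at 100 Hz
rng = np.random.default_rng(2)
env = rng.random(3000)
eeg = np.convolve(env, [0.0, 0.5, 0.3, -0.2], mode="same") + rng.normal(0, 1, 3000)
trf = fit_trf(env, eeg, lags=np.arange(0, 50))  # recovers the response kernel
```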
45. MRI acoustic noise-modulated computer animations for patient distraction and entertainment with application in pediatric psychiatric patients. Magn Reson Imaging 2019; 61:16-19. PMID: 31078614. DOI: 10.1016/j.mri.2019.05.014.
Abstract
PURPOSE To reduce patient anxiety caused by the MRI scanner's acoustic noise.
MATERIAL AND METHODS We developed a simple and low-cost system for patient distraction using visual computer animations synchronized to the MRI scanner's acoustic noise during the exam. The system was implemented on a 3T MRI system and tested in 28 pediatric patients with bipolar disorder. The patients were randomized to receive noise-synchronized abstract animations in addition to music (n = 13, F/M = 6/7, age = 10.9 ± 2.5 years) or, as a control, music only (n = 15, F/M = 7/8, age = 11.6 ± 2.3 years). After completion of the scans, all subjects answered a questionnaire about their scan experience and the perceived scan duration.
RESULTS The scan duration with multisensory input (animations and music) was perceived to be ~15% shorter than in the control group (43 min vs. 50 min, P < 0.05). However, the overall scan experience was scored less favorably (3.9 vs. 4.6 in the control group, P < 0.04).
CONCLUSIONS This simple system provided patient distraction and entertainment, leading to shorter perceived scan times, but the abstract animations were not favored by this patient cohort.
46. Metrical congruency and kinematic familiarity facilitate temporal binding between musical and dance rhythms. Psychon Bull Rev 2019; 25:1416-1422. PMID: 29766450. DOI: 10.3758/s13423-018-1480-3.
Abstract
Although music and dance are often experienced simultaneously, it is unclear what modulates their perceptual integration. This study investigated how two factors related to music-dance correspondences influenced audiovisual binding of their rhythms: the metrical match between the music and dance, and the kinematic familiarity of the dance movement. Participants watched a point-light figure dancing synchronously to a triple-meter rhythm that they heard in parallel, whereby the dance communicated a triple (congruent) or a duple (incongruent) visual meter. The movement was either the participant's own or that of another participant. Participants attended to both streams while detecting a temporal perturbation in the auditory beat. The results showed lower sensitivity to the auditory deviant when the visual dance was metrically congruent to the auditory rhythm and when the movement was the participant's own. This indicated stronger audiovisual binding and a more coherent bimodal rhythm in these conditions, thus making a slight auditory deviant less noticeable. Moreover, binding in the metrically incongruent condition involving self-generated visual stimuli was correlated with self-recognition of the movement, suggesting that action simulation mediates the perceived coherence between one's own movement and a mismatching auditory rhythm. Overall, the mechanisms of rhythm perception and action simulation could inform the perceived compatibility between music and dance, thus modulating the temporal integration of these audiovisual stimuli.
47. Feldman JI, Kuang W, Conrad JG, Tu A, Santapuram P, Simon DM, Foss-Feig JH, Kwakye LD, Stevenson RA, Wallace MT, Woynaroski TG. Brief Report: Differences in Multisensory Integration Covary with Sensory Responsiveness in Children with and without Autism Spectrum Disorder. J Autism Dev Disord 2019; 49:397-403. PMID: 30043353. DOI: 10.1007/s10803-018-3667-x.
Abstract
Research shows that children with autism spectrum disorder (ASD) differ in their behavioral patterns of responding to sensory stimuli (i.e., sensory responsiveness) and in various other aspects of sensory functioning relative to typical peers. This study explored relations between measures of sensory responsiveness and multisensory speech perception and integration in children with and without ASD. Participants were 8- to 17-year-old children, 18 with ASD and 18 matched typically developing controls. Participants completed a psychophysical speech perception task, and parents reported on children's sensory responsiveness. Psychophysical measures (e.g., audiovisual accuracy, temporal binding window) were associated with patterns of sensory responsiveness (e.g., hyporesponsiveness, sensory seeking). Results indicate that differences in multisensory speech perception and integration covary with atypical patterns of sensory responsiveness.
48. Lunn J, Sjoblom A, Ward J, Soto-Faraco S, Forster S. Multisensory enhancement of attention depends on whether you are already paying attention. Cognition 2019; 187:38-49. PMID: 30825813. DOI: 10.1016/j.cognition.2019.02.008.
Abstract
Multisensory stimuli are argued to capture attention more effectively than unisensory stimuli due to their ability to elicit a super-additive neuronal response. However, behavioural evidence for enhanced multisensory attentional capture is mixed. Furthermore, the notion of multisensory enhancement of attention conflicts with findings suggesting that multisensory integration may itself be dependent upon top-down attention. The present research resolves this discrepancy by examining how both endogenous attentional settings and the availability of attentional capacity modulate capture by multisensory stimuli. Across a series of four studies, two measures of attentional capture were used which vary in their reliance on endogenous attention: facilitation and distraction. Perceptual load was additionally manipulated to determine whether multisensory stimuli are still able to capture attention when attention is occupied by a demanding primary task. Multisensory stimuli presented as search targets were consistently detected faster than unisensory stimuli regardless of perceptual load, although they are nevertheless subject to load modulation. In contrast, task irrelevant multisensory stimuli did not cause greater distraction than unisensory stimuli, suggesting that the enhanced attentional status of multisensory stimuli may be mediated by the availability of endogenous attention. Implications for multisensory alerts in practical settings such as driving and aviation are discussed, namely that these may be advantageous during demanding tasks, but may be less suitable to signaling unexpected events.
49. Barutchu A, Sahu A, Humphreys GW, Spence C. Multisensory processing in event-based prospective memory. Acta Psychol (Amst) 2019; 192:23-30. PMID: 30391627. DOI: 10.1016/j.actpsy.2018.10.015.
Abstract
Failures in prospective memory (PM) - that is, the failure to remember intended future actions - can have adverse consequences. It is therefore important to study those processes that may help to minimize such cognitive failures. Although multisensory integration has been shown to enhance a wide variety of behaviors, including perception, learning, and memory, its effect on prospective memory, in particular, is largely unknown. In the present study, we investigated the effects of multisensory processing on two simultaneously-performed memory tasks: An ongoing 2- or 3-back working memory (WM) task (20% target ratio), and a PM task in which the participants had to respond to a rare predefined letter (8% target ratio). For PM trials, multisensory enhancement was observed for congruent multisensory signals; however, this effect did not generalize to the ongoing WM task. Participants were less likely to make errors for PM than for WM trials, thus suggesting that they may have biased their attention toward the PM task. Multisensory advantages on memory tasks, such as PM and WM, may be dependent on how attention resources are allocated across dual tasks.
50. Ju A, Orchard-Mills E, van der Burg E, Alais D. Rapid Audiovisual Temporal Recalibration Generalises Across Spatial Location. Multisens Res 2019; 32:215-234. PMID: 31071679. DOI: 10.1163/22134808-20191176.
Abstract
Recent exposure to asynchronous multisensory signals has been shown to shift perceived timing between the sensory modalities, a phenomenon known as 'temporal recalibration'. Recently, Van der Burg et al. (2013, J Neurosci, 33, pp. 14633-14637) reported results showing that recalibration to asynchronous audiovisual events can happen extremely rapidly. In an extended series of variously asynchronous trials, simultaneity judgements were analysed based on the modality order in the preceding trial and showed that shifts in the point of subjective synchrony occurred almost instantaneously, shifting from one trial to the next. Here we replicate the finding that shifts in perceived timing occur following exposure to a single, asynchronous audiovisual stimulus and by manipulating the spatial location of the audiovisual events we demonstrate that recalibration occurs even when the adapting stimulus is presented in a different location. Timing shifts were also observed when the adapting audiovisual pair were defined only by temporal proximity, with the auditory component presented over headphones rather than being collocated with the visual stimulus. Combined with previous findings showing that timing shifts are independent of stimulus features such as colour and pitch, our finding that recalibration is not spatially specific provides strong evidence for a rapid recalibration process that is solely dependent on recent temporal information, regardless of feature or location. These rapid and automatic shifts in perceived synchrony may allow our sensory systems to flexibly adjust to the variation in timing of neural signals occurring as a result of delayed environmental transmission and differing neural latencies for processing vision and audition.
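The trial-to-trial analysis described, conditioning each simultaneity judgement on the preceding trial's modality order, can be sketched as follows; the file name, column names, and the crude PSS proxy are all assumptions, not the authors' pipeline:

```python
import numpy as np
import pandas as pd

# Assumed columns: soa (ms; negative = audio leading) and simult (1 = judged
# synchronous). Trials are split by the modality order of the previous trial.
df = pd.read_csv("sj_trials.csv")
df["prev_order"] = np.where(df["soa"].shift(1) < 0, "audio-first", "vision-first")
df = df.iloc[1:]  # the first trial has no predecessor

# Crude PSS proxy: SOA weighted by 'simultaneous' responses; a full analysis
# would instead fit a Gaussian simultaneity curve to each subset.
pss = df.groupby("prev_order").apply(
    lambda g: np.average(g["soa"], weights=g["simult"]))
print(pss)  # the PSS shift between the two subsets indexes rapid recalibration
```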