1
Orioli G, Dragovic D, Farroni T. Perception of visual and audiovisual trajectories toward and away from the body in the first postnatal year. J Exp Child Psychol 2024; 243:105921. PMID: 38615600. DOI: 10.1016/j.jecp.2024.105921.
Abstract
Perceiving motion in depth is important in everyday life, especially motion in relation to the body. Visual and auditory cues inform us about motion in space when presented in isolation from each other, but the most comprehensive information is obtained through the combination of both of these cues. We traced the development of infants' ability to discriminate between visual motion trajectories across peripersonal space and to match these with auditory cues specifying the same peripersonal motion. We measured 5-month-old (n = 20) and 9-month-old (n = 20) infants' visual preferences for visual motion toward or away from their body (presented simultaneously and side by side) across three conditions: (a) visual displays presented alone, (b) paired with a sound increasing in intensity, and (c) paired with a sound decreasing in intensity. Both groups preferred approaching motion in the visual-only condition. When the visual displays were paired with a sound increasing in intensity, neither group showed a visual preference. When a sound decreasing in intensity was played instead, the 5-month-olds preferred the receding (spatiotemporally congruent) visual stimulus, whereas the 9-month-olds preferred the approaching (spatiotemporally incongruent) visual stimulus. We speculate that in the approaching sound condition, the behavioral salience of the sound could have led infants to focus on the auditory information alone, in order to prepare a motor response, and to neglect the visual stimuli. In the receding sound condition, instead, the difference in response patterns between the two groups may have been driven by infants' emerging motor abilities and their developing predictive processing mechanisms, which support and influence each other.
Affiliation(s)
- Giulia Orioli: Centre for Developmental Science, School of Psychology, University of Birmingham, Birmingham B15 2SB, UK; Department of Developmental Psychology and Socialization, University of Padova, 35131 Padova, Italy.
- Danica Dragovic: Paediatric Unit, Hospital of Monfalcone, 34074 Monfalcone, Italy.
- Teresa Farroni: Department of Developmental Psychology and Socialization, University of Padova, 35131 Padova, Italy.
2
Bruns P, Thun C, Röder B. Quantifying accuracy and precision from continuous response data in studies of spatial perception and crossmodal recalibration. Behav Res Methods 2024. PMID: 38684625. DOI: 10.3758/s13428-024-02416-1.
Abstract
The ability to detect the absolute location of sensory stimuli can be quantified with either error-based metrics derived from single-trial localization errors or regression-based metrics derived from a linear regression of localization responses on the true stimulus locations. Here we tested the agreement between these two approaches in estimating accuracy and precision in a large sample of 188 subjects who localized auditory stimuli from different azimuthal locations. A subsample of 57 subjects was subsequently exposed to audiovisual stimuli with a consistent spatial disparity before performing the sound localization test again, allowing us to additionally test which of the different metrics best assessed correlations between the amount of crossmodal spatial recalibration and baseline localization performance. First, our findings support a distinction between accuracy and precision. Localization accuracy was mainly reflected in the overall spatial bias and was moderately correlated with precision metrics. However, in our data, the variability of single-trial localization errors (variable error in error-based metrics) and the amount by which the eccentricity of target locations was overestimated (slope in regression-based metrics) were highly correlated, suggesting that intercorrelations between individual metrics need to be carefully considered in spatial perception studies. Second, exposure to spatially discrepant audiovisual stimuli resulted in a shift in bias toward the side of the visual stimuli (ventriloquism aftereffect) but did not affect localization precision. The size of the aftereffect shift in bias was at least partly explainable by unspecific test repetition effects, highlighting the need to account for inter-individual baseline differences in studies of spatial learning.
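To make the two metric families contrasted in this abstract concrete, here is a minimal Python sketch (not the authors' analysis code; all data values are invented) of how error-based and regression-based metrics can be computed from single-trial localization responses:

```python
import numpy as np

# Illustrative single-trial data: true azimuths (deg) and responses (deg).
targets = np.array([-30, -15, 0, 15, 30] * 20, dtype=float)
rng = np.random.default_rng(0)
responses = 1.2 * targets + 3.0 + rng.normal(0, 4.0, targets.size)

# Error-based metrics from single-trial localization errors.
errors = responses - targets
constant_error = errors.mean()        # overall spatial bias (accuracy)
variable_error = errors.std(ddof=1)   # trial-to-trial variability (precision)

# Regression-based metrics from a linear fit of responses on targets.
# A slope > 1 indicates overestimation of target eccentricity;
# the intercept reflects a lateral response bias.
slope, intercept = np.polyfit(targets, responses, 1)

print(f"constant error = {constant_error:.2f} deg")
print(f"variable error = {variable_error:.2f} deg")
print(f"slope = {slope:.2f}, intercept = {intercept:.2f} deg")
```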
Affiliation(s)
- Patrick Bruns: Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany.
- Caroline Thun: Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany.
- Brigitte Röder: Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany.
3
O'Donohue M, Lacherez P, Yamamoto N. Audiovisual spatial ventriloquism is reduced in musicians. Hear Res 2023; 440:108918. PMID: 37992516. DOI: 10.1016/j.heares.2023.108918.
Abstract
There is great scientific and public interest in claims that musical training improves general cognitive and perceptual abilities. While this is controversial, recent and rather convincing evidence suggests that musical training refines the temporal integration of auditory and visual stimuli at a general level. We investigated whether musical training also affects integration in the spatial domain, via an auditory localisation experiment that measured ventriloquism (where localisation is biased towards visual stimuli on audiovisual trials) and recalibration (a unimodal localisation aftereffect). While musicians (n = 22) and non-musicians (n = 22) did not have significantly different unimodal precision or accuracy, musicians were significantly less susceptible than non-musicians to ventriloquism, with large effect sizes. We replicated these results in another experiment with an independent sample of 24 musicians and 21 non-musicians. Across both experiments, spatial recalibration did not significantly differ between the groups even though musicians resisted ventriloquism. Our results suggest that the multisensory expertise afforded by musical training refines spatial integration, a process that underpins multisensory perception.
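The ventriloquism effect measured here is commonly quantified as the shift of auditory localization responses toward the visual stimulus, expressed as a fraction of the audiovisual disparity. Below is a minimal sketch of that standard index with invented values; it is not necessarily the exact measure used in the study:

```python
import numpy as np

# Illustrative trial data from a bimodal localization task (degrees azimuth).
auditory_loc = np.array([0.0, 10.0, -10.0, 5.0])   # true sound positions
visual_offset = np.array([8.0, -8.0, 8.0, -8.0])   # AV spatial disparity
responses = np.array([4.1, 6.3, -5.8, 1.0])        # pointing responses

# Ventriloquism: localization shift toward the visual stimulus, as a
# fraction of the disparity (0 = no capture, 1 = complete visual capture).
shift = responses - auditory_loc
ventriloquism = shift / visual_offset
print(ventriloquism.mean())
```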
Affiliation(s)
- Matthew O'Donohue: Queensland University of Technology (QUT), School of Psychology and Counselling, Kelvin Grove, QLD 4059, Australia.
- Philippe Lacherez: Queensland University of Technology (QUT), School of Psychology and Counselling, Kelvin Grove, QLD 4059, Australia.
- Naohide Yamamoto: Queensland University of Technology (QUT), School of Psychology and Counselling, Kelvin Grove, QLD 4059, Australia; Queensland University of Technology (QUT), Centre for Vision and Eye Research, Kelvin Grove, QLD 4059, Australia.
4
Ahn E, Majumdar A, Lee T, Brang D. Evidence for a Causal Dissociation of the McGurk Effect and Congruent Audiovisual Speech Perception via TMS. bioRxiv 2023. PMID: 38077093. PMCID: PMC10705272. DOI: 10.1101/2023.11.27.568892.
Abstract
Congruent visual speech improves speech perception accuracy, particularly in noisy environments. Conversely, mismatched visual speech can alter what is heard, leading to an illusory percept known as the McGurk effect. This illusion has been widely used to study audiovisual speech integration, illustrating that auditory and visual cues are combined in the brain to generate a single coherent percept. While prior transcranial magnetic stimulation (TMS) and neuroimaging studies have identified the left posterior superior temporal sulcus (pSTS) as a causal region involved in the generation of the McGurk effect, it remains unclear whether this region is critical only for this illusion or also for the more general benefits of congruent visual speech (e.g., increased accuracy and faster reaction times). Indeed, recent correlative research suggests that the benefits of congruent visual speech and the McGurk effect reflect largely independent mechanisms. To better understand how these different features of audiovisual integration are causally generated by the left pSTS, we used single-pulse TMS to temporarily impair processing while subjects were presented with either incongruent (McGurk) or congruent audiovisual combinations. Consistent with past research, we observed that TMS to the left pSTS significantly reduced the strength of the McGurk effect. Importantly, however, left pSTS stimulation did not affect the positive benefits of congruent audiovisual speech (increased accuracy and faster reaction times), demonstrating a causal dissociation between the two processes. Our results are consistent with models proposing that the pSTS is but one of multiple critical areas supporting audiovisual speech interactions. Moreover, these data add to a growing body of evidence suggesting that the McGurk effect is an imperfect surrogate measure for more general and ecologically valid audiovisual speech behaviors.
Affiliation(s)
- EunSeon Ahn: Department of Psychology, University of Michigan, Ann Arbor, MI 48109.
- Areti Majumdar: Department of Psychology, University of Michigan, Ann Arbor, MI 48109.
- Taraz Lee: Department of Psychology, University of Michigan, Ann Arbor, MI 48109.
- David Brang: Department of Psychology, University of Michigan, Ann Arbor, MI 48109.
5
Lembke SA. Distinguishing between straight and curved sounds: Auditory shape in pitch, loudness, and tempo gestures. Atten Percept Psychophys 2023; 85:2751-2773. PMID: 37721687. PMCID: PMC10600048. DOI: 10.3758/s13414-023-02764-8.
Abstract
Sound-based trajectories or sound gestures draw links to spatiokinetic processes. For instance, a gliding, decreasing pitch conveys an analogous downward motion or fall. Whereas the gesture's pitch orientation and range convey its meaning and magnitude, respectively, the way in which pitch changes over time can be conceived of as gesture shape, which to date has rarely been studied in isolation. This article reports on an experiment that studied the perception of shape in uni-directional pitch, loudness, and tempo gestures, each assessed for four physical scalings. Gestures could increase or decrease over time and comprised different frequency and sound level ranges, durations, and different scaling contexts. Using a crossmodal-matching task, participants could reliably distinguish between pitch and loudness gestures and relate them to analogous visual line segments. Scalings based on equivalent-rectangular bandwidth (ERB) rate for pitch and raw signal amplitude for loudness were matched closest to a straight line, whereas other scalings led to perceptions of exponential or logarithmic curvatures. The investigated tempo gestures, by contrast, did not yield reliable differences. The reliable, robust perception of gesture shape for pitch and loudness has implications for various sound-design applications, especially those cases that rely on crossmodal mappings, e.g., visual analysis or control interfaces like audio waveforms or spectrograms. Given its perceptual relevance, auditory shape appears to be an integral part of sound gestures, while illustrating how crossmodal correspondences can underpin auditory perception.
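The ERB-rate scaling referred to here follows the standard Glasberg and Moore (1990) formula. A minimal sketch of constructing a pitch glide that is linear in ERB rate (the gesture's frequency range and step count are illustrative, not taken from the study):

```python
import numpy as np

def erb_rate(f_hz):
    """ERB-rate (Cam) scale of Glasberg & Moore (1990): the number of
    equivalent rectangular bandwidths below the frequency f_hz."""
    return 21.4 * np.log10(1.0 + 0.00437 * f_hz)

# A pitch glide that is linear in ERB rate tends to be matched to a
# "straight" trajectory; construct such a glide between two frequencies.
f_start, f_end, n = 200.0, 1600.0, 100            # illustrative gesture range
cams = np.linspace(erb_rate(f_start), erb_rate(f_end), n)
freqs = (10.0 ** (cams / 21.4) - 1.0) / 0.00437   # invert the ERB-rate scale
```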
Affiliation(s)
- Sven-Amin Lembke: Cambridge School of Creative Industries, Anglia Ruskin University, Cambridge, UK.
6
Czepiel A, Fink LK, Seibert C, Scharinger M, Kotz SA. Aesthetic and physiological effects of naturalistic multimodal music listening. Cognition 2023; 239:105537. PMID: 37487303. DOI: 10.1016/j.cognition.2023.105537.
Abstract
Compared to audio only (AO) conditions, audiovisual (AV) information can enhance the aesthetic experience of a music performance. However, such beneficial multimodal effects have yet to be studied in naturalistic music performance settings. Further, peripheral physiological correlates of aesthetic experiences are not well-understood. Here, participants were invited to a concert hall for piano performances of Bach, Messiaen, and Beethoven, which were presented in two conditions: AV and AO. They rated their aesthetic experience (AE) after each piece (Experiment 1 and 2), while peripheral signals (cardiorespiratory measures, skin conductance, and facial muscle activity) were continuously measured (Experiment 2). Factor scores of AE were significantly higher in the AV condition in both experiments. The LF/HF ratio, a heart rhythm measure that represents activation of the sympathetic nervous system, was higher in the AO condition, suggesting increased arousal, likely caused by less predictable sound onsets in the AO condition. We present partial evidence that breathing was faster and facial muscle activity was higher in the AV condition, suggesting that observing a performer's movements likely enhances motor mimicry in these more voluntary peripheral measures. Further, zygomaticus ('smiling') muscle activity was a significant predictor of AE. Thus, we suggest physiological measures are related to AE, but at different levels: the more involuntary measures (i.e., heart rhythms) may reflect more sensory aspects, while the more voluntary measures (i.e., muscular control of breathing and facial responses) may reflect the liking aspect of an AE. In summary, we replicate and extend previous findings that AV information enhances AE in a naturalistic music performance setting. We further show that a combination of self-report and peripheral measures benefits a meaningful assessment of AE in naturalistic music performance settings.
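The LF/HF ratio used here is conventionally computed from the power spectrum of the interbeat-interval (RR) series, with LF = 0.04-0.15 Hz and HF = 0.15-0.40 Hz. A minimal sketch of that standard computation (the resampling rate and Welch settings are assumptions, not the authors' pipeline):

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid
from scipy.interpolate import interp1d

def lf_hf_ratio(rr_s):
    """LF/HF ratio from a series of RR intervals in seconds.
    Standard bands: LF = 0.04-0.15 Hz, HF = 0.15-0.40 Hz."""
    beat_times = np.cumsum(rr_s)
    fs = 4.0                                  # resampling rate in Hz (assumed)
    grid = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
    tachogram = interp1d(beat_times, rr_s, kind="cubic")(grid)
    f, pxx = welch(tachogram - tachogram.mean(), fs=fs,
                   nperseg=min(256, grid.size))
    lf = (f >= 0.04) & (f < 0.15)
    hf = (f >= 0.15) & (f < 0.40)
    return trapezoid(pxx[lf], f[lf]) / trapezoid(pxx[hf], f[hf])
```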
Affiliation(s)
- Anna Czepiel: Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands.
- Lauren K Fink: Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Max Planck-NYU Center for Language, Music, and Emotion, Frankfurt am Main, Germany.
- Christoph Seibert: Institute for Music Informatics and Musicology, University of Music Karlsruhe, Karlsruhe, Germany.
- Mathias Scharinger: Research Group Phonetics, Department of German Linguistics, University of Marburg, Marburg, Germany; Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany.
- Sonja A Kotz: Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
7
Daşdöğen Ü, Awan SN, Bottalico P, Iglesias A, Getchell N, Abbott KV. The Influence of Multisensory Input on Voice Perception and Production Using Immersive Virtual Reality. J Voice 2023. PMID: 37739864. DOI: 10.1016/j.jvoice.2023.07.026.
Abstract
OBJECTIVES The purpose was to examine the influence of auditory vs visual vs combined audiovisual input on perception and production of one's own voice, using immersive virtual reality technology. METHODS Thirty-one vocally healthy men and women were investigated under 18 sensory input conditions, using immersive virtual reality technology. Conditions included two auditory rooms with varying reverberation times, two visual rooms with varying volumes, and the combination of audiovisual conditions. All conditions were repeated with and without background noise. Speech tasks included counting, sustained vowel phonation, an all-voiced sentence from the Consensus Auditory-Perceptual Evaluation of Voice, and the first sentence from the Rainbow Passage, randomly ordered. Perception outcome measures were participants' self-reported perceptions of their vocal loudness, vocal effort, and vocal comfort in speech. Production outcome measures were sound pressure level (SPL) and spectral moments (spectral mean and standard deviation in Hz, skewness, and kurtosis). Statistical analyses used self-reported vocal effort, vocal loudness, and vocal comfort in percent (0 = "not at all", 100 = "extremely"), SPL in dB, and spectral moments in Hz. The reference level was a baseline audiovisual deprivation condition. RESULTS Results suggested (i) increased self-perceived vocal loudness and effort, and decreased comfort, with increasing room volume, speaker-to-listener distance, audiovisual input, and background noise, and (ii) increased SPL and fluctuations in spectral moments across conditions. CONCLUSIONS Not only auditory, but also visual and audiovisual input influenced voice perception and production in ways that have not been previously documented. Findings contribute to the basic science understanding of the role of visual, audiovisual, and auditory input in voice perception and production, and also to models of voice training and therapy. The findings also set the foundation for the use of virtual reality in voice and speech training, as a potentially powerful solution to the generalization problem.
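The spectral moments used as production outcomes here are the moments of the magnitude spectrum treated as a probability distribution. A minimal sketch of that standard computation (the windowing choice is an assumption, not the authors' exact pipeline):

```python
import numpy as np

def spectral_moments(signal, fs):
    """First four spectral moments of a voice signal's magnitude spectrum:
    spectral mean and standard deviation (Hz), skewness, and kurtosis."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    p = spec / spec.sum()                 # treat the spectrum as a pdf
    mean = np.sum(freqs * p)
    sd = np.sqrt(np.sum(((freqs - mean) ** 2) * p))
    skew = np.sum(((freqs - mean) ** 3) * p) / sd ** 3
    kurt = np.sum(((freqs - mean) ** 4) * p) / sd ** 4
    return mean, sd, skew, kurt
```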
Affiliation(s)
- Ümit Daşdöğen: Mount Sinai Health System, Department of Otolaryngology, New York, NY.
- Shaheen N Awan: University of Central Florida, Communication Sciences and Disorders, Orlando, FL.
- Pasquale Bottalico: University of Illinois Urbana-Champaign, Department of Speech and Hearing Science, Champaign, IL.
- Aquiles Iglesias: University of Delaware, Communication Sciences and Disorders, Newark, DE.
- Nancy Getchell: University of Delaware, Kinesiology & Applied Physiology, Newark, DE.
- Katherine Verdolini Abbott: Mount Sinai Health System, Department of Otolaryngology, New York, NY; University of Illinois Urbana-Champaign, Department of Speech and Hearing Science, Champaign, IL.
8
Pulliam G, Feldman JI, Woynaroski TG. Audiovisual multisensory integration in individuals with reading and language impairments: A systematic review and meta-analysis. Neurosci Biobehav Rev 2023; 149:105130. PMID: 36933815. PMCID: PMC10243286. DOI: 10.1016/j.neubiorev.2023.105130.
Abstract
Differences in sensory function have been documented for a number of neurodevelopmental conditions, including reading and language impairments. Prior studies have measured audiovisual multisensory integration (i.e., the ability to combine inputs from the auditory and visual modalities) in these populations. The present study sought to systematically review and quantitatively synthesize the extant literature on audiovisual multisensory integration in individuals with reading and language impairments. A comprehensive search strategy yielded 56 reports, of which 38 were used to extract 109 group difference and 68 correlational effect sizes. There was an overall difference between individuals with reading and language impairments and comparisons on audiovisual integration. There was a nonsignificant trend towards moderation according to sample type (i.e., reading versus language) and publication/small study bias for this model. Overall, there was a small but nonsignificant correlation between metrics of audiovisual integration and reading or language ability; this model was not moderated by sample or study characteristics, nor was there evidence of publication/small study bias. Limitations and future directions for primary and meta-analytic research are discussed.
Affiliation(s)
- Grace Pulliam: Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S, MCE South Tower 8310, Nashville 37232, TN, USA.
- Jacob I Feldman: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S, MCE South Tower 8310, Nashville 37232, TN, USA; Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA.
- Tiffany G Woynaroski: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S, MCE South Tower 8310, Nashville 37232, TN, USA; Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; John A. Burns School of Medicine, University of Hawaii, Manoa, HI, USA.
9
Malouka S, Loria T, Crainic V, Thaut MH, Tremblay L. Auditory cueing facilitates temporospatial accuracy of sequential movements. Hum Mov Sci 2023; 89:103087. PMID: 37060619. DOI: 10.1016/j.humov.2023.103087.
Abstract
Effectively executing goal-directed behaviours requires both temporal and spatial accuracy. Previous work has shown that providing auditory cues enhances the timing of upper-limb movements. Interestingly, alternate work has shown beneficial effects of multisensory cueing (i.e., combined audiovisual) on temporospatial motor control. As a result, it is not clear whether adding visual to auditory cues can enhance the temporospatial control of sequential upper-limb movements specifically. The present study utilized a sequential pointing task to investigate the effects of auditory, visual, and audiovisual cueing on temporospatial errors. Eighteen participants performed pointing movements to five targets representing short, intermediate, and large movement amplitudes. Five isochronous auditory, visual, or audiovisual priming cues were provided to specify an equal movement duration for all amplitudes prior to movement onset. Movement time errors were then computed as the difference between actual and predicted movement times specified by the sensory cues, yielding delta movement time errors (ΔMTE). It was hypothesized that auditory-based (i.e., auditory and audiovisual) cueing would yield lower movement time errors compared to visual cueing. The results showed that providing auditory relative to visual priming cues alone reduced ΔMTE particularly for intermediate amplitude movements. The results further highlighted the beneficial impact of unimodal auditory cueing for improving visuomotor control in the absence of significant effects for the multisensory audiovisual condition.
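A minimal sketch of the ΔMTE measure as described above: the difference between each actual movement time and the movement duration specified by the isochronous priming cues. The 600-ms cue interval and the measured times below are assumed examples, not values from the study:

```python
import numpy as np

cue_interval = 0.600                       # s between priming cues (assumed)
predicted_mt = cue_interval                # equal duration for all amplitudes
actual_mt = np.array([0.58, 0.66, 0.71])   # measured movement times (s)

delta_mte = actual_mt - predicted_mt
print(delta_mte)  # negative = faster than cued, positive = slower than cued
```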
Affiliation(s)
- Selina Malouka: Faculty of Kinesiology and Physical Education, University of Toronto, Toronto, Canada.
- Tristan Loria: Music and Health Research Collaboratory (MaHRC), Faculty of Music, University of Toronto, Toronto, Canada.
- Valentin Crainic: Faculty of Kinesiology and Physical Education, University of Toronto, Toronto, Canada.
- Michael H Thaut: Music and Health Research Collaboratory (MaHRC), Faculty of Music, University of Toronto, Toronto, Canada.
- Luc Tremblay: Faculty of Kinesiology and Physical Education, University of Toronto, Toronto, Canada.
10
Conner BC, Fang Y, Lerner ZF. Under pressure: design and validation of a pressure-sensitive insole for ankle plantar flexion biofeedback during neuromuscular gait training. J Neuroeng Rehabil 2022; 19:135. PMID: 36482447. PMCID: PMC9732996. DOI: 10.1186/s12984-022-01119-y.
Abstract
BACKGROUND Electromyography (EMG)-based audiovisual biofeedback systems, developed and tested in research settings to train neuromuscular control in patient populations such as cerebral palsy (CP), have inherent implementation obstacles that may limit their translation to clinical practice. The purpose of this study was to design and validate an alternative, plantar pressure-based biofeedback system for improving ankle plantar flexor recruitment during walking in individuals with CP. METHODS Eight individuals with CP (11-18 years old) were recruited to test both an EMG-based and a plantar pressure-based biofeedback system while walking. Ankle plantar flexor muscle recruitment, co-contraction at the ankle, and lower limb kinematics were compared between the two systems and relative to baseline walking. RESULTS Relative to baseline walking, both biofeedback systems yielded significant increases in mean soleus activation (43-58%, p < 0.05), and in mean (68-70%, p < 0.05) and peak (71-82%, p < 0.05) medial gastrocnemius activation, with no differences between the two systems and strong relationships for all primary outcome variables (R = 0.89-0.94). Ankle co-contraction significantly increased relative to baseline only with the EMG-based system (52%, p = 0.03). CONCLUSION These findings support future research on functional training with this simple, low-cost biofeedback modality.
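The abstract does not detail the feedback logic itself. As one hedged illustration of how a plantar-pressure biofeedback trigger could work, the sketch below rewards steps whose peak forefoot pressure exceeds a target threshold; all names, units, and thresholds are hypothetical, not the authors' design:

```python
import numpy as np

def step_feedback(forefoot_pressure, threshold):
    """Return True (deliver positive feedback) if the peak forefoot
    pressure recorded during one stance phase exceeds the target.
    forefoot_pressure: insole samples (kPa) for one step (hypothetical).
    threshold: target peak pressure (kPa), e.g., set from baseline gait."""
    return np.max(forefoot_pressure) >= threshold

# Hypothetical usage: threshold set 20% above the baseline mean peak.
baseline_peaks = np.array([180.0, 195.0, 172.0])   # kPa, made up
threshold = 1.2 * baseline_peaks.mean()
step = np.array([150.0, 210.0, 240.0, 190.0])      # one step's samples
print(step_feedback(step, threshold))
```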
Affiliation(s)
- Benjamin C. Conner: College of Medicine–Phoenix, University of Arizona, Phoenix, AZ, USA.
- Ying Fang: Department of Mechanical Engineering, Northern Arizona University, 15600 S McConnell Drive, NAU EGR Bldg 69, Flagstaff, AZ 86011, USA.
- Zachary F. Lerner: College of Medicine–Phoenix, University of Arizona, Phoenix, AZ, USA; Department of Mechanical Engineering, Northern Arizona University, 15600 S McConnell Drive, NAU EGR Bldg 69, Flagstaff, AZ 86011, USA.
11
Tachmatzidou O, Paraskevoudi N, Vatakis A. Exposure to multisensory and visual static or moving stimuli enhances processing of nonoptimal visual rhythms. Atten Percept Psychophys 2022. PMID: 36241841. DOI: 10.3758/s13414-022-02569-1.
Abstract
Research has shown that visual moving and multisensory stimuli can efficiently mediate rhythmic information. It is possible, therefore, that the previously reported auditory dominance in rhythm perception is due to the use of nonoptimal visual stimuli. Yet it remains unknown whether exposure to multisensory or visual-moving rhythms would benefit the processing of rhythms consisting of nonoptimal static visual stimuli. Using a perceptual learning paradigm, we tested whether the visual component of the multisensory training pair can affect processing of metric simple two integer-ratio nonoptimal visual rhythms. Participants were trained with static (AVstat), moving-inanimate (AVinan), or moving-animate (AVan) visual stimuli along with auditory tones and a regular beat. In the pre- and posttraining tasks, participants responded whether two static-visual rhythms differed or not. Results showed improved posttraining performance for all training groups irrespective of the type of visual stimulation. To assess whether this benefit was auditory driven, we introduced visual-only training with a moving-inanimate (Vinan) or static (Vstat) stimulus and a regular beat. Comparisons between Vinan and Vstat showed that, even in the absence of auditory information, training with visual-only moving or static stimuli resulted in an enhanced posttraining performance. Overall, our findings suggest that audiovisual and visual static or moving training can benefit processing of nonoptimal visual rhythms.
12
Abstract
Hearing a task-irrelevant sound during object encoding can improve visual recognition memory when the sound is object-congruent (e.g., a dog and a bark). However, previous studies have only used binary old/new memory tests, which do not distinguish between recognition based on the recollection of details about the studied event or stimulus familiarity. In the present research, we hypothesized that hearing a task-irrelevant but semantically congruent natural sound at encoding would facilitate the formation of richer memory representations, resulting in increased recollection of details of the encoded event. Experiment 1 replicates previous studies showing that participants were more confident about their memory for items that were initially encoded with a congruent sound compared to an incongruent sound. Experiment 2 suggests that congruent object-sound pairings specifically facilitate recollection and not familiarity-based recognition memory, and Experiment 3 demonstrates that this effect was coupled with more accurate memory for audiovisual congruency of the item and sound from encoding rather than another aspect of the episode. These results suggest that even when congruent sounds are task-irrelevant, they promote formation of multisensory memories and subsequent recollection-based retention. Given the ubiquity of encounters with multisensory objects in our everyday lives, considering their impact on episodic memory is integral to building models of memory that apply to naturalistic settings.
13
Drijvers L, Holler J. The multimodal facilitation effect in human communication. Psychon Bull Rev 2022. PMID: 36138282. DOI: 10.3758/s13423-022-02178-x.
Abstract
During face-to-face communication, recipients need to rapidly integrate a plethora of auditory and visual signals. This integration of signals from many different bodily articulators, all offset in time, with the information in the speech stream may either tax the cognitive system, thus slowing down language processing, or may result in multimodal facilitation. Using the classical shadowing paradigm, participants shadowed speech from face-to-face, naturalistic dyadic conversations in an audiovisual context, an audiovisual context without visual speech (e.g., lips), and an audio-only context. Our results provide evidence of a multimodal facilitation effect in human communication: participants were faster in shadowing words when seeing multimodal messages compared with when hearing only audio. Also, the more visual context was present, the fewer shadowing errors were made, and the earlier in time participants shadowed predicted lexical items. We propose that the multimodal facilitation effect may contribute to the ease of fast face-to-face conversational interaction.
14
Zhou P, Zong S, Xi X, Xiao H. Effect of wearing personal protective equipment on acoustic characteristics and speech perception during COVID-19. Appl Acoust 2022; 197:108940. PMID: 35892074. PMCID: PMC9304077. DOI: 10.1016/j.apacoust.2022.108940.
Abstract
With the COVID-19 pandemic, the usage of personal protective equipment (PPE) has become 'the new normal'. Both surgical masks and N95 masks with a face shield are widely used in healthcare settings to reduce virus transmission, but the use of these masks has a negative impact on speech perception. Therefore, transparent masks are recommended to solve this dilemma. However, there is a lack of quantitative studies regarding the effect of PPE on speech perception. This study aims to compare the effect on speech perception of different types of PPE (surgical masks, N95 masks with face shield and transparent masks) in healthcare settings, for listeners with normal hearing in the audiovisual or auditory-only modality. The Bamford-Kowal-Bench (BKB)-like Mandarin speech stimuli were digitally recorded by a G.R.A.S KEMAR manikin without and with masks (surgical masks, N95 masks with face shield and transparent masks). Two variants of video display were created (with or without visual cues) and tagged to the corresponding audio recordings. The speech recording and video were presented to listeners simultaneously in each of four conditions: unattenuated speech with visual cues (no mask); surgical mask attenuated speech without visual cues; N95 mask with face shield attenuated speech without visual cues; and transparent mask attenuated speech with visual cues. The signal-to-noise ratio for 50% correct scores (SNR50) threshold in noise was measured for each condition in the presence of four-talker babble. Twenty-four subjects completed the experiment. Acoustic spectra obtained from all types of masks were primarily attenuated at high frequencies, beyond 3 kHz, but to different extents. The mean SNR50 thresholds of the two auditory-only conditions (surgical mask and N95 mask with face shield) were higher than those of the audiovisual conditions (no mask and transparent mask). SNR50 thresholds in the surgical-mask conditions were significantly lower than those for the N95 masks with face shield. No significant difference was observed between the two audiovisual conditions. The results confirm that wearing a surgical mask or an N95 mask with face shield has a negative impact on speech perception. However, wearing a transparent mask improved speech perception to a similar level as the unmasked condition for young normal-hearing listeners.
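SNR50 denotes the signal-to-noise ratio yielding 50% correct responses. One common way to estimate it is to fit a logistic psychometric function to percent-correct scores measured at several fixed SNRs; the sketch below uses invented data and is not necessarily the (possibly adaptive) procedure used in the study:

```python
import numpy as np
from scipy.optimize import curve_fit

snr = np.array([-12.0, -9.0, -6.0, -3.0, 0.0])         # dB SNR (made up)
pct_correct = np.array([8.0, 22.0, 55.0, 81.0, 95.0])  # percent keywords

def logistic(x, mid, slope):
    """Logistic psychometric function scaled to 0-100% correct."""
    return 100.0 / (1.0 + np.exp(-slope * (x - mid)))

(mid, slope), _ = curve_fit(logistic, snr, pct_correct, p0=(-6.0, 1.0))
print(f"SNR50 = {mid:.1f} dB")  # the fit's midpoint is the SNR50 threshold
```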
Affiliation(s)
- Peng Zhou: Department of Otorhinolaryngology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China.
- Shimin Zong: Department of Otorhinolaryngology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China.
- Xin Xi: Senior Department of Otolaryngology - Head & Neck Surgery, The Sixth Medical Center of PLA General Hospital, Beijing, China.
- Hongjun Xiao: Department of Otorhinolaryngology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China.
15
Michail G, Senkowski D, Holtkamp M, Wächter B, Keil J. Early beta oscillations in multisensory association areas underlie crossmodal performance enhancement. Neuroimage 2022; 257:119307. PMID: 35577024. DOI: 10.1016/j.neuroimage.2022.119307.
Abstract
The combination of signals from different sensory modalities can enhance perception and facilitate behavioral responses. While previous research described crossmodal influences in a wide range of tasks, it remains unclear how such influences drive performance enhancements. In particular, the neural mechanisms underlying performance-relevant crossmodal influences, as well as the latency and spatial profile of such influences are not well understood. Here, we examined data from high-density electroencephalography (N = 30) recordings to characterize the oscillatory signatures of crossmodal facilitation of response speed, as manifested in the speeding of visual responses by concurrent task-irrelevant auditory information. Using a data-driven analysis approach, we found that individual gains in response speed correlated with larger beta power difference (13-25 Hz) between the audiovisual and the visual condition, starting within 80 ms after stimulus onset in the secondary visual cortex and in multisensory association areas in the parietal cortex. In addition, we examined data from electrocorticography (ECoG) recordings in four epileptic patients in a comparable paradigm. These ECoG data revealed reduced beta power in audiovisual compared with visual trials in the superior temporal gyrus (STG). Collectively, our data suggest that the crossmodal facilitation of response speed is associated with reduced early beta power in multisensory association and secondary visual areas. The reduced early beta power may reflect an auditory-driven feedback signal to improve visual processing through attentional gating. These findings improve our understanding of the neural mechanisms underlying crossmodal response speed facilitation and highlight the critical role of beta oscillations in mediating behaviorally relevant multisensory processing.
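The condition-wise beta-power comparison at the core of this analysis can be sketched as band-limited (13-25 Hz) spectral power per condition, related to each subject's response-speed gain. The code below is a schematic of that logic with assumed variable names; it is not the authors' source-space pipeline:

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import pearsonr

def beta_band_power(epochs, fs, band=(13.0, 25.0)):
    """Mean 13-25 Hz power across trials for one condition.
    epochs: array of shape (n_trials, n_samples) from one sensor/source."""
    f, pxx = welch(epochs, fs=fs, nperseg=epochs.shape[-1], axis=-1)
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[:, mask].mean()

# Schematic per-subject analysis (all names below are assumptions):
# beta_diff = beta_band_power(av_epochs, fs) - beta_band_power(v_epochs, fs)
# gain = median_rt_visual - median_rt_audiovisual   # response-speed gain
# r, p = pearsonr(beta_diffs, gains)                # across subjects
```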
Affiliation(s)
- Georgios Michail: Department of Psychiatry and Psychotherapy, Charité Campus Mitte (CCM), Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charitéplatz 1, Berlin 10117, Germany.
- Daniel Senkowski: Department of Psychiatry and Psychotherapy, Charité Campus Mitte (CCM), Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charitéplatz 1, Berlin 10117, Germany.
- Martin Holtkamp: Epilepsy-Center Berlin-Brandenburg, Institute for Diagnostics of Epilepsy, Berlin 10365, Germany; Department of Neurology, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charité Campus Mitte (CCM), Charitéplatz 1, Berlin 10117, Germany.
- Bettina Wächter: Epilepsy-Center Berlin-Brandenburg, Institute for Diagnostics of Epilepsy, Berlin 10365, Germany.
- Julian Keil: Biological Psychology, Christian-Albrechts-University Kiel, Kiel 24118, Germany.
16
Brang D, Plass J, Sherman A, Stacey WC, Wasade VS, Grabowecky M, Ahn E, Towle VL, Tao JX, Wu S, Issa NP, Suzuki S. Visual cortex responds to sound onset and offset during passive listening. J Neurophysiol 2022; 127:1547-1563. PMID: 35507478. DOI: 10.1152/jn.00164.2021.
Abstract
Sounds enhance our ability to detect, localize, and respond to co-occurring visual targets. Research suggests that sounds improve visual processing by resetting the phase of ongoing oscillations in visual cortex. However, it remains unclear what information is relayed from the auditory system to visual areas and if sounds modulate visual activity even in the absence of visual stimuli (e.g., during passive listening). Using intracranial electroencephalography (iEEG) in humans, we examined the sensitivity of visual cortex to three forms of auditory information during a passive listening task: auditory onset responses, auditory offset responses, and rhythmic entrainment to sounds. Because some auditory neurons respond to both sound onsets and offsets, visual timing and duration processing may benefit from each. Additionally, if auditory entrainment information is relayed to visual cortex, it could support the processing of complex stimulus dynamics that are aligned between auditory and visual stimuli. Results demonstrate that in visual cortex, amplitude-modulated sounds elicited transient onset and offset responses in multiple areas, but no entrainment to sound modulation frequencies. These findings suggest that activity in visual cortex (as measured with iEEG in response to auditory stimuli) may not be affected by temporally fine-grained auditory stimulus dynamics during passive listening (though it remains possible that this signal may be observable with simultaneous auditory-visual stimuli). Moreover, auditory responses were maximal in low-level visual cortex, potentially implicating a direct pathway for rapid interactions between auditory and visual cortices. This mechanism may facilitate perception by time-locking visual computations to environmental events marked by auditory discontinuities.
Affiliation(s)
- David Brang: Department of Psychology, University of Michigan, Ann Arbor, MI, United States.
- John Plass: Department of Psychology, University of Michigan, Ann Arbor, MI, United States.
- Aleksandra Sherman: Department of Cognitive Science, Occidental College, Los Angeles, CA, United States.
- William C Stacey: Department of Neurology, University of Michigan, Ann Arbor, MI, United States.
- Marcia Grabowecky: Department of Psychology, Northwestern University, Evanston, IL, United States.
- EunSeon Ahn: Department of Psychology, University of Michigan, Ann Arbor, MI, United States.
- Vernon L Towle: Department of Neurology, The University of Chicago, Chicago, IL, United States.
- James X Tao: Department of Neurology, The University of Chicago, Chicago, IL, United States.
- Shasha Wu: Department of Neurology, The University of Chicago, Chicago, IL, United States.
- Naoum P Issa: Department of Neurology, The University of Chicago, Chicago, IL, United States.
- Satoru Suzuki: Department of Psychology, Northwestern University, Evanston, IL, United States.
17
Wegner-Clemens K, Malcolm GL, Shomstein S. How much is a cow like a meow? A novel database of human judgements of audiovisual semantic relatedness. Atten Percept Psychophys 2022; 84:1317-1327. PMID: 35449432. DOI: 10.3758/s13414-022-02488-1.
Abstract
Semantic information about objects, events, and scenes influences how humans perceive, interact with, and navigate the world. The semantic information about any object or event can be highly complex and frequently draws on multiple sensory modalities, which makes it difficult to quantify. Past studies have primarily relied on either a simplified binary classification of semantic relatedness based on category or on algorithmic values based on text corpora rather than human perceptual experience and judgement. With the aim to further accelerate research into multisensory semantics, we created a constrained audiovisual stimulus set and derived similarity ratings between items within three categories (animals, instruments, household items). A set of 140 participants provided similarity judgements between sounds and images. Participants either heard a sound (e.g., a meow) and judged which of two pictures of objects (e.g., a picture of a dog and a duck) it was more similar to, or saw a picture (e.g., a picture of a duck) and selected which of two sounds it was more similar to (e.g., a bark or a meow). Judgements were then used to calculate similarity values of any given cross-modal pair. An additional 140 participants provided word judgements used to calculate the similarity of word-word pairs. The derived and reported similarity judgements reflect a range of semantic similarities across three categories and items, and highlight similarities and differences among similarity judgements between modalities. We make the derived similarity values available in a database format to the research community to be used as a measure of semantic relatedness in cognitive psychology experiments, enabling more robust studies of semantics in audiovisual environments.
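One simple way to turn such two-alternative judgements into pairwise similarity values is the proportion of trials on which each image-sound pairing was chosen as more similar. The sketch below illustrates that approach with made-up counts; the database's actual derivation may differ:

```python
import numpy as np

# chosen[i, j]: times image i was judged more similar to sound j;
# shown[i, j]: times the (i, j) pairing appeared as a response option.
chosen = np.array([[18, 2], [5, 15]], dtype=float)   # made-up counts
shown = np.array([[20, 20], [20, 20]], dtype=float)

similarity = chosen / shown   # choice rate as a crossmodal similarity value
print(similarity)             # e.g., dog/duck images vs. bark/meow sounds
```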
18
Shimokawa K, Matsumoto K, Yokota H, Kobayashi E, Hirano Y, Masuda Y, Uno T. Anxiety relaxation during MRI with a patient-friendly audiovisual system. Radiography (Lond) 2022; 28:725-731. PMID: 35428571. DOI: 10.1016/j.radi.2022.03.013.
Abstract
INTRODUCTION Many patients experience anxiety, not limited to claustrophobia, before magnetic resonance imaging (MRI) examination. We performed a non-randomized controlled trial to evaluate whether a patient-friendly audiovisual (AV) system in the MR scanner room reduces patient anxiety. METHODS We randomly selected 61 participants from outpatients who required brain MRI examination. Patients were informed that they could choose to undergo an MRI examination with a patient-friendly AV system (Ambient Experience, Philips Healthcare, Best, The Netherlands) or the standard system. To complete the MRI examination without affecting clinical practice, all patients who preferred the patient-friendly AV system were assigned to the preferring AV group. Patients who indicated that either system was acceptable were randomly assigned to the no preference but allocated AV group or the control (standard system) group. In both groups, state anxiety was assessed with the State-Trait Anxiety Inventory (STAI) before and after the MRI examination (A-State-before and A-State-after MRI, respectively). The changes from A-State-before to A-State-after MRI were categorized as follows: relieved high-state anxiety, no change in high-state anxiety, stable easiness, and intensified anxiety. RESULTS Among the 61 included patients, 19 were assigned to the preferring AV group, 20 to the no preference but allocated AV group, and 22 to the control group. There were no significant differences between the groups overall. However, among patients with high-state anxiety before MRI, the preferring AV group and the no preference but allocated AV group, both of which used the patient-friendly AV system, showed relief of high-state anxiety in 63.6% (7 of 11 patients) and 81.8% (9 of 11 patients) of cases, respectively. In contrast, the control group using the standard system showed relief of high-state anxiety in only 42.9% (3 of 7 patients) of cases. CONCLUSION The patient-friendly AV system may reduce anxiety in patients undergoing MRI examinations. IMPLICATIONS FOR PRACTICE The patient-friendly AV system may reduce anxiety in patients undergoing MRI examination by providing a more patient-centered MRI examination environment. These findings may help ameliorate negative perceptions associated with MRI examination.
Affiliation(s)
- K Shimokawa: Department of Radiology, Chiba University Hospital, 1-8-1 Inohana, Chuo-ku, Chiba-shi, Chiba 260-8677, Japan.
- K Matsumoto: Department of Radiology, Chiba University Hospital, 1-8-1 Inohana, Chuo-ku, Chiba-shi, Chiba 260-8677, Japan.
- H Yokota: Diagnostic Radiology and Radiation Oncology, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba-shi, Chiba 260-8670, Japan.
- E Kobayashi: Department of Neurosurgery, National Hospital Organization Chiba Medical Center, 4-1-2 Tsubakihara, Chuo-ku, Chiba-shi, Chiba 260-8606, Japan.
- Y Hirano: Research Center for Child Mental Development, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba-shi, Chiba 260-0856, Japan.
- Y Masuda: Department of Radiology, Chiba University Hospital, 1-8-1 Inohana, Chuo-ku, Chiba-shi, Chiba 260-8677, Japan.
- T Uno: Diagnostic Radiology and Radiation Oncology, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba-shi, Chiba 260-8670, Japan.
19
Abstract
Increasing research has revealed that uninformative spatial sounds facilitate the early processing of visual stimuli. This study examined the crossmodal interactions of semantically congruent stimuli by assessing whether the presentation of event-related characteristic sounds facilitated or interfered with the visual search for corresponding event scenes in pictures. The search array consisted of four images: one target and three non-target pictures. Auditory stimuli were presented to participants in synchronization with picture onset using three types of sounds: a sound congruent with a target, a sound congruent with a distractor, or a control sound. The control sound varied across six experiments, alternating between a sound unrelated to the search stimuli, white noise, and no sound. Participants were required to swiftly localize a target position while ignoring the sound presentation. Visual localization resulted in rapid responses when a sound that was semantically related to the target was played. Furthermore, when a sound was semantically related to a distractor picture, the response times were longer. When the distractor-congruent sound was used, participants incorrectly localized the distractor position more often than at the chance level. These findings were replicated when the experiments ruled out the possibility that participants would learn picture-sound pairs during the visual tasks (i.e., the possibility of brief training during the experiments). Overall, event-related crossmodal interactions occur based on semantic representations, and audiovisual associations may develop as a result of long-term experiences rather than brief training in a laboratory.
20
Audet DJ, Gray WO, Brown AD. Audiovisual training rapidly reduces potentially hazardous perceptual errors caused by earplugs. Hear Res 2022; 414:108394. PMID: 34911017. PMCID: PMC8761180. DOI: 10.1016/j.heares.2021.108394.
Abstract
Our ears capture sound from all directions but do not encode directional information explicitly. Instead, subtle acoustic features associated with unique sound source locations must be learned through experience. Surprisingly, aspects of this mapping process remain highly plastic throughout adulthood: Adult human listeners can accommodate acutely modified acoustic inputs ("new ears") over a period of a few weeks to recover near-normal sound localization, and this process can be accelerated with explicit training. Here we evaluated the extent of such plasticity given only transient exposure to distorted inputs. Distortions were produced via earplugs, which severely degrade sound localization performance, constraining their usability in real-world settings that require accurate directional hearing. Localization was measured over a period of ten weeks. Provision of feedback via simple paired auditory and visual stimuli led to a rapid decrease in the occurrence of large errors (responses >|±30°| from target) despite only once-weekly exposure to the altered inputs. Moreover, training effects generalized to untrained sound source locations. Lesser but qualitatively similar improvements were observed in a group of subjects that did not receive explicit feedback. In total, data demonstrate that even transient exposure to altered spatial acoustic information is sufficient for meaningful perceptual improvement (i.e., chronic exposure is not required), offering insight into the nature and time course of perceptual learning in the context of spatial hearing. Data also suggest that the large and potentially hazardous errors in localization caused by earplugs can be mitigated with appropriate training, offering a practical means to increase their usability.
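A minimal sketch of the large-error measure described above (the proportion of responses falling more than 30° from the target azimuth), using invented data:

```python
import numpy as np

# Illustrative localization data (degrees azimuth); values are made up.
targets = np.array([-60.0, -30.0, 0.0, 30.0, 60.0] * 10)
responses = targets + np.random.default_rng(1).normal(0, 25.0, targets.size)

# Large errors: responses more than 30 degrees from the target.
large_error_rate = np.mean(np.abs(responses - targets) > 30.0)
print(f"{100 * large_error_rate:.1f}% of responses were large errors")
```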
Affiliation(s)
- David J Audet: Department of Speech and Hearing Sciences, University of Washington, Seattle, WA 98105, United States.
- William O Gray: Department of Speech and Hearing Sciences, University of Washington, Seattle, WA 98105, United States.
- Andrew D Brown: Department of Speech and Hearing Sciences, University of Washington, Seattle, WA 98105, United States; Virginia Merrill Bloedel Hearing Research Center, University of Washington, Seattle, WA 98195, United States.
21
Hirst RJ, Setti A, De Looze C, Kenny RA, Newell FN. Multisensory integration precision is associated with better cognitive performance over time in older adults: A large-scale exploratory study. Aging Brain 2022; 2:100038. PMID: 36908873. PMCID: PMC9997173. DOI: 10.1016/j.nbas.2022.100038.
Abstract
Age-related sensory decline impacts cognitive performance and exposes individuals to a greater risk of cognitive decline. Integration across the senses also changes with age, yet the link between multisensory perception and cognitive ageing is poorly understood. We explored the relationship between multisensory integration and cognitive function in 2875 adults aged 50+ from The Irish Longitudinal Study on Ageing. Multisensory integration was assessed at several audio-visual temporal asynchronies using the Sound Induced Flash Illusion (SIFI). More precise integration (i.e., less illusion susceptibility with larger temporal asynchronies) was cross-sectionally associated with faster Choice Response Times and Colour Trail Task performance, and fewer errors on the Sustained Attention to Response Task. We then used k-means clustering to identify groups with different 10-year cognitive trajectories on measures available longitudinally; delayed recall, immediate recall and verbal fluency. Across measures, groups with consistently higher performance trajectories had more precise multisensory integration. These findings support broad links between multisensory integration and several cognitive measures, including processing speed, attention and memory, rather than association with any specific subdomain.
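The trajectory-clustering step can be sketched with a standard k-means implementation; the data below are simulated placeholders, and the three-cluster solution is an assumption rather than the grouping reported in the study:

```python
import numpy as np
from sklearn.cluster import KMeans

# rows = participants, columns = repeated waves of one cognitive measure
# (e.g., delayed recall); real values would come from the longitudinal data.
trajectories = np.random.default_rng(2).normal(10, 2, size=(2875, 5))
trajectories += np.linspace(0, -1.5, 5)    # add a mean decline across waves

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(trajectories)  # trajectory group per participant
# Multisensory integration precision can then be compared across `labels`.
```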
Collapse
Affiliation(s)
- Rebecca J. Hirst
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Ireland
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Ireland
- Corresponding author at: Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland.
| | - Annalisa Setti
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Ireland
- School of Applied Psychology, University College Cork, Ireland
| | - Céline De Looze
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Ireland
| | - Rose Anne Kenny
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Ireland
- Mercer Institute for Successful Ageing, St. James Hospital, Dublin, Ireland
| | - Fiona N. Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Ireland
| |
Collapse
|
22
|
Dias JW, McClaskey CM, Harris KC. Early auditory cortical processing predicts auditory speech in noise identification and lipreading. Neuropsychologia 2021; 161:108012. [PMID: 34474065 DOI: 10.1016/j.neuropsychologia.2021.108012] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Revised: 08/20/2021] [Accepted: 08/26/2021] [Indexed: 10/20/2022]
Abstract
Individuals typically exhibit better cross-sensory perception following unisensory loss, demonstrating improved perception of information available from the remaining senses and increased cross-sensory use of neural resources. Even individuals with no sensory loss will exhibit such changes in cross-sensory processing following temporary sensory deprivation, suggesting that the brain's capacity for recruiting cross-sensory sources to compensate for degraded unisensory input is a general characteristic of the perceptual process. Many studies have investigated how auditory and visual neural structures respond to within- and cross-sensory input. However, little attention has been given to how general auditory and visual neural processing relates to within- and cross-sensory perception. The current investigation examines the extent to which individual differences in general auditory neural processing account for variability in auditory, visual, and audiovisual speech perception in a sample of young healthy adults. Auditory neural processing was assessed using a simple click stimulus. We found that individuals with a smaller P1 peak amplitude in their auditory-evoked potential (AEP) had more difficulty identifying speech sounds in difficult listening conditions but were better lipreaders. The results suggest that individual differences in the auditory neural processing of healthy adults can account for variability in the perception of information available from the auditory and visual modalities, similar to the cross-sensory perceptual compensation observed in individuals with sensory loss.
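A peak measure such as the click-evoked P1 amplitude reduces to a windowed maximum on the averaged waveform. A minimal sketch on a toy waveform; the sampling rate, search window, and data are all hypothetical:

```python
# Sketch: extracting a P1 peak amplitude from a hypothetical averaged AEP.
import numpy as np

fs = 1000                                     # Hz, hypothetical sampling rate
t = np.arange(-0.1, 0.4, 1 / fs)              # epoch time axis in seconds
aep = np.exp(-((t - 0.05) ** 2) / (2 * 0.01 ** 2))   # toy waveform peaking ~50 ms

window = (t >= 0.03) & (t <= 0.08)            # typical P1 search window
p1_amp = aep[window].max()
p1_lat = t[window][aep[window].argmax()]
print(f"P1 amplitude={p1_amp:.2f}, latency={p1_lat * 1000:.0f} ms")
```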
Collapse
Affiliation(s)
- James W Dias
- Medical University of South Carolina, United States.
| | | | | |
Collapse
|
23
|
Borgolte A, Roy M, Sinke C, Wiswede D, Stephan M, Bleich S, Münte TF, Szycik GR. Enhanced attentional processing during speech perception in adult high-functioning autism spectrum disorder: An ERP-study. Neuropsychologia 2021; 161:108022. [PMID: 34530026 DOI: 10.1016/j.neuropsychologia.2021.108022] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2020] [Revised: 08/17/2021] [Accepted: 09/09/2021] [Indexed: 12/16/2022]
Abstract
Deficits in audiovisual speech perception have consistently been detected in patients with Autism Spectrum Disorder (ASD). Especially for patients with high-functioning ASD, it remains uncertain whether these deficits and their underlying neural mechanisms persist into adulthood. Research indicates differences in audiovisual speech processing between ASD and healthy controls (HC) in the auditory cortex. The temporal dynamics of these differences still need to be characterized. Thus, in the present study we examined 14 adult subjects with high-functioning ASD and 15 adult HC while they viewed visual (lip movements) and auditory (voice) speech information that was either superimposed by white noise (condition 1) or not (condition 2). Subjects' performance was quantified by measuring stimulus comprehension. In addition, event-related brain potentials (ERPs) were recorded. Results demonstrated worse speech comprehension for ASD subjects compared to HC under noisy conditions. Moreover, ERP analysis revealed significantly higher P2 amplitudes over parietal electrodes for ASD subjects compared to HC.
Collapse
Affiliation(s)
- Anna Borgolte
- Dept. of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany.
| | - Mandy Roy
- Asklepios, Psychiatric Hospital Ochsenzoll, Hamburg, Germany
| | - Christopher Sinke
- Dept. of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
| | - Daniel Wiswede
- Dept. of Neurology, University of Lübeck, Lübeck, Germany
| | - Michael Stephan
- Dept. of Psychosomatic Medicine and Psychotherapy, Hannover Medical School, Hanover, Germany
| | - Stefan Bleich
- Dept. of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany; Center of Systems Neuroscience, Hanover, Germany
| | - Thomas F Münte
- Dept. of Neurology, University of Lübeck, Lübeck, Germany
| | - Gregor R Szycik
- Dept. of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
| |
Collapse
|
24
|
Galazka MA, Hadjikhani N, Sundqvist M, Åsberg Johnels J. Facial speech processing in children with and without dyslexia. Ann Dyslexia 2021; 71:501-524. [PMID: 34115279 PMCID: PMC8458188 DOI: 10.1007/s11881-021-00231-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/03/2021] [Accepted: 05/10/2021] [Indexed: 06/04/2023]
Abstract
What role does the presence of facial speech play for children with dyslexia? Current literature proposes two distinct claims. One claim states that children with dyslexia make less use of visual information from the mouth during speech processing due to a deficit in recruitment of audiovisual areas. An opposing claim suggests that children with dyslexia are in fact reliant on such information in order to compensate for auditory/phonological impairments. The current paper aims to directly test these contrasting hypotheses (here referred to as "mouth insensitivity" versus "mouth reliance") in school-age children with and without dyslexia, matched on age and listening comprehension. Using eye tracking, in Study 1, we examined how children look at the mouth across conditions varying in speech processing demands. The results did not indicate significant group differences in looking at the mouth. However, correlation analyses suggest potentially important distinctions within the dyslexia group: those children with dyslexia who are better readers attended more to the mouth when presented with a person's face in a phonologically demanding condition. In Study 2, we examined whether the presence of facial speech cues is functionally beneficial when a child is encoding written words. The results indicated a lack of overall group differences on the task, although those with less severe reading problems in the dyslexia group were more accurate when reading words that were presented with articulatory facial speech cues. Collectively, our results suggest that children with dyslexia differ in their "mouth reliance" versus "mouth insensitivity," a profile that seems to be related to the severity of their reading problems.
Collapse
Affiliation(s)
- Martyna A Galazka
- Gillberg Neuropsychiatry Center, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden.
| | - Nouchine Hadjikhani
- Gillberg Neuropsychiatry Center, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden
- Harvard Medical School/MGH/MIT, Athinoula A. Martinos Center for Biomedical Imaging, Boston, MA, USA
| | - Maria Sundqvist
- Department of Education and Special Education, University of Gothenburg, Gothenburg, Sweden
| | - Jakob Åsberg Johnels
- Gillberg Neuropsychiatry Center, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden.
- Section of Speech and Language Pathology, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden.
| |
Collapse
|
25
|
Onishi A. Brain-computer interface with rapid serial multimodal presentation using artificial facial images and voice. Comput Biol Med 2021; 136:104685. [PMID: 34343888 DOI: 10.1016/j.compbiomed.2021.104685] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Revised: 07/22/2021] [Accepted: 07/22/2021] [Indexed: 10/20/2022]
Abstract
Electroencephalography (EEG) signals elicited by multimodal stimuli can drive brain-computer interfaces (BCIs), and research has demonstrated that visual and auditory stimuli can be employed simultaneously to improve BCI performance. However, no studies have investigated the effect of multimodal stimuli in rapid serial visual presentation (RSVP) BCIs. The present study proposed a rapid serial multimodal presentation (RSMP) BCI that incorporates artificial facial images and artificial voice stimuli. To clarify the effect of audiovisual stimuli on the RSMP BCI, scrambled images and masked sounds were applied instead of visual and auditory stimuli, respectively. The findings indicated that the audiovisual stimuli improved performance of the RSMP BCI, and that P300 at Pz contributed to classification accuracy. Online accuracy of the BCI reached 85.7 ± 11.5%. Taken together, these findings may aid in the development of better gaze-independent BCI systems.
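P300-based BCIs of this kind typically decode target versus non-target epochs with a linear classifier. A minimal, self-contained sketch; the features and data are hypothetical, not the paper's pipeline:

```python
# Sketch: classifying target vs. non-target epochs with LDA, a decoder
# commonly used for P300 BCIs (features and data are hypothetical).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(0, 1, size=(300, 40))   # epochs x features (e.g., Pz time points)
y = rng.integers(0, 2, size=300)       # 1 = target, 0 = non-target

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.1%}")
```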
Collapse
Affiliation(s)
- A Onishi
- Department of Electronic Systems Engineering, National Institute of Technology, Kagawa College, 551, Kohda, Takuma-cho, Mitoyo-shi, Kagawa, 769-1192, Japan; Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba, Japan.
| |
Collapse
|
26
|
Chai Y, Liu TT, Marrett S, Li L, Khojandi A, Handwerker DA, Alink A, Muckli L, Bandettini PA. Topographical and laminar distribution of audiovisual processing within human planum temporale. Prog Neurobiol 2021; 205:102121. [PMID: 34273456 DOI: 10.1016/j.pneurobio.2021.102121] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2021] [Revised: 05/20/2021] [Accepted: 07/13/2021] [Indexed: 10/20/2022]
Abstract
The brain is capable of integrating signals from multiple sensory modalities. Such multisensory integration can occur in areas that are commonly considered unisensory, such as planum temporale (PT) representing the auditory association cortex. However, the roles of different afferents (feedforward vs. feedback) to PT in multisensory processing are not well understood. Our study aims to address this by examining laminar activity patterns in different topographical subfields of human PT under unimodal and multisensory stimuli. To this end, we adopted an advanced mesoscopic (sub-millimeter) fMRI methodology at 7 T by acquiring BOLD (blood-oxygen-level-dependent contrast, which has higher sensitivity) and VAPER (integrated blood volume and perfusion contrast, which has superior laminar specificity) signal concurrently, and performed all analyses in native fMRI space, benefiting from identical acquisition of functional and anatomical images. We found a division of function between visual and auditory processing in PT and distinct feedback mechanisms in different subareas. Specifically, anterior PT was activated more by auditory inputs and received feedback modulation in superficial layers. This feedback depended on task performance and likely arose from top-down influences from higher-order multimodal areas. In contrast, posterior PT was preferentially activated by visual inputs and received visual feedback in both superficial and deep layers, which is likely projected directly from the early visual cortex. Together, these findings provide novel insights into the mechanism of multisensory interaction in human PT at the mesoscopic spatial scale.
Collapse
Affiliation(s)
- Yuhui Chai
- Section on Functional Imaging Methods, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA.
| | - Tina T Liu
- Section on Neurocircuitry, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
| | - Sean Marrett
- Functional MRI Core, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
| | - Linqing Li
- Functional MRI Core, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
| | - Arman Khojandi
- Section on Functional Imaging Methods, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
| | - Daniel A Handwerker
- Section on Functional Imaging Methods, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
| | - Arjen Alink
- University Medical Centre Hamburg-Eppendorf, Department of Systems Neuroscience, Hamburg, Germany
| | - Lars Muckli
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
| | - Peter A Bandettini
- Section on Functional Imaging Methods, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA; Functional MRI Core, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
| |
Collapse
|
27
|
Muller AM, Dalal TC, Stevenson RA. Schizotypal personality traits and multisensory integration: An investigation using the McGurk effect. Acta Psychol (Amst) 2021; 218:103354. [PMID: 34174491 DOI: 10.1016/j.actpsy.2021.103354] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2020] [Revised: 06/04/2021] [Accepted: 06/10/2021] [Indexed: 12/14/2022] Open
Abstract
Multisensory integration, the process by which sensory information from different sensory modalities is bound together, is hypothesized to contribute to perceptual symptomatology in schizophrenia, a population in which multisensory integration differences have consistently been found. Evidence is emerging that these differences extend across the schizophrenia spectrum, including individuals in the general population with higher levels of schizotypal traits. In the current study, we used the McGurk task as a measure of multisensory integration. We measured schizotypal traits using the Schizotypal Personality Questionnaire (SPQ), hypothesizing that higher levels of schizotypal traits, specifically on the Unusual Perceptual Experiences and Odd Speech subscales, would be associated with decreased multisensory integration of speech. Surprisingly, Unusual Perceptual Experiences were not associated with multisensory integration. However, Odd Speech was associated with multisensory integration, and this association extended more broadly across the Disorganized factor of the SPQ, including Odd or Eccentric Behaviour. Individuals with higher Odd or Eccentric Behaviour scores also demonstrated poorer lip-reading abilities, which partially explained performance on the McGurk task. This suggests that aberrant perceptual processes affecting individuals across the schizophrenia spectrum may relate to disorganized symptomatology.
Collapse
Affiliation(s)
- Anne-Marie Muller
- Department of Psychology, University of Western Ontario, London, ON, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, Canada
| | - Tyler C Dalal
- Department of Psychology, University of Western Ontario, London, ON, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, Canada
| | - Ryan A Stevenson
- Department of Psychology, University of Western Ontario, London, ON, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, Canada.
| |
Collapse
|
28
|
Tierney A, Gomez JC, Fedele O, Kirkham NZ. Reading ability in children relates to rhythm perception across modalities. J Exp Child Psychol 2021; 210:105196. [PMID: 34090237 DOI: 10.1016/j.jecp.2021.105196] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2020] [Revised: 03/02/2021] [Accepted: 05/03/2021] [Indexed: 10/21/2022]
Abstract
The onset of reading ability is rife with individual differences, with some children termed "early readers" and some falling behind from the very beginning. Reading skill in children has been linked to the ability to remember nonverbal rhythms, specifically in the auditory modality. It has been hypothesized that the link between rhythm skills and reading reflects a shared reliance on the ability to extract temporal structure from sound. Here we tested this hypothesis by investigating whether the link between rhythm memory and reading depends on the modality in which rhythms are presented. We tested 75 primary-school children aged 7-11 years on a within-participants battery of reading and rhythm tasks. Participants received a reading efficiency task followed by three rhythm tasks (auditory, visual, and audiovisual). Results showed that children who performed poorly on the reading task also performed poorly on the tasks that required them to remember and repeat back nonverbal rhythms. In addition, these children showed a rhythmic deficit not just in the auditory domain but also in the visual domain. However, auditory rhythm memory explained additional variance in reading ability even after controlling for visual memory. These results suggest that reading ability and rhythm memory rely both on shared modality-general cognitive processes and on the ability to perceive the temporal structure of sound.
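The key claim, auditory rhythm explaining variance beyond visual memory, corresponds to a nested-model comparison. A minimal sketch with hypothetical scores, not the study's data:

```python
# Sketch: testing whether auditory rhythm memory explains variance in reading
# beyond visual rhythm memory, via nested OLS models (all data hypothetical).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 75
visual = rng.normal(size=n)
auditory = 0.5 * visual + rng.normal(size=n)
reading = 0.3 * visual + 0.4 * auditory + rng.normal(size=n)

base = sm.OLS(reading, sm.add_constant(visual)).fit()
full = sm.OLS(reading, sm.add_constant(np.column_stack([visual, auditory]))).fit()
print(f"R² gain from auditory rhythm: {full.rsquared - base.rsquared:.3f}")
print(full.compare_f_test(base))   # F-statistic, p-value, df for the increment
```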
Collapse
Affiliation(s)
- Adam Tierney
- Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, UK.
| | - Jessica Cardona Gomez
- Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, UK
| | - Oliver Fedele
- Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, UK
| | - Natasha Z Kirkham
- Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, UK.
| |
Collapse
|
29
|
Manfredi M, Cohn N, Ribeiro B, Sanchez Pinho P, Fernandes Rodrigues Pereira E, Boggio PS. The electrophysiology of audiovisual processing in visual narratives in adolescents with autism spectrum disorder. Brain Cogn 2021; 151:105730. [PMID: 33892434 DOI: 10.1016/j.bandc.2021.105730] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2020] [Revised: 02/15/2021] [Accepted: 04/03/2021] [Indexed: 12/24/2022]
Abstract
We investigated the semantic processing of multimodal audiovisual combinations of visual narratives with auditory descriptive words and auditory sounds in individuals with ASD. To this aim, we recorded ERPs to critical auditory words and sounds associated with events in a visual narrative that were either semantically congruent or incongruent with the climactic visual event. A similar N400 effect was found in both adolescents with ASD and neurotypical adolescents (ages 9-16) when integrating different types of auditory information (i.e., words and sounds) into a visual narrative. This result might suggest that verbal information processing in ASD adolescents could be facilitated by direct association with meaningful visual information. In addition, we observed differences in the scalp distribution of later brain responses between ASD and neurotypical adolescents. This finding might suggest that ASD adolescents differ from neurotypical adolescents in processing the multimodal combination of visual narratives with auditory information at later stages of the process. In conclusion, the semantic processing of verbal information, typically impaired in individuals with ASD, can be facilitated when it is embedded in meaningful visual information.
Collapse
Affiliation(s)
- Mirella Manfredi
- Social and Cognitive Neuroscience Laboratory, Center for Biological Science and Health, Mackenzie Presbyterian University, São Paulo, Brazil; Department of Psychology, University of Zurich, Zurich, Switzerland.
| | - Neil Cohn
- Department of Communication and Cognition, Tilburg University, Tilburg, Netherlands
| | - Beatriz Ribeiro
- Social and Cognitive Neuroscience Laboratory, Center for Biological Science and Health, Mackenzie Presbyterian University, São Paulo, Brazil
| | - Pamella Sanchez Pinho
- Social and Cognitive Neuroscience Laboratory, Center for Biological Science and Health, Mackenzie Presbyterian University, São Paulo, Brazil
| | | | - Paulo Sergio Boggio
- Social and Cognitive Neuroscience Laboratory, Center for Biological Science and Health, Mackenzie Presbyterian University, São Paulo, Brazil
| |
Collapse
|
30
|
Neto LP, Godoy IRB, Yamada AF, Carrete H, Jasinowodolinski D, Skaf A. Evaluation of Audiovisual Reports to Enhance Traditional Emergency Musculoskeletal Radiology Reports. J Digit Imaging 2021; 32:1081-1088. [PMID: 31432299 DOI: 10.1007/s10278-019-00261-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/17/2023] Open
Abstract
Traditional radiology reports are narrative texts that include a description of imaging findings. Recent implementation of advanced reporting software allows for incorporation of annotated key images and hyperlinks directly into text reports, but these tools usually do not substitute for in-person consultations with radiologists, especially in challenging cases. Use of on-demand audiovisual reports created with screen capture software is an emerging technology, providing a more engaged imaging service. Our study evaluates a video reporting tool that utilizes PACS-integrated screen capture software for musculoskeletal imaging studies in the emergency department. Our hypothesis was that referring orthopedic surgeons would find that recorded audiovisual reports add value to conventional reports, may increase engagement with radiology staff, and facilitate understanding of imaging findings in urgent musculoskeletal cases. Seven radiologists prepared a total of 47 audiovisual reports for 9 attending orthopedic surgeons from the emergency department. We administered two surveys to evaluate the experience of the referring physicians using audiovisual reports as complementary material to the conventional text report. Positive responses were statistically significant for most questions, including whether the clinical suspicion was addressed in the video, willingness to use such technology in other cases, whether the audiovisual report made the imaging findings more understandable than the traditional report, and whether the audiovisual report was faster to understand than the traditional text report. Use of audiovisual reports in emergency musculoskeletal cases is a new approach to evaluating potentially challenging cases. These results support the potential of this technology to re-establish the radiologist's role as an essential member of the patient care team and to provide more engaging, precise, and personalized reports. Further studies could streamline these methods in order to minimize work redundancy with traditional text reporting or even evaluate acceptance of using only audiovisual radiology reports. Additionally, widespread adoption would require integration with the entire radiology workflow, including non-urgent cases and other medical specialties.
Collapse
Affiliation(s)
- Luís Pecci Neto
- Department of Radiology, Hospital do Coração (HCor) and Teleimagem, Rua Desembargador Eliseu Guilherme, 53, 7th Floor, São Paulo, SP, 04004-030, Brazil; Department of Diagnostic Imaging, Federal University of São Paulo (UNIFESP), São Paulo, SP, Brazil; ALTA Diagnostic Center (DASA Group), São Paulo, Brazil
| | - Ivan R B Godoy
- Department of Radiology, Hospital do Coração (HCor) and Teleimagem, Rua Desembargador Eliseu Guilherme, 53, 7th Floor, São Paulo, SP, 04004-030, Brazil; Department of Diagnostic Imaging, Federal University of São Paulo (UNIFESP), São Paulo, SP, Brazil.
| | - André Fukunishi Yamada
- Department of Radiology, Hospital do Coração (HCor) and Teleimagem, Rua Desembargador Eliseu Guilherme, 53, 7th Floor, São Paulo, SP, 04004-030, Brazil; Department of Diagnostic Imaging, Federal University of São Paulo (UNIFESP), São Paulo, SP, Brazil; ALTA Diagnostic Center (DASA Group), São Paulo, Brazil
| | - Henrique Carrete
- Department of Diagnostic Imaging, Federal University of São Paulo (UNIFESP), São Paulo, SP, Brazil
| | - Dany Jasinowodolinski
- Department of Radiology, Hospital do Coração (HCor) and Teleimagem, Rua Desembargador Eliseu Guilherme, 53, 7th Floor, São Paulo, SP, 04004-030, Brazil
| | - Abdalla Skaf
- Department of Radiology, Hospital do Coração (HCor) and Teleimagem, Rua Desembargador Eliseu Guilherme, 53, 7th Floor, São Paulo, SP, 04004-030, Brazil; ALTA Diagnostic Center (DASA Group), São Paulo, Brazil
| |
Collapse
|
31
|
Kawakami S, Uono S, Otsuka S, Yoshimura S, Zhao S, Toichi M. Atypical Multisensory Integration and the Temporal Binding Window in Autism Spectrum Disorder. J Autism Dev Disord 2020; 50:3944-56. [PMID: 32211988 DOI: 10.1007/s10803-020-04452-0] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
The present study examined the relationship between multisensory integration and the temporal binding window (TBW) for multisensory processing in adults with autism spectrum disorder (ASD). The ASD group was less likely than the typically developing group to perceive an illusory flash induced by multisensory integration during a sound-induced flash illusion (SIFI) task. Although both groups showed comparable TBWs during the multisensory temporal order judgment task, correlation analyses and Bayes factors provided moderate evidence that reduced SIFI susceptibility was associated with a narrower TBW in the ASD group. These results suggest that individuals with ASD exhibited atypical multisensory integration and that individual differences in the efficacy of this process might be affected by the temporal processing of multisensory information.
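The TBW itself is usually estimated by fitting a window-shaped function to judgments across stimulus onset asynchronies. A minimal sketch using a Gaussian fit on hypothetical "simultaneous" response proportions:

```python
# Sketch: estimating a temporal binding window by fitting a Gaussian to the
# proportion of "simultaneous" responses across SOAs (data are hypothetical).
import numpy as np
from scipy.optimize import curve_fit

soa = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400])  # ms, audio leads < 0
p_simult = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.85, 0.55, 0.20, 0.05])

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

(amp, mu, sigma), _ = curve_fit(gauss, soa, p_simult, p0=[1, 0, 150])
print(f"centre={mu:.0f} ms, TBW (±1 SD) ≈ {2 * sigma:.0f} ms")
```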
Collapse
|
32
|
de Boer MJ, Jürgens T, Cornelissen FW, Başkent D. Degraded visual and auditory input individually impair audiovisual emotion recognition from speech-like stimuli, but no evidence for an exacerbated effect from combined degradation. Vision Res 2020; 180:51-62. [PMID: 33360918 DOI: 10.1016/j.visres.2020.12.002] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Revised: 11/06/2020] [Accepted: 12/06/2020] [Indexed: 10/22/2022]
Abstract
Emotion recognition requires optimal integration of the multisensory signals from vision and hearing. A sensory loss in either or both modalities can lead to changes in integration and related perceptual strategies. To investigate potential acute effects of combined impairments due to sensory information loss alone, we degraded the visual and auditory information in audiovisual video recordings and presented these to a group of healthy young volunteers. These degradations were intended to approximate some aspects of vision and hearing impairment in simulation. Other aspects, related to advanced age, potential health issues, but also long-term adaptation and cognitive compensation strategies, were not included in the simulations. Besides accuracy of emotion recognition, eye movements were recorded to capture perceptual strategies. Our data show that emotion recognition performance decreases when degraded visual and auditory information are presented in isolation, but simultaneously degrading both modalities does not exacerbate these isolated effects. Moreover, degrading the visual information strongly impacts both recognition performance and viewing behavior. In contrast, degrading auditory information alongside normal or degraded video had little (additional) effect on performance or gaze. Nevertheless, our results hold promise for visually impaired individuals, because the addition of any audio to any video greatly facilitates performance, even though adding audio does not completely compensate for the negative effects of video degradation. Additionally, observers modified their viewing behavior for degraded video in order to maximize their performance. Therefore, optimizing the hearing of visually impaired individuals and teaching them such optimized viewing behavior could be worthwhile endeavors for improving emotion recognition.
Collapse
Affiliation(s)
- Minke J de Boer
- Research School of Behavioural and Cognitive Neuroscience (BCN), University of Groningen, Groningen, The Netherlands; Laboratory of Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Department of Otorhinolaryngology - Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands.
| | - Tim Jürgens
- Institute of Acoustics, Technische Hochschule Lübeck, Lübeck, Germany
| | - Frans W Cornelissen
- Research School of Behavioural and Cognitive Neuroscience (BCN), University of Groningen, Groningen, The Netherlands; Laboratory of Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
| | - Deniz Başkent
- Research School of Behavioural and Cognitive Neuroscience (BCN), University of Groningen, Groningen, The Netherlands; Department of Otorhinolaryngology - Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
| |
Collapse
|
33
|
Ainsworth K, Ostrolenk A, Irion C, Bertone A. Reduced multisensory facilitation exists at different periods of development in autism. Cortex 2020; 134:195-206. [PMID: 33291045 DOI: 10.1016/j.cortex.2020.09.031] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2020] [Revised: 07/21/2020] [Accepted: 09/07/2020] [Indexed: 12/16/2022]
Abstract
Atypical sensory processing is now recognised as a key component of an autism diagnosis. The integration of multiple sensory inputs (multisensory integration (MSI)) is thought to be idiosyncratic in autistic individuals and may have cascading effects on the development of higher-level skills such as social communication. Multisensory facilitation was assessed using a target detection paradigm in 45 autistic and 111 neurotypical individuals, matched on age and IQ. Target stimuli were auditory (A; 3500 Hz tone), visual (V; white disk 'flash'), or audiovisual (AV; simultaneous tone and flash), and were presented on a dark background in a randomized order with varying stimulus onset delays. Reaction time (RT) was recorded via button press. In order to assess possible developmental effects, participants were divided into younger (age 14 or younger) and older (age 15 and older) groups. Redundancy gain (RG) was significantly greater in neurotypical compared to autistic individuals. No significant effect of age or interaction was found. Race model analysis was used to compute a bound value representing the facilitation effect provided by MSI. Our results revealed that MSI facilitation occurred (violation of the race model) in neurotypical individuals, with more efficient MSI in older participants. In both the younger and older autistic groups, we found reduced MSI facilitation (no or limited violation of the race model). Autistic participants showed reduced multisensory facilitation compared to neurotypical participants in a simple target detection task devoid of social context. This remained consistent across age. Our results support evidence that autistic individuals may not integrate low-level, non-social information in a typical fashion, adding to the growing discussion around the influential effect that basic perceptual atypicalities may have on the development of higher-level, core aspects of autism.
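The race model bound referenced here (Miller's inequality) compares the audiovisual RT distribution against the sum of the unisensory distributions; AV responses faster than that bound indicate facilitation beyond statistical redundancy. A minimal sketch on hypothetical RT samples:

```python
# Sketch: Miller's race model inequality from hypothetical RT samples.
# The AV CDF exceeding the summed unisensory CDFs (capped at 1) at any
# latency suggests coactivation rather than a simple race.
import numpy as np

rng = np.random.default_rng(4)
rt_a = rng.normal(420, 60, 200)              # auditory-only RTs (ms)
rt_v = rng.normal(440, 60, 200)              # visual-only RTs
rt_av = rng.normal(370, 55, 200)             # audiovisual RTs

def cdf(samples, t):
    return np.mean(samples[:, None] <= t, axis=0)

t = np.linspace(250, 600, 100)
bound = np.minimum(cdf(rt_a, t) + cdf(rt_v, t), 1.0)   # race model bound
violation = np.maximum(cdf(rt_av, t) - bound, 0)
print(f"max violation: {violation.max():.3f}")          # > 0 suggests coactivation
```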
Collapse
Affiliation(s)
- Kirsty Ainsworth
- Perceptual Neuroscience Laboratory for Autism and Development (PNLab), McGill University, Montreal, Canada; Department of Educational and Counselling Psychology, McGill University, Montreal, Canada.
| | - Alexia Ostrolenk
- Perceptual Neuroscience Laboratory for Autism and Development (PNLab), McGill University, Montreal, Canada; University of Montreal Center of Excellence for Pervasive Developmental Disorders (CETEDUM), Montreal, Canada
| | | | - Armando Bertone
- Perceptual Neuroscience Laboratory for Autism and Development (PNLab), McGill University, Montreal, Canada; Department of Educational and Counselling Psychology, McGill University, Montreal, Canada; University of Montreal Center of Excellence for Pervasive Developmental Disorders (CETEDUM), Montreal, Canada
| |
Collapse
|
34
|
Muller AM, Dalal TC, Stevenson RA. Schizotypal traits are not related to multisensory integration or audiovisual speech perception. Conscious Cogn 2020; 86:103030. [PMID: 33120291 DOI: 10.1016/j.concog.2020.103030] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2020] [Revised: 09/02/2020] [Accepted: 10/04/2020] [Indexed: 12/01/2022]
Abstract
Multisensory integration, the binding of sensory information from different sensory modalities, may contribute to perceptual symptomatology in schizophrenia, including hallucinations and aberrant speech perception. Differences in multisensory integration and temporal processing, an important component of multisensory integration, are consistently found in schizophrenia. Evidence is emerging that these differences extend across the schizophrenia spectrum, including individuals in the general population with higher schizotypal traits. In the current study, we investigated the relationship between schizotypal traits and perceptual functioning, using audiovisual speech-in-noise, McGurk, and ternary synchrony judgment tasks. We measured schizotypal traits using the Schizotypal Personality Questionnaire (SPQ), hypothesizing that higher scores on Unusual Perceptual Experiences and Odd Speech subscales would be associated with decreased multisensory integration, increased susceptibility to distracting auditory speech, and less precise temporal processing. Surprisingly, these measures were not associated with the predicted subscales, suggesting that these perceptual differences may not be present across the schizophrenia spectrum.
Collapse
Affiliation(s)
- Anne-Marie Muller
- Department of Psychology, University of Western Ontario, London, ON, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, Canada
| | - Tyler C Dalal
- Department of Psychology, University of Western Ontario, London, ON, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, Canada
| | - Ryan A Stevenson
- Department of Psychology, University of Western Ontario, London, ON, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, Canada.
| |
Collapse
|
35
|
Magnotti JF, Dzeda KB, Wegner-Clemens K, Rennig J, Beauchamp MS. Weak observer-level correlation and strong stimulus-level correlation between the McGurk effect and audiovisual speech-in-noise: A causal inference explanation. Cortex 2020; 133:371-383. [PMID: 33221701 DOI: 10.1016/j.cortex.2020.10.002] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2020] [Revised: 08/05/2020] [Accepted: 10/05/2020] [Indexed: 11/25/2022]
Abstract
The McGurk effect is a widely used measure of multisensory integration during speech perception. Two observations have raised questions about the validity of the effect as a tool for understanding speech perception. First, there is high variability in perception of the McGurk effect across different stimuli and observers. Second, across observers there is low correlation between McGurk susceptibility and recognition of visual speech paired with auditory speech-in-noise, another common measure of multisensory integration. Using the framework of the causal inference of multisensory speech (CIMS) model, we explored the relationship between the McGurk effect, syllable perception, and sentence perception in seven experiments with a total of 296 different participants. Perceptual reports revealed a relationship between the efficacy of different McGurk stimuli created from the same talker and perception of the auditory component of the McGurk stimuli presented in isolation, both with and without added noise. The CIMS model explained this strong stimulus-level correlation using the principles of noisy sensory encoding followed by optimal cue combination within a common representational space across speech types. Because the McGurk effect (but not speech-in-noise) requires the resolution of conflicting cues between modalities, there is an additional source of individual variability that can explain the weak observer-level correlation between McGurk and noisy speech. Power calculations show that detecting this weak correlation requires studies with many more participants than those conducted to date. Perception of the McGurk effect and other types of speech can be explained by a common theoretical framework that includes causal inference, suggesting that the McGurk effect is a valid and useful experimental tool.
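The power point is easy to verify: detecting a weak correlation (say r = .2) at 80% power requires on the order of 200 observers. A minimal check via the standard Fisher z approximation, not the paper's own calculation:

```python
# Sketch: approximate sample size needed to detect a weak correlation
# at 80% power and two-tailed alpha = .05, via the Fisher z approximation.
import numpy as np
from scipy.stats import norm

r, alpha, power = 0.2, 0.05, 0.80
z_r = np.arctanh(r)                          # Fisher z transform of r
n = ((norm.ppf(1 - alpha / 2) + norm.ppf(power)) / z_r) ** 2 + 3
print(f"required N ≈ {int(np.ceil(n))}")     # ≈ 194 participants
```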
Collapse
|
36
|
Zhou HY, Yang HX, Shi LJ, Lui SSY, Cheung EFC, Chan RCK. Correlations Between Audiovisual Temporal Processing and Sensory Responsiveness in Adolescents with Autistic Traits. J Autism Dev Disord 2021; 51:2450-60. [PMID: 32978707 DOI: 10.1007/s10803-020-04724-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
Atypical sensory processing has recently gained much research interest as a key domain of autistic symptoms. Individuals with autism spectrum disorder (ASD) exhibit difficulties in processing the temporal aspects of sensory inputs, and show altered behavioural responses to sensory stimuli (i.e., sensory responsiveness). The present study examined the relation between sensory responsiveness (assessed by the Adult/Adolescent Sensory Profile) and audiovisual temporal integration (measured by unisensory temporal order judgement (TOJ) tasks and audiovisual simultaneity judgement (SJ) tasks) in typically developing adolescents (n = 94). We found that adolescents with higher levels of autistic traits exhibited more difficulties in separating visual stimuli in time (i.e., larger visual TOJ thresholds) and showed a stronger bias to perceive sound-leading audiovisual pairings as simultaneous. Regarding the associations between different measures of sensory function, reduced visual temporal acuity, but not auditory or multisensory temporal processing, was significantly correlated with more atypical patterns of sensory responsiveness. Furthermore, the positive correlation between visual TOJ thresholds and sensory avoidance was only found in adolescents with relatively high levels of autistic traits, but not in those with relatively low levels of autistic traits. These findings suggest that reduced visual temporal acuity may contribute to altered sensory experiences and may be linked to broader behavioural characteristics of ASD.
Collapse
|
37
|
Abstract
Visual information from the face of an interlocutor complements auditory information from their voice, enhancing intelligibility. However, there are large individual differences in the ability to comprehend noisy audiovisual speech. Another axis of individual variability is the extent to which humans fixate the mouth or the eyes of a viewed face. We speculated that across a lifetime of face viewing, individuals who prefer to fixate the mouth of a viewed face might accumulate stronger associations between visual and auditory speech, resulting in improved comprehension of noisy audiovisual speech. To test this idea, we assessed interindividual variability in two tasks. Participants (n = 102) varied greatly in their ability to understand noisy audiovisual sentences (accuracy from 2% to 58%) and in the time they spent fixating the mouth of a talker enunciating clear audiovisual syllables (3% to 98% of total time). These two variables were positively correlated: a 10% increase in time spent fixating the mouth equated to a 5.6% increase in multisensory gain. This finding demonstrates an unexpected link, mediated by histories of visual exposure, between two fundamental human abilities: processing faces and understanding speech.
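The reported relationship is a simple regression slope (roughly 0.56 percentage points of gain per percentage point of mouth fixation). A minimal sketch on hypothetical values, not the study's data:

```python
# Sketch: relating mouth-fixation time to multisensory gain with a simple
# linear regression (all values are hypothetical).
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(5)
mouth_pct = rng.uniform(3, 98, 102)                  # % of time on the mouth
gain = 0.56 * mouth_pct + rng.normal(0, 8, 102)      # % multisensory gain

fit = linregress(mouth_pct, gain)
print(f"slope={fit.slope:.2f} (% gain per % fixation), r={fit.rvalue:.2f}")
```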
Collapse
|
38
|
Abstract
When listeners experience difficulty in understanding a speaker, lexical and audiovisual (or lipreading) information can be a helpful source of guidance. These two types of information embedded in speech can also guide perceptual adjustment, also known as recalibration or perceptual retuning. With retuning or recalibration, listeners can use these contextual cues to temporarily or permanently reconfigure internal representations of phoneme categories to adjust to and understand novel interlocutors more easily. These two types of perceptual learning, previously investigated in large part separately, are highly similar in allowing listeners to use speech-external information to make phoneme boundary adjustments. This study explored whether the two sources may work in conjunction to induce adaptation, thus emulating real life, in which listeners are indeed likely to encounter both types of cue together. Listeners who received combined audiovisual and lexical cues showed perceptual learning effects similar to listeners who only received audiovisual cues, while listeners who received only lexical cues showed weaker effects compared with the two other groups. The combination of cues did not lead to additive retuning or recalibration effects, suggesting that lexical and audiovisual cues operate differently with regard to how listeners use them for reshaping perceptual categories. Reaction times did not significantly differ across the three conditions, so none of the forms of adjustment were either aided or hindered by processing time differences. Mechanisms underlying these forms of perceptual learning may diverge in numerous ways despite similarities in experimental applications.
Collapse
Affiliation(s)
- Shruti Ullas
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD, Maastricht, The Netherlands.
| | - Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD, Maastricht, The Netherlands
| | - Frank Eisner
- Donders Centre for Cognition, Radboud University Nijmegen, 6500 AH, Nijmegen, The Netherlands
| | - Anne Cutler
- MARCS Institute and ARC Centre of Excellence for the Dynamics of Language, Western Sydney University, Penrith, NSW, 2751, Australia
| |
Collapse
|
39
|
Xu W, Kolozsvari OB, Oostenveld R, Hämäläinen JA. Rapid changes in brain activity during learning of grapheme-phoneme associations in adults. Neuroimage 2020; 220:117058. [PMID: 32561476 DOI: 10.1016/j.neuroimage.2020.117058] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2020] [Revised: 06/11/2020] [Accepted: 06/12/2020] [Indexed: 02/06/2023] Open
Abstract
Learning to associate written letters with speech sounds is crucial for the initial phase of acquiring reading skills. However, little is known about the cortical reorganization supporting letter-speech sound learning, particularly the brain dynamics during the learning of grapheme-phoneme associations. In the present study, we trained 30 Finnish participants (mean age: 24.33 years, SD: 3.50 years) to associate novel foreign letters with familiar Finnish speech sounds on two consecutive days (first day ~ 50 min; second day ~ 25 min), while neural activity was measured using magnetoencephalography (MEG). Two sets of audiovisual stimuli were used for training: the grapheme-phoneme association in one set (Learnable) could be learned from the learning cues provided, but not in the other set (Control). Learning progress was tracked on a trial-by-trial basis and used to segment different learning stages for the MEG source analysis. The learning-related changes were examined by comparing the brain responses to Learnable and Control uni/multi-sensory stimuli, as well as the brain responses to learning cues at different learning stages over the two days. We found dynamic changes in brain responses related to multi-sensory processing when grapheme-phoneme associations were learned. Further, changes were observed in the brain responses to the novel letters during the learning process. We also found that some of these learning effects were observed only after memory consolidation the following day. Overall, the learning process modulated the activity in a large network of brain regions, including the superior temporal cortex and the dorsal (parietal) pathway. Most interestingly, middle- and inferior-temporal regions were engaged during multi-sensory memory encoding after the cross-modal relationship was extracted from the learning cues. Our findings highlight the brain dynamics and plasticity related to the learning of letter-speech sound associations and provide a more refined model of grapheme-phoneme learning in reading acquisition.
Collapse
Affiliation(s)
- Weiyong Xu
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland; Jyväskylä Centre for Interdisciplinary Brain Research, University of Jyväskylä, Jyväskylä, Finland.
| | - Orsolya Beatrix Kolozsvari
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland; Jyväskylä Centre for Interdisciplinary Brain Research, University of Jyväskylä, Jyväskylä, Finland.
| | - Robert Oostenveld
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands; NatMEG, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden.
| | - Jarmo Arvid Hämäläinen
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland; Jyväskylä Centre for Interdisciplinary Brain Research, University of Jyväskylä, Jyväskylä, Finland.
| |
Collapse
|
40
|
Proulx MJ, Brown DJ, Lloyd-Esenkaya T, Leveson JB, Todorov OS, Watson SH, de Sousa AA. Visual-to-auditory sensory substitution alters language asymmetry in both sighted novices and experienced visually impaired users. Appl Ergon 2020; 85:103072. [PMID: 32174360 DOI: 10.1016/j.apergo.2020.103072] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/09/2019] [Revised: 12/05/2019] [Accepted: 02/01/2020] [Indexed: 06/10/2023]
Abstract
Visual-to-auditory sensory substitution devices (SSDs) provide improved access to the visual environment for the visually impaired by converting images into auditory information. Research is lacking on the mechanisms involved in processing data that is perceived through one sensory modality but directly associated with a source in a different sensory modality. This is important because SSDs that use auditory displays could involve binaural presentation requiring both ear canals, or monaural presentation requiring only one, but which ear would be ideal? SSDs may be similar to reading, as an image (printed word) is converted into sound (when read aloud). Reading, and language more generally, are typically lateralised to the left cerebral hemisphere. Yet, unlike symbolic written language, SSDs convert images to sound based on visuospatial properties, with the right cerebral hemisphere potentially having a role in processing such visuospatial data. Here we investigated whether there is a hemispheric bias in the processing of visual-to-auditory sensory substitution information and whether that varies as a function of experience and visual ability. We assessed the lateralization of auditory processing with two tests: a standard dichotic listening test and a novel dichotic listening test created using the auditory information produced by an SSD, The vOICe. Participants were tested either in the lab or online with the same stimuli. We did not find a hemispheric bias in the processing of visual-to-auditory information in visually impaired, experienced vOICe users. Further, we did not find any difference between visually impaired, experienced vOICe users and sighted novices in the hemispheric lateralization of visual-to-auditory information processing. Although standard dichotic listening is lateralised to the left hemisphere, the auditory processing of images in SSDs is bilateral, possibly due to the increased influence of right hemisphere processing. Auditory SSDs might therefore be equally effective with presentation to either ear if a monaural, rather than binaural, presentation were necessary.
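Lateralization in dichotic listening is conventionally summarized with a per-participant laterality index. A minimal sketch with hypothetical correct-report counts, not the study's scoring code:

```python
# Sketch: a conventional laterality index from dichotic listening accuracy,
# computed per participant (counts here are hypothetical).
import numpy as np

right_ear = np.array([18, 22, 15, 20])   # correct reports, right ear
left_ear = np.array([12, 10, 14, 11])    # correct reports, left ear

li = (right_ear - left_ear) / (right_ear + left_ear)
print(li)   # > 0 indicates a right-ear (left-hemisphere) advantage
```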
Collapse
Affiliation(s)
- Michael J Proulx
- Department of Psychology, University of Bath, Bath, BA2 7AY, UK; Crossmodal Cognition Laboratory, REVEAL Research Centre, University of Bath, Bath, BA2 7AY, UK
| | - David J Brown
- Crossmodal Cognition Laboratory, REVEAL Research Centre, University of Bath, Bath, BA2 7AY, UK; Centre for Health and Cognition, Bath Spa University, Bath, BA2 9BN, UK
| | - Tayfun Lloyd-Esenkaya
- Crossmodal Cognition Laboratory, REVEAL Research Centre, University of Bath, Bath, BA2 7AY, UK; Department of Computer Science, REVEAL Research Centre, University of Bath, Bath, BA2 7AY, UK
| | - Jack Barnett Leveson
- Department of Psychology, University of Bath, Bath, BA2 7AY, UK; Crossmodal Cognition Laboratory, REVEAL Research Centre, University of Bath, Bath, BA2 7AY, UK
| | - Orlin S Todorov
- School of Biological Sciences, The University of Queensland, St. Lucia, QLD, 4072, Australia
| | - Samuel H Watson
- Centre for Health and Cognition, Bath Spa University, Bath, BA2 9BN, UK
| | - Alexandra A de Sousa
- Crossmodal Cognition Laboratory, REVEAL Research Centre, University of Bath, Bath, BA2 7AY, UK; Centre for Health and Cognition, Bath Spa University, Bath, BA2 9BN, UK.
| |
Collapse
|
41
|
Broadbent H, Osborne T, Mareschal D, Kirkham N. Are two cues always better than one? The role of multiple intra-sensory cues compared to multi-cross-sensory cues in children's incidental category learning. Cognition 2020; 199:104202. [PMID: 32087397 DOI: 10.1016/j.cognition.2020.104202] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2019] [Revised: 01/09/2020] [Accepted: 01/22/2020] [Indexed: 10/25/2022]
Abstract
Simultaneous presentation of multisensory cues has been found to facilitate children's learning to a greater extent than unisensory cues (e.g., Broadbent, White, Mareschal, & Kirkham, 2017). Current research into children's multisensory learning, however, does not address whether these findings reflect multiple cross-sensory cues that enhance stimulus perception or simply multiple cues, regardless of modality, that are informative about category membership. The current study examined the role of multiple cross-sensory cues (e.g., audio-visual) compared to multiple intra-sensory cues (e.g., two visual cues) on children's incidental category learning. On a computerized incidental category learning task, children aged six to ten years (N = 454) were allocated to one of five conditions: visual-only (V: unisensory), auditory-only (A: unisensory), audio-visual (AV: multisensory), visual-visual (VV: multi-cue), or auditory-auditory (AA: multi-cue). In children over eight years of age, the availability of two informative cues, regardless of whether they had been presented across two different modalities or within the same modality, was found to be more beneficial to incidental learning than unisensory cues. In six-year-olds, however, the presence of multiple auditory cues (AA) did not facilitate learning to the same extent as multiple visual cues (VV) or cues presented across two different modalities (AV). The findings suggest that multiple sensory cues presented across or within modalities may have differential effects on children's incidental learning across middle childhood, depending on the sensory domain in which they are presented. Implications for the use of multi-cross-sensory and multiple-intra-sensory cues for children's learning across this age range are discussed.
Collapse
Affiliation(s)
- H Broadbent
- Royal Holloway, University of London, United Kingdom of Great Britain and Northern Ireland; Centre for Brain and Cognitive Development, Birkbeck University of London, United Kingdom of Great Britain and Northern Ireland.
| | - T Osborne
- Centre for Brain and Cognitive Development, Birkbeck University of London, United Kingdom of Great Britain and Northern Ireland
| | - D Mareschal
- Centre for Brain and Cognitive Development, Birkbeck University of London, United Kingdom of Great Britain and Northern Ireland
| | - N Kirkham
- Centre for Brain and Cognitive Development, Birkbeck University of London, United Kingdom of Great Britain and Northern Ireland
| |
Collapse
|
42
|
La Rocca D, Ciuciu P, Engemann DA, van Wassenhove V. Emergence of β and γ networks following multisensory training. Neuroimage 2020; 206:116313. [PMID: 31676416 PMCID: PMC7355235 DOI: 10.1016/j.neuroimage.2019.116313] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2019] [Revised: 10/22/2019] [Accepted: 10/23/2019] [Indexed: 12/31/2022] Open
Abstract
Our perceptual reality relies on inferences about the causal structure of the world given by multiple sensory inputs. In ecological settings, multisensory events that cohere in time and space benefit inferential processes: hearing and seeing a speaker enhances speech comprehension, and the acoustic changes of flapping wings naturally pace the motion of a flock of birds. Here, we asked how a few minutes of (multi)sensory training could shape cortical interactions in a subsequent unisensory perceptual task. For this, we investigated oscillatory activity and functional connectivity as a function of individuals' sensory history during training. Human participants performed a visual motion coherence discrimination task while being recorded with magnetoencephalography. Three groups of participants performed the same task with visual stimuli only, while listening to acoustic textures temporally comodulated with the strength of visual motion coherence, or with auditory noise uncorrelated with visual motion. The functional connectivity patterns before and after training were contrasted with resting-state networks to assess the variability of common task-relevant networks and the emergence of new functional interactions as a function of sensory history. One major finding was the emergence of large-scale synchronization in the high γ (gamma: 60-120 Hz) and β (beta: 15-30 Hz) bands for individuals who underwent comodulated multisensory training. The post-training network involved prefrontal, parietal, and visual cortices. Our results suggest that the integration of evidence and decision-making strategies becomes more efficient following congruent multisensory training through plasticity in network routing and oscillatory regimes.
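As a rough illustration of the kind of band-limited synchronization measure involved here, the sketch below computes a phase-locking value (PLV) between two simulated sensor time series in the β and γ bands. The data, band edges, and parameters are illustrative assumptions, not the authors' pipeline (MEG connectivity analyses typically rely on dedicated toolboxes such as MNE-Python).

```python
# Hypothetical sketch: band-limited phase synchronization (PLV) between two
# sensor time series, in the spirit of the beta/gamma network analysis above.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_plv(x, y, fs, low, high, order=4):
    """Phase-locking value between x and y in the [low, high] Hz band."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

fs = 1000  # sampling rate in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
# Two simulated channels sharing a 20 Hz (beta-band) rhythm plus noise.
x = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 20 * t + 0.3) + 0.5 * rng.standard_normal(t.size)

print(f"beta (15-30 Hz) PLV:   {band_plv(x, y, fs, 15, 30):.3f}")   # high: shared rhythm
print(f"gamma (60-120 Hz) PLV: {band_plv(x, y, fs, 60, 120):.3f}")  # low: noise only
```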
Collapse
Affiliation(s)
- Daria La Rocca
- CEA/DRF/Joliot, Université Paris-Saclay, 91191, Gif-sur-Yvette, France; Université Paris-Saclay, Inria, CEA, Palaiseau, 91120, France
| | - Philippe Ciuciu
- CEA/DRF/Joliot, Université Paris-Saclay, 91191, Gif-sur-Yvette, France; Université Paris-Saclay, Inria, CEA, Palaiseau, 91120, France
| | - Denis-Alexander Engemann
- CEA/DRF/Joliot, Université Paris-Saclay, 91191, Gif-sur-Yvette, France; Université Paris-Saclay, Inria, CEA, Palaiseau, 91120, France
| | - Virginie van Wassenhove
- CEA/DRF/Joliot, Université Paris-Saclay, 91191, Gif-sur-Yvette, France; Cognitive Neuroimaging Unit, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin Center, 91191, Gif-sur-Yvette, France.
| |
Collapse
|
43
|
Koticha P, Katge F, Shetty S, Patil DP. Effectiveness of Virtual Reality Eyeglasses as a Distraction Aid to Reduce Anxiety among 6-10-year-old Children Undergoing Dental Extraction Procedure. Int J Clin Pediatr Dent 2019; 12:297-302. [PMID: 31866714 PMCID: PMC6898869 DOI: 10.5005/jp-journals-10005-1640] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
Introduction Distraction is a nonpharmacologic pain management technique commonly used by pedodontists to manage pain and anxiety. Newer techniques use audiovisual stimulation, distracting the patient with three-dimensional videos; these are referred to as virtual reality audiovisual systems. Objective The aim of this study was to evaluate the effectiveness of virtual reality eyeglasses as a distraction aid to reduce the anxiety of children undergoing a dental extraction procedure. Materials and methods Sixty children aged 6–10 years (N = 60) with bilateral carious primary molars indicated for extraction were randomly selected and divided into two groups of 30 each: group I (VR group; n = 30) and group II (control group; n = 30). Anxiety was measured using Venham's picture test, pulse rate, and oxygen saturation. Anxiety levels in group I and group II were compared using a paired "t" test. Results The mean pulse rates after the extraction procedure were 107.833 ± 1.356 in group I and 108.4 ± 0.927 in group II. The intergroup difference in pulse rate was statistically significant (p = 0.03). Conclusion Virtual reality used as a distraction technique improved the physiologic parameters of children aged 6–10 years but did not reduce patients' self-reported anxiety as measured by Venham's picture test.
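For illustration, the sketch below runs such a group comparison on simulated pulse-rate data matching the reported means and dispersions (treated here as standard deviations, which is an assumption). The abstract reports a paired "t" test; with two independent groups of children an independent-samples t test is the conventional choice, so that is what the sketch shows.

```python
# Illustrative sketch of the group comparison on post-extraction pulse rate.
# Values are simulated around the reported group statistics; this is not the
# study's data or its exact analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
vr_group      = rng.normal(107.833, 1.356, size=30)  # group I (VR), simulated
control_group = rng.normal(108.400, 0.927, size=30)  # group II (control), simulated

t, p = stats.ttest_ind(vr_group, control_group)  # independent-samples t test
print(f"t = {t:.2f}, p = {p:.3f}")
```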
Collapse
Affiliation(s)
- Paloni Koticha
- Department of Pedodontics and Preventive Dentistry, Terna Dental College, Navi Mumbai, Maharashtra, India
| | - Farhin Katge
- Department of Pedodontics and Preventive Dentistry, Terna Dental College, Navi Mumbai, Maharashtra, India
| | - Shilpa Shetty
- Department of Pedodontics and Preventive Dentistry, Terna Dental College, Navi Mumbai, Maharashtra, India
| | - Devendra P Patil
- Department of Pedodontics and Preventive Dentistry, Terna Dental College, Navi Mumbai, Maharashtra, India
| |
Collapse
|
44
|
Jessen S, Fiedler L, Münte TF, Obleser J. Quantifying the individual auditory and visual brain response in 7-month-old infants watching a brief cartoon movie. Neuroimage 2019; 202:116060. [PMID: 31362048 DOI: 10.1016/j.neuroimage.2019.116060] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2019] [Revised: 07/05/2019] [Accepted: 07/26/2019] [Indexed: 11/16/2022] Open
Abstract
Electroencephalography (EEG) continues to be the most popular method for investigating cognitive brain mechanisms in young children and infants. Most infant studies rely on the well-established and easy-to-use event-related brain potential (ERP). As a severe disadvantage, ERP computation requires a large number of repetitions of items from the same stimulus category, compromising both ERPs' reliability and their ecological validity in infant research. We here explore a way to investigate infants' continuous EEG responses to an ongoing, engaging signal (i.e., "neural tracking") using multivariate temporal response functions (mTRFs), an approach increasingly popular in adult EEG research. N = 52 infants watched a 5-min episode of an age-appropriate cartoon while the EEG signal was recorded. We estimated and validated forward encoding models of auditory-envelope and visual-motion features. We compared individual and group-based ("generic") models of the infant brain response to comparison data from N = 28 adults. The generic model yielded clearly defined response functions for both the auditory and the motion regressors. Importantly, this response profile was also present at the individual level, albeit with lower estimation precision yet above-chance predictive accuracy for the modelled individual brain responses. In sum, we demonstrate that mTRFs are a feasible way of analyzing continuous EEG responses in infants. We observed robust response estimates both across and within participants from only 5 min of recorded EEG signal. Our results open ways for incorporating more engaging and more ecologically valid stimulus materials when probing cognitive, perceptual, and affective processes in infants and young children.
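The sketch below illustrates the core of a forward-encoding TRF in a few lines: ridge regression from time-lagged copies of a stimulus feature (e.g., an auditory envelope) to one EEG channel. Everything here is simulated and simplified (no cross-validation, accuracy evaluated on the training data for brevity); the authors' actual analysis will differ and would normally use dedicated tooling.

```python
# Minimal sketch of a forward (encoding) temporal response function:
# ridge regression from lagged stimulus features to a single EEG channel.
# All data are simulated.
import numpy as np

def lagged_design(stim, n_lags):
    """Stack time-lagged copies of a 1-D stimulus feature into a design matrix."""
    X = np.zeros((stim.size, n_lags))
    for k in range(n_lags):
        X[k:, k] = stim[:stim.size - k]
    return X

rng = np.random.default_rng(0)
fs, n_lags = 64, 32                      # 64 Hz, lags 0 to ~500 ms (illustrative)
stim = rng.standard_normal(fs * 60)      # simulated auditory envelope, 60 s
true_trf = np.exp(-np.arange(n_lags) / 8.0) * np.sin(np.arange(n_lags) / 3.0)
eeg = lagged_design(stim, n_lags) @ true_trf + rng.standard_normal(stim.size)

X = lagged_design(stim, n_lags)
lam = 1.0                                # ridge parameter (would be cross-validated)
trf = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)

pred = X @ trf
print(f"predictive accuracy r = {np.corrcoef(pred, eeg)[0, 1]:.2f}")
```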
Collapse
Affiliation(s)
- Sarah Jessen
- Department of Neurology, University of Lübeck, Lübeck, Germany.
| | - Lorenz Fiedler
- Department of Psychology, University of Lübeck, Lübeck, Germany
| | - Thomas F Münte
- Department of Neurology, University of Lübeck, Lübeck, Germany
| | - Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck, Germany
| |
Collapse
|
45
|
Gabr RE, Zunta-Soares GB, Soares JC, Narayana PA. MRI acoustic noise-modulated computer animations for patient distraction and entertainment with application in pediatric psychiatric patients. Magn Reson Imaging 2019; 61:16-9. [PMID: 31078614 DOI: 10.1016/j.mri.2019.05.014] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2019] [Revised: 05/06/2019] [Accepted: 05/06/2019] [Indexed: 11/20/2022]
Abstract
PURPOSE To reduce patient anxiety caused by MRI scanner acoustic noise. MATERIAL AND METHODS We developed a simple, low-cost system for patient distraction using visual computer animations synchronized to the MRI scanner's acoustic noise during the exam. The system was implemented on a 3T MRI scanner and tested in 28 pediatric patients with bipolar disorder. The patients were randomized to receive noise-synchronized abstract animations in addition to music (n = 13, F/M = 6/7, age = 10.9 ± 2.5 years) or, as a control, music only (n = 15, F/M = 7/8, age = 11.6 ± 2.3 years). After completion of the scans, all subjects answered a questionnaire about their scan experience and the perceived scan duration. RESULTS The scan duration with multisensory input (animations and music) was perceived to be ~15% shorter than in the control group (43 min vs. 50 min, P < 0.05). However, the overall scan experience was scored less favorably (3.9 vs. 4.6 in the control group, P < 0.04). CONCLUSIONS This simple system provided patient distraction and entertainment, leading to shorter perceived scan times, but the abstract-animation visualization was not favored by this patient cohort.
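A conceptual sketch of the synchronization idea follows: extract a short-time acoustic envelope of the scanner noise and map it onto a visual animation parameter such as the scale of an abstract shape. The audio, frame rate, and mapping are all illustrative assumptions, not the authors' implementation.

```python
# Conceptual sketch: map the short-time acoustic envelope of scanner noise
# onto a per-frame animation parameter. Audio input and rendering are simulated.
import numpy as np

def rms_envelope(audio, frame_len):
    """Short-time RMS envelope over non-overlapping frames."""
    n = audio.size // frame_len
    frames = audio[:n * frame_len].reshape(n, frame_len)
    return np.sqrt((frames ** 2).mean(axis=1))

fs = 8000  # audio sampling rate in Hz (illustrative)
rng = np.random.default_rng(0)
# Simulated scanner noise with a slow 3 Hz loudness modulation.
noise = rng.standard_normal(fs * 2) * (1 + np.sin(2 * np.pi * 3 * np.arange(fs * 2) / fs))

env = rms_envelope(noise, frame_len=fs // 30)        # ~30 animation frames per second
scale = 0.5 + 0.5 * (env - env.min()) / np.ptp(env)  # normalize to [0.5, 1.0]
print(scale[:5])  # per-frame scale factors a renderer would consume
```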
Collapse
|
46
|
Abstract
Although music and dance are often experienced simultaneously, it is unclear what modulates their perceptual integration. This study investigated how two factors related to music-dance correspondences influenced audiovisual binding of their rhythms: the metrical match between the music and dance, and the kinematic familiarity of the dance movement. Participants watched a point-light figure dancing synchronously to a triple-meter rhythm that they heard in parallel, whereby the dance communicated a triple (congruent) or a duple (incongruent) visual meter. The movement was either the participant's own or that of another participant. Participants attended to both streams while detecting a temporal perturbation in the auditory beat. The results showed lower sensitivity to the auditory deviant when the visual dance was metrically congruent to the auditory rhythm and when the movement was the participant's own. This indicated stronger audiovisual binding and a more coherent bimodal rhythm in these conditions, thus making a slight auditory deviant less noticeable. Moreover, binding in the metrically incongruent condition involving self-generated visual stimuli was correlated with self-recognition of the movement, suggesting that action simulation mediates the perceived coherence between one's own movement and a mismatching auditory rhythm. Overall, the mechanisms of rhythm perception and action simulation could inform the perceived compatibility between music and dance, thus modulating the temporal integration of these audiovisual stimuli.
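Sensitivity to an auditory deviant of this kind is conventionally summarized as d′ from signal detection theory. The sketch below computes d′ from hypothetical hit and false-alarm counts, with a standard log-linear correction; the counts and the condition comparison are purely illustrative.

```python
# Sketch: summarizing deviant-detection sensitivity as d-prime.
# All counts below are hypothetical.
from scipy.stats import norm

def d_prime(hits, misses, fas, crs):
    """d' with a log-linear correction to avoid infinite z-scores at 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (fas + 0.5) / (fas + crs + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts: metrically congruent vs. incongruent dance conditions.
print(f"congruent:   d' = {d_prime(hits=18, misses=12, fas=4, crs=26):.2f}")
print(f"incongruent: d' = {d_prime(hits=24, misses=6, fas=4, crs=26):.2f}")
```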
Collapse
|
47
|
Feldman JI, Kuang W, Conrad JG, Tu A, Santapuram P, Simon DM, Foss-Feig JH, Kwakye LD, Stevenson RA, Wallace MT, Woynaroski TG. Brief Report: Differences in Multisensory Integration Covary with Sensory Responsiveness in Children with and without Autism Spectrum Disorder. J Autism Dev Disord 2019; 49:397-403. [PMID: 30043353 DOI: 10.1007/s10803-018-3667-x] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
Research shows that children with autism spectrum disorder (ASD) differ from typical peers in their behavioral patterns of responding to sensory stimuli (i.e., sensory responsiveness) and in various other aspects of sensory functioning. This study explored relations between measures of sensory responsiveness and multisensory speech perception and integration in children with and without ASD. Participants were 8- to 17-year-old children: 18 with ASD and 18 matched typically developing controls. Participants completed a psychophysical speech perception task, and parents reported on children's sensory responsiveness. Psychophysical measures (e.g., audiovisual accuracy, temporal binding window) were associated with patterns of sensory responsiveness (e.g., hyporesponsiveness, sensory seeking). Results indicate that differences in multisensory speech perception and integration covary with atypical patterns of sensory responsiveness.
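One of the psychophysical measures named here, the temporal binding window (TBW), is commonly estimated by fitting a Gaussian to the proportion of "synchronous" judgments across audiovisual asynchronies. The sketch below shows that generic procedure on simulated data; it is not necessarily the fitting approach used in this particular study.

```python
# Sketch of one common TBW estimate: fit a Gaussian to the proportion of
# "synchronous" judgments across stimulus onset asynchronies (SOAs).
# Data are simulated.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((soa - mu) / sigma) ** 2)

soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400])  # ms, visual-lead negative
p_sync = np.array([0.10, 0.25, 0.55, 0.85, 0.95, 0.90, 0.70, 0.40, 0.15])

(amp, mu, sigma), _ = curve_fit(gaussian, soas, p_sync, p0=[1.0, 0.0, 150.0])
print(f"PSS = {mu:.0f} ms, TBW width (FWHM) = {2.355 * sigma:.0f} ms")
```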
Collapse
Affiliation(s)
- Jacob I Feldman
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
| | - Wayne Kuang
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
| | - Julie G Conrad
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
| | - Alexander Tu
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
| | - Pooja Santapuram
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
| | - David M Simon
- Neuroscience Graduate Program, Vanderbilt University, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
| | - Jennifer H Foss-Feig
- Department of Psychiatry, Seaver Autism Center for Research and Treatment at the Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Leslie D Kwakye
- Department of Neuroscience, Oberlin College, Oberlin, OH, USA
| | - Ryan A Stevenson
- Department of Psychology, The University of Western Ontario, London, ON, Canada; Brain and Mind Institute, The University of Western Ontario, London, ON, Canada; Department of Psychiatry, The Schulich School of Medicine and Dentistry, The University of Western Ontario, London, ON, Canada; Program in Neuroscience, The Schulich School of Medicine and Dentistry, The University of Western Ontario, London, ON, Canada; York University Centre for Vision Research, York University, Toronto, ON, Canada
| | - Mark T Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, MCE 8310 South Tower, 1215 21st Avenue South, Nashville, TN, 37232, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Pharmacology, Vanderbilt University, Nashville, TN, USA
| | - Tiffany G Woynaroski
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, MCE 8310 South Tower, 1215 21st Avenue South, Nashville, TN, 37232, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA.
| |
Collapse
|
48
|
Lunn J, Sjoblom A, Ward J, Soto-Faraco S, Forster S. Multisensory enhancement of attention depends on whether you are already paying attention. Cognition 2019; 187:38-49. [PMID: 30825813 DOI: 10.1016/j.cognition.2019.02.008] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2018] [Revised: 02/12/2019] [Accepted: 02/13/2019] [Indexed: 10/27/2022]
Abstract
Multisensory stimuli are argued to capture attention more effectively than unisensory stimuli due to their ability to elicit a super-additive neuronal response. However, behavioural evidence for enhanced multisensory attentional capture is mixed. Furthermore, the notion of multisensory enhancement of attention conflicts with findings suggesting that multisensory integration may itself depend on top-down attention. The present research resolves this discrepancy by examining how both endogenous attentional settings and the availability of attentional capacity modulate capture by multisensory stimuli. Across a series of four studies, two measures of attentional capture were used which vary in their reliance on endogenous attention: facilitation and distraction. Perceptual load was additionally manipulated to determine whether multisensory stimuli can still capture attention when attention is occupied by a demanding primary task. Multisensory stimuli presented as search targets were consistently detected faster than unisensory stimuli regardless of perceptual load, although they were nevertheless subject to load modulation. In contrast, task-irrelevant multisensory stimuli did not cause greater distraction than unisensory stimuli, suggesting that the enhanced attentional status of multisensory stimuli may be mediated by the availability of endogenous attention. Implications for multisensory alerts in practical settings such as driving and aviation are discussed: these may be advantageous during demanding tasks but less suitable for signaling unexpected events.
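Redundant-target speedups like the faster multisensory detection reported here are often checked against Miller's race-model inequality, F_AV(t) ≤ F_A(t) + F_V(t), to distinguish genuine integration from mere statistical facilitation. The sketch below applies that standard check to simulated reaction times; the paper itself does not report this particular analysis.

```python
# Sketch of the race-model inequality check on simulated reaction times (ms).
# A violation (F_AV exceeding the bound) implies multisensory integration
# rather than statistical facilitation.
import numpy as np

rng = np.random.default_rng(0)
rt_a = rng.normal(520, 60, 200)   # auditory-only RTs, simulated
rt_v = rng.normal(500, 60, 200)   # visual-only RTs, simulated
rt_av = rng.normal(440, 50, 200)  # audiovisual RTs, simulated

def ecdf(rts, t):
    """Empirical cumulative probability of responding by time t."""
    return np.mean(rts <= t)

for t in (400, 440, 480):
    bound = min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
    violated = ecdf(rt_av, t) > bound
    print(f"t={t} ms: F_AV={ecdf(rt_av, t):.2f}, bound={bound:.2f}, violation={violated}")
```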
Collapse
|
49
|
Barutchu A, Sahu A, Humphreys GW, Spence C. Multisensory processing in event-based prospective memory. Acta Psychol (Amst) 2019; 192:23-30. [PMID: 30391627 DOI: 10.1016/j.actpsy.2018.10.015] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2018] [Revised: 08/29/2018] [Accepted: 10/23/2018] [Indexed: 11/28/2022] Open
Abstract
Failures in prospective memory (PM) - that is, the failure to remember intended future actions - can have adverse consequences. It is therefore important to study those processes that may help to minimize such cognitive failures. Although multisensory integration has been shown to enhance a wide variety of behaviors, including perception, learning, and memory, its effect on prospective memory in particular is largely unknown. In the present study, we investigated the effects of multisensory processing on two simultaneously performed memory tasks: an ongoing 2- or 3-back working memory (WM) task (20% target ratio), and a PM task in which participants had to respond to a rare predefined letter (8% target ratio). For PM trials, multisensory enhancement was observed for congruent multisensory signals; however, this effect did not generalize to the ongoing WM task. Participants were less likely to make errors on PM than on WM trials, suggesting that they may have biased their attention toward the PM task. Multisensory advantages on memory tasks such as PM and WM may depend on how attentional resources are allocated across dual tasks.
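To make the dual-task structure concrete, the sketch below generates a letter stream approximating the described design: an n-back WM task with roughly 20% targets plus a rare prospective-memory letter on roughly 8% of trials. The letters, ratios, and generator are illustrative assumptions, not the study's materials.

```python
# Illustrative generator for a dual-task stream: a 2-back WM task (~20%
# targets) with an embedded prospective-memory letter (~8% of trials).
import random

def make_stream(n_trials=100, n_back=2, pm_letter="Q", seed=0):
    rng = random.Random(seed)
    letters = [c for c in "ABCDEFGH" if c != pm_letter]
    stream = []
    for i in range(n_trials):
        r = rng.random()
        if r < 0.08:                                            # PM trial
            stream.append(pm_letter)
        elif r < 0.28 and i >= n_back and stream[i - n_back] != pm_letter:
            stream.append(stream[i - n_back])                   # n-back (WM) target
        else:
            stream.append(rng.choice(letters))                  # filler (may repeat by chance)
    return stream

s = make_stream()
wm_targets = sum(1 for i in range(2, len(s)) if s[i] == s[i - 2] != "Q")
print("PM trials:", s.count("Q"), "| WM targets:", wm_targets)
```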
Collapse
Affiliation(s)
- Ayla Barutchu
- Department of Experimental Psychology, University of Oxford, United Kingdom.
| | - Aparna Sahu
- Department of Experimental Psychology, University of Oxford, United Kingdom
| | - Glyn W Humphreys
- Department of Experimental Psychology, University of Oxford, United Kingdom
| | - Charles Spence
- Department of Experimental Psychology, University of Oxford, United Kingdom
| |
Collapse
|
50
|
Ju A, Orchard-Mills E, van der Burg E, Alais D. Rapid Audiovisual Temporal Recalibration Generalises Across Spatial Location. Multisens Res 2019; 32:215-234. [PMID: 31071679 DOI: 10.1163/22134808-20191176] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2015] [Accepted: 03/01/2019] [Indexed: 11/19/2022]
Abstract
Recent exposure to asynchronous multisensory signals has been shown to shift perceived timing between the sensory modalities, a phenomenon known as 'temporal recalibration'. Van der Burg et al. (2013, J Neurosci, 33, pp. 14633-14637) reported results showing that recalibration to asynchronous audiovisual events can happen extremely rapidly. In an extended series of variously asynchronous trials, simultaneity judgements were analysed based on the modality order in the preceding trial and showed that shifts in the point of subjective synchrony occurred almost instantaneously, shifting from one trial to the next. Here we replicate the finding that shifts in perceived timing occur following exposure to a single asynchronous audiovisual stimulus and, by manipulating the spatial location of the audiovisual events, we demonstrate that recalibration occurs even when the adapting stimulus is presented in a different location. Timing shifts were also observed when the adapting audiovisual pair was defined only by temporal proximity, with the auditory component presented over headphones rather than collocated with the visual stimulus. Combined with previous findings showing that timing shifts are independent of stimulus features such as colour and pitch, our finding that recalibration is not spatially specific provides strong evidence for a rapid recalibration process that is solely dependent on recent temporal information, regardless of feature or location. These rapid and automatic shifts in perceived synchrony may allow our sensory systems to flexibly adjust to variation in the timing of neural signals arising from delayed environmental transmission and differing neural latencies for processing vision and audition.
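The trial-by-trial logic can be sketched as follows: estimate the point of subjective simultaneity (PSS) separately for trials preceded by an auditory-leading versus a visually-leading trial, here by fitting a Gaussian to simulated simultaneity-judgment rates with a built-in ±20 ms shift. The simulation and fitting choices are assumptions for illustration only, not the authors' exact analysis.

```python
# Sketch of the rapid-recalibration analysis: PSS estimated separately by
# previous-trial modality order. Responses are simulated with a ~20 ms shift.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((soa - mu) / sigma) ** 2)

soas = np.linspace(-300, 300, 13)  # ms, auditory-lead negative
rng = np.random.default_rng(0)

def simulate_p_sync(pss):
    """Simulated proportion of 'synchronous' responses, peaked at the PSS."""
    p = gaussian(soas, 0.95, pss, 120.0)
    return np.clip(p + rng.normal(0, 0.03, soas.size), 0, 1)

for prev, pss_true in [("previous trial A-lead", -20.0), ("previous trial V-lead", +20.0)]:
    p_sync = simulate_p_sync(pss_true)
    (_, mu, _), _ = curve_fit(gaussian, soas, p_sync, p0=[1.0, 0.0, 100.0])
    print(f"{prev}: PSS = {mu:+.0f} ms")
```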
Collapse
Affiliation(s)
- Angela Ju
- Sydney School of Public Health, University of Sydney, New South Wales, Australia
| | | | - Erik van der Burg
- School of Psychology, University of Sydney, New South Wales, Australia; Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, The Netherlands; Institute Brain and Behaviour Amsterdam, The Netherlands
| | - David Alais
- School of Psychology, University of Sydney, New South Wales, Australia
| |
Collapse
|