1
Li X, Cai S, Chen Y, Tian X, Wang A. Enhancement of visual dominance effects at the response level in children with attention-deficit/hyperactivity disorder. J Exp Child Psychol 2024; 242:105897. [PMID: 38461557] [DOI: 10.1016/j.jecp.2024.105897]
Abstract
Previous studies have widely demonstrated that individuals with attention-deficit/hyperactivity disorder (ADHD) exhibit deficits in conflict control tasks. However, there is limited evidence regarding the performance of children with ADHD in cross-modal conflict processing tasks. The current study aimed to investigate whether children with ADHD have poor conflict control, which affects sensory dominance effects at different levels of information processing under the influence of visual similarity. A total of 82 children aged 7 to 14 years, comprising 41 children with ADHD and 41 age- and sex-matched typically developing (TD) children, were recruited. We used the 2:1 mapping paradigm to separate levels of conflict, and the congruency of the audiovisual stimuli was divided into three conditions. In congruent (C) trials, the target stimulus and the distractor stimulus were identical, and the bimodal stimuli corresponded to the same response keys. In preresponse incongruent (PRIC) trials, the distractor stimulus differed from the target stimulus and did not correspond to any response keys. In response incongruent (RIC) trials, the distractor stimulus differed from the target stimulus, and the bimodal stimuli corresponded to different response keys. We thus explicitly differentiated cross-modal conflict into a preresponse level (PRIC > C), corresponding to the encoding process, and a response level (RIC > PRIC), corresponding to the response selection process. Our results suggested that auditory distractors caused more interference during visual processing than visual distractors caused during auditory processing (i.e., typical auditory dominance) at the preresponse level regardless of group. However, at the response level, visual dominance effects were observed in the ADHD group but not in the TD group.
A possible explanation is that increased interference due to visual similarity made it more difficult for children with ADHD to control conflict when simultaneously confronted with incongruent visual and auditory inputs. The current study highlights how children with ADHD process cross-modal conflicts at multiple levels of information processing, thereby shedding light on the mechanisms underlying ADHD.
Affiliation(s)
- Xin Li
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou 215123, China
- Shizhong Cai
- Department of Child and Adolescent Healthcare, Children's Hospital of Soochow University, Suzhou 215025, China
- Yan Chen
- Department of Child and Adolescent Healthcare, Children's Hospital of Soochow University, Suzhou 215025, China
- Xiaoming Tian
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Suzhou University of Science and Technology, Suzhou 215011, China
- Aijun Wang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou 215123, China
2
Scheller M, Fang H, Sui J. Self as a prior: The malleability of Bayesian multisensory integration to social salience. Br J Psychol 2024; 115:185-205. [PMID: 37747452] [DOI: 10.1111/bjop.12683]
Abstract
Our everyday perceptual experiences are grounded in the integration of information within and across our senses. Because of this direct behavioural relevance, cross-modal integration retains a degree of contextual flexibility, extending even to social relevance. However, how social relevance modulates cross-modal integration remains unclear. To investigate possible mechanisms, Experiment 1 tested the principles of audio-visual integration for numerosity estimation by deriving a Bayesian optimal observer model, with a perceptual prior estimated from empirical data, to explain perceptual biases. Such perceptual priors may shift towards locations of high salience in the stimulus space. Our results showed that the tendency to over- or underestimate numerosity, expressed in the frequency and strength of fission and fusion illusions, depended on the actual event numerosity. Experiment 2 replicated the effects of social relevance on multisensory integration reported by Scheller and Sui (2022, JEP:HPP), using a lower number of events, thereby favouring the opposite illusion through enhanced influence of the prior. In line with the idea that the self acts like a prior, the more frequently observed illusion (more malleable to prior influences) was modulated by self-relevance. Our findings suggest that the self can influence perception by acting like a prior in cue integration, biasing perceptual estimates towards areas of high self-relevance.
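The "prior in cue integration" idea in this abstract has a standard precision-weighted form: the posterior estimate is an average of each cue and the prior, weighted by inverse variance. A minimal sketch of that computation follows; the function name and all numeric values are illustrative assumptions, not the paper's fitted parameters.

```python
# Minimal sketch of a Bayesian observer combining cues with a prior.
# All numbers are illustrative, not the paper's fitted values.

def posterior_estimate(cues, sigmas, prior_mu, prior_sigma):
    """Precision-weighted combination of cue means with a Gaussian prior."""
    w_prior = 1.0 / prior_sigma**2
    num = prior_mu * w_prior
    den = w_prior
    for mu, sigma in zip(cues, sigmas):
        w = 1.0 / sigma**2       # reliability = inverse variance
        num += mu * w
        den += w
    return num / den

# One flash paired with two beeps: the more reliable auditory cue pulls
# the combined numerosity estimate above the single physical flash,
# mirroring a fission-like overestimation.
est = posterior_estimate(cues=[1.0, 2.0], sigmas=[0.8, 0.3],
                         prior_mu=1.5, prior_sigma=1.0)
```

Shifting `prior_mu` (e.g., towards self-relevant locations in the stimulus space) shifts `est` in the same direction, which is the sense in which a prior can bias perceptual estimates.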
Affiliation(s)
- Meike Scheller
- Department of Psychology, University of Aberdeen, Aberdeen, UK
- Department of Psychology, Durham University, Durham, UK
- Huilin Fang
- Department of Psychology, University of Aberdeen, Aberdeen, UK
- Jie Sui
- Department of Psychology, University of Aberdeen, Aberdeen, UK
3
Uno K, Yokosawa K. Does cross-modal correspondence modulate modality-specific perceptual processing? Study using timing judgment tasks. Atten Percept Psychophys 2024; 86:273-284. [PMID: 37932495] [DOI: 10.3758/s13414-023-02812-3]
Abstract
Cross-modal correspondences refer to associations between stimulus features across sensory modalities. Previous studies have shown that cross-modal correspondences modulate reaction times for detecting and identifying stimuli in one modality when uninformative stimuli from another modality are present. However, it is unclear whether such modulation reflects changes in modality-specific perceptual processing. We used two psychophysical timing judgment tasks to examine the effects of audiovisual correspondences on visual perceptual processing. In Experiment 1, we conducted a temporal order judgment (TOJ) task that asked participants to judge which of two visual stimuli presented at various stimulus onset asynchronies (SOAs) appeared first. In Experiment 2, we conducted a simultaneity judgment (SJ) task that asked participants to report whether the two visual stimuli were simultaneous or successive. We also presented an unrelated auditory stimulus, either simultaneously with or preceding the first visual stimulus, and manipulated the congruency between the audiovisual stimuli. Experiment 1 indicated that the points of subjective simultaneity (PSSs) between the two visual stimuli estimated in the TOJ task shifted according to the audiovisual correspondence between auditory pitch and the visual features of vertical location and size. However, these audiovisual correspondences did not affect the PSSs estimated using the SJ task in Experiment 2. The divergent results of the two tasks can be explained by a response bias triggered by audiovisual correspondence, which only the TOJ task involved. We conclude that audiovisual correspondence would not modulate visual perceptual timing and that changes in modality-specific perceptual processing might not underlie the congruency effects reported in previous studies.
Affiliation(s)
- Kyuto Uno
- Department of Psychology, Graduate School of Humanities and Sociology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan.
- Japan Society for the Promotion of Science, 5-3-1 Kojimachi, Chiyoda-ku, Tokyo, 102-0083, Japan.
- Department of Psychology, Faculty of Human Sciences, Sophia University, 7-1 Kioi-cho, Chiyoda-ku, Tokyo, 102-8554, Japan.
- Kazuhiko Yokosawa
- Department of Psychology, Graduate School of Humanities and Sociology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
- Tsukuba Gakuin University, 3-1 Azuma, Tsukuba-shi, Ibaraki, 305-0031, Japan
4
Jones SA, Noppeney U. Multisensory Integration and Causal Inference in Typical and Atypical Populations. Adv Exp Med Biol 2024; 1437:59-76. [PMID: 38270853] [DOI: 10.1007/978-981-99-7611-9_4]
Abstract
Multisensory perception is critical for effective interaction with the environment, but human responses to multisensory stimuli vary across the lifespan and appear changed in some atypical populations. In this review chapter, we consider multisensory integration within a normative Bayesian framework. We begin by outlining the complex computational challenges of multisensory causal inference and reliability-weighted cue integration, and discuss whether healthy young adults behave in accordance with normative Bayesian models. We then compare their behaviour with various other human populations (children, older adults, and those with neurological or neuropsychiatric disorders). In particular, we consider whether the differences seen in these groups are due only to changes in their computational parameters (such as sensory noise or perceptual priors), or whether the fundamental computational principles (such as reliability weighting) underlying multisensory perception may also be altered. We conclude by arguing that future research should aim explicitly to differentiate between these possibilities.
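The "reliability-weighted cue integration" this chapter discusses has a standard normative prediction: the fused estimate weights each cue by its reliability (inverse variance), and the fused variance is lower than either unimodal variance. A short sketch under the forced-fusion assumption; the function name and values are illustrative, not taken from the chapter.

```python
# Normative reliability-weighted (inverse-variance) fusion of a visual
# and an auditory cue. Illustrative values only.

def fuse(mu_v, sigma_v, mu_a, sigma_a):
    w_v = sigma_a**2 / (sigma_v**2 + sigma_a**2)   # weight on vision
    mu_av = w_v * mu_v + (1 - w_v) * mu_a          # fused estimate
    var_av = (sigma_v**2 * sigma_a**2) / (sigma_v**2 + sigma_a**2)
    return mu_av, var_av**0.5

# Vision is here the more reliable cue, so the fused estimate sits
# nearer the visual value, and the fused noise is below either cue's.
mu_av, sigma_av = fuse(mu_v=0.0, sigma_v=1.0, mu_a=2.0, sigma_a=2.0)
```

Testing whether observers' bimodal variance actually drops as this formula predicts is one common way of asking whether a population integrates "optimally"; changes in the fitted sigmas versus violations of the weighting rule itself correspond to the parameter-versus-principle distinction the chapter draws.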
Affiliation(s)
- Samuel A Jones
- Department of Psychology, Nottingham Trent University, Nottingham, UK.
- Uta Noppeney
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
5
Maynes R, Faulkner R, Callahan G, Mims CE, Ranjan S, Stalzer J, Odegaard B. Metacognitive awareness in the sound-induced flash illusion. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220347. [PMID: 37545312] [PMCID: PMC10404924] [DOI: 10.1098/rstb.2022.0347]
Abstract
Hundreds (if not thousands) of multisensory studies provide evidence that the human brain can integrate temporally and spatially discrepant stimuli from distinct modalities into a singular event. This process of multisensory integration is usually portrayed in the scientific literature as contributing to our integrated, coherent perceptual reality. However, missing from this account is an answer to a simple question: how do confidence judgements compare between multisensory information that is integrated across multiple sources and multisensory information that comes from a single, congruent source in the environment? In this paper, we use the sound-induced flash illusion to investigate whether confidence judgements are similar across multisensory conditions in which the numbers of auditory and visual events are the same versus different. Results showed that congruent audiovisual stimuli produced higher confidence than incongruent audiovisual stimuli, even when the perceptual report was matched across the two conditions. Integrating these behavioural findings with recent neuroimaging and theoretical work, we discuss the role that prefrontal cortex may play in metacognition, multisensory causal inference and sensory source monitoring in general. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Randolph Maynes
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
- Ryan Faulkner
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
- Grace Callahan
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
- Callie E. Mims
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
- Psychology Department, University of South Alabama, Mobile, AL 36688, USA
- Saurabh Ranjan
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
- Justine Stalzer
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
- Brian Odegaard
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
6
Li X, Tang X, Yang J, Wang A, Zhang M. Visual adaptation changes the susceptibility to the fission illusion. Atten Percept Psychophys 2023; 85:2046-2055. [PMID: 36949258] [DOI: 10.3758/s13414-023-02686-5]
Abstract
The sound-induced flash illusion (SiFI) occurs when participants incorrectly perceive the number of visual flashes as equal to the number of auditory beeps when the two are presented within 100 ms. Although previous studies found that repetition suppression can reduce an individual's perceptual sensitivity to the SiFI, there is not yet a consensus as to how visual adaptation affects the SiFI. In the present study, we added prolonged adapting visual stimuli prior to the presentation of audiovisual stimuli to investigate whether the bottom-up factor of adaptation affects the SiFI. The adapting stimuli consisted of one or two identical visual stimuli presented continuously for 2 minutes, followed by the audiovisual stimuli. Both adaptation conditions showed SiFI effects. For the fission illusion, accuracy after adapting to double flashes was significantly lower than after adapting to a single flash. Our analyses indicated that such a pattern could be attributed to a lower d' after adapting to double flashes than after adapting to a single flash. For the fusion illusion, however, accuracy, discriminability, and criterion did not differ significantly between the two adaptation conditions, likely because of the instability of the fusion illusion. Thus, the present study indicated that the reduced perceptual sensitivity resulting from visual adaptation could enhance the fission illusion in multisensory integration.
Affiliation(s)
- Xin Li
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China
- Xiaoyu Tang
- School of Psychology, Liaoning Normal University, Dalian, China
- Jiajia Yang
- Applied Brain Science Lab, Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Aijun Wang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China
- Ming Zhang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
7
Stanley BM, Chen YC, Maurer D, Lewis TL, Shore DI. Developmental changes in audiotactile event perception. J Exp Child Psychol 2023; 230:105629. [PMID: 36731280] [DOI: 10.1016/j.jecp.2023.105629]
Abstract
The fission and fusion illusions provide measures of multisensory integration. The sound-induced tap fission illusion occurs when a tap is paired with two distractor sounds, resulting in the perception of two taps; the sound-induced tap fusion illusion occurs when two taps are paired with a single sound, resulting in the perception of a single tap. Using these illusions, we measured integration in three groups of children (9-, 11-, and 13-year-olds) and compared them with a group of adults. Based on accuracy, we derived a measure of magnitude of illusion and used a signal detection analysis to estimate perceptual discriminability and decisional criterion. All age groups showed a significant fission illusion, whereas only the three groups of children showed a significant fusion illusion. When compared with adults, the 9-year-olds showed larger fission and fusion illusions (i.e., reduced discriminability and greater bias), whereas the 11-year-olds were adult-like for fission but showed some differences for fusion: significantly worse discriminability and marginally greater magnitude and criterion. The 13-year-olds were adult-like on all measures. Based on the pattern of data, we speculate that the developmental trajectories for fission and fusion differ. We discuss these developmental results in the context of three non-mutually exclusive theoretical frameworks: sensory dominance, maximum likelihood estimation, and causal inference.
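The signal detection quantities this abstract reports (perceptual discriminability d' and decisional criterion c) are standardly computed from hit and false-alarm rates via the inverse normal CDF. A minimal sketch; the rates below are made up for demonstration and are not the study's data.

```python
# d' and criterion c from hit and false-alarm rates, e.g. "two taps"
# responses on real two-tap trials vs. on illusion (one-tap) trials.
from statistics import NormalDist

def d_prime_and_c(hit_rate, fa_rate):
    z = NormalDist().inv_cdf          # inverse standard normal CDF
    d = z(hit_rate) - z(fa_rate)      # discriminability
    c = -0.5 * (z(hit_rate) + z(fa_rate))  # response criterion
    return d, c

# Illustrative rates only (extreme rates of 0 or 1 would need a
# correction before the z-transform).
d, c = d_prime_and_c(hit_rate=0.85, fa_rate=0.30)
```

In this framework, a larger illusion corresponds to a lower d' (reduced discriminability) and/or a shifted c (greater bias), which is how the age-group comparisons in the abstract are expressed.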
Affiliation(s)
- Brendan M Stanley
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4K1, Canada
- Yi-Chuan Chen
- Department of Medicine, Mackay Medical College, New Taipei City 252, Taiwan
- Daphne Maurer
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4K1, Canada
- Terri L Lewis
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4K1, Canada
- David I Shore
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4K1, Canada
- Multisensory Perception Laboratory, Division of Multisensory Mind Inc., Hamilton, Ontario L8S 4K1, Canada
8
O'Dowd A, Hirst RJ, Setti A, Donoghue OA, Kenny RA, Newell FN. The temporal precision of audiovisual integration is associated with longitudinal fall incidents but not sensorimotor fall risk in older adults. Sci Rep 2023; 13:7167. [PMID: 37137879] [PMCID: PMC10156851] [DOI: 10.1038/s41598-023-32404-y]
Abstract
Sustained multisensory integration over long inter-stimulus time delays is typically found in older adults, particularly those with a history of falls. However, the extent to which the temporal precision of audiovisual integration is associated with longitudinal fall or fall-risk trajectories is unknown. A large sample of older adults (N = 2319) was grouped into longitudinal trajectories of self-reported fall incidents (i.e., decrease, stable, or increase in number) and, separately, by performance on a standard, objective measure of fall risk, the Timed Up and Go (TUG; stable, moderate decline, severe decline). Multisensory integration was measured once as susceptibility to the sound-induced flash illusion (SIFI) across three stimulus onset asynchronies (SOAs): 70 ms, 150 ms and 230 ms. Older adults with an increasing fall number showed a significantly different pattern of performance on the SIFI than non-fallers, depending on age: for adults with increasing incidents of falls, those aged 53-59 years showed a much smaller difference in illusion susceptibility at 70 ms versus 150 ms than those aged 70+ years. In contrast, non-fallers showed a more comparable difference between these SOA conditions across age groups. There was no association between TUG performance trajectories and SIFI susceptibility. These findings suggest that a fall event is associated with distinct temporal patterns of multisensory integration in ageing and have implications for our understanding of the mechanisms underpinning brain health in older age.
Affiliation(s)
- Alan O'Dowd
- School of Psychology and Institute of Neuroscience, Trinity College Green, Dublin 2, D02 PN40, Ireland.
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland.
- Rebecca J Hirst
- School of Psychology and Institute of Neuroscience, Trinity College Green, Dublin 2, D02 PN40, Ireland
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland
- Annalisa Setti
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland
- School of Applied Psychology, University College Cork, Cork, Ireland
- Orna A Donoghue
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland
- Rose Anne Kenny
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland
- Mercer Institute for Successful Ageing, St. James Hospital, Dublin, Ireland
- Fiona N Newell
- School of Psychology and Institute of Neuroscience, Trinity College Green, Dublin 2, D02 PN40, Ireland
9
Malouka S, Loria T, Crainic V, Thaut MH, Tremblay L. Auditory cueing facilitates temporospatial accuracy of sequential movements. Hum Mov Sci 2023; 89:103087. [PMID: 37060619] [DOI: 10.1016/j.humov.2023.103087]
Abstract
Effectively executing goal-directed behaviours requires both temporal and spatial accuracy. Previous work has shown that providing auditory cues enhances the timing of upper-limb movements. Interestingly, alternate work has shown beneficial effects of multisensory cueing (i.e., combined audiovisual) on temporospatial motor control. As a result, it is not clear whether adding visual to auditory cues can enhance the temporospatial control of sequential upper-limb movements specifically. The present study utilized a sequential pointing task to investigate the effects of auditory, visual, and audiovisual cueing on temporospatial errors. Eighteen participants performed pointing movements to five targets representing short, intermediate, and large movement amplitudes. Five isochronous auditory, visual, or audiovisual priming cues were provided to specify an equal movement duration for all amplitudes prior to movement onset. Movement time errors were then computed as the difference between actual and predicted movement times specified by the sensory cues, yielding delta movement time errors (ΔMTE). It was hypothesized that auditory-based (i.e., auditory and audiovisual) cueing would yield lower movement time errors compared to visual cueing. The results showed that providing auditory relative to visual priming cues alone reduced ΔMTE particularly for intermediate amplitude movements. The results further highlighted the beneficial impact of unimodal auditory cueing for improving visuomotor control in the absence of significant effects for the multisensory audiovisual condition.
Affiliation(s)
- Selina Malouka
- Faculty of Kinesiology and Physical Education, University of Toronto, Toronto, Canada.
- Tristan Loria
- Music and Health Research Collaboratory (MaHRC), Faculty of Music, University of Toronto, Toronto, Canada
- Valentin Crainic
- Faculty of Kinesiology and Physical Education, University of Toronto, Toronto, Canada
- Michael H Thaut
- Music and Health Research Collaboratory (MaHRC), Faculty of Music, University of Toronto, Toronto, Canada
- Luc Tremblay
- Faculty of Kinesiology and Physical Education, University of Toronto, Toronto, Canada
10
Zhu H, Tang X, Chen T, Yang J, Wang A, Zhang M. Audiovisual illusion training improves multisensory temporal integration. Conscious Cogn 2023; 109:103478. [PMID: 36753896] [DOI: 10.1016/j.concog.2023.103478]
Abstract
When we perceive external physical stimuli from the environment, the brain must remain somewhat flexible to unaligned stimuli within a specific range, as multisensory signals are subject to different transmission and processing delays. Recent studies have shown that the width of the temporal binding window (TBW) can be reduced by perceptual learning. To date, however, the vast majority of studies examining the mechanisms of perceptual learning have focused on experience-dependent effects, and no consensus has been reached on its relationship to the perceptual changes induced by audiovisual illusions. Sound-induced flash illusion (SiFI) training with feedback reliably improves perceptual sensitivity. The present study used the classic auditory-dominated SiFI paradigm with feedback training to investigate the effect of 5-day SiFI training on multisensory temporal integration, as evaluated by a simultaneity judgment (SJ) task and a temporal order judgment (TOJ) task. We demonstrate that audiovisual illusion training enhances the precision of multisensory temporal integration in two ways: (i) the point of subjective simultaneity (PSS) shifts towards physical simultaneity (0 ms), and (ii) the TBW narrows. The results are consistent with a Bayesian model of causal inference, suggesting that perceptual learning reduces susceptibility to the SiFI while improving the precision of audiovisual temporal estimation.
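The PSS and TBW measures reported here are typically estimated by fitting a peaked curve to the proportion of "simultaneous" responses at each SOA in an SJ task. A rough illustrative sketch of that idea, using a Gaussian profile and a simple grid search; the SOAs, response proportions, and grid ranges are invented, and this is not the authors' analysis code.

```python
# Estimate PSS (peak location) and a TBW proxy (Gaussian width) from
# made-up simultaneity-judgment data by least-squares grid search.
import math

soas = [-300, -200, -100, 0, 100, 200, 300]          # ms
p_sim = [0.05, 0.25, 0.70, 0.95, 0.75, 0.30, 0.08]   # P("simultaneous")

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

best = None
for mu in range(-100, 101, 5):          # candidate PSS values (ms)
    for sigma in range(40, 301, 5):     # candidate window widths (ms)
        err = sum((p - gauss(x, mu, sigma)) ** 2
                  for x, p in zip(soas, p_sim))
        if best is None or err < best[0]:
            best = (err, mu, sigma)

_, pss, tbw_width = best
```

Under this kind of fit, a training-related shift of `pss` towards 0 ms and a shrinking `tbw_width` would correspond to the two improvements the abstract describes.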
Affiliation(s)
- Haocheng Zhu
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Xiaoyu Tang
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Tingji Chen
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Jiajia Yang
- Applied Brain Science Lab, Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Aijun Wang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Ming Zhang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
11
Sound-induced flash illusions at different spatial locations were affected by personality traits. Atten Percept Psychophys 2023; 85:463-473. [PMID: 36539573] [DOI: 10.3758/s13414-022-02638-5]
Abstract
The sound-induced flash illusion (SiFI) is an auditory-dominated effect in which observers misperceive the number of flashes because of simultaneously presented beeps; it comprises fission and fusion illusions. Although several individual differences have been found in the SiFI, little is known about the effect of personality traits. In the present study, to better approximate the real world, we presented flashes in near space with beeps in far space (Vnear_Afar) and flashes in far space with beeps in near space (Vfar_Anear). We collected Big Five questionnaire results and SiFI task performance from 103 participants to investigate how trait levels affected SiFI performance in terms of accuracy, d' and c. The results show that all five personality traits affected the SiFI to different degrees, and that different traits played different roles in the fission and fusion illusions. The high-agreeableness group was more prone to the fission illusion, and its report criteria were less strict. The report criteria of the low-neuroticism group were stricter for the fusion illusion. The extraversion, conscientiousness and low-openness groups were more prone to the fusion illusion in the Vnear_Afar condition than in the Vfar_Anear condition. The study indicates that personality traits are important but easily overlooked factors in multisensory illusion, and that their effects might differ between the fission and fusion illusions.
12
Long-term Tai Chi training reduces the fusion illusion in older adults. Exp Brain Res 2023; 241:517-526. [PMID: 36611123] [DOI: 10.1007/s00221-023-06544-6]
Abstract
The sound-induced flash illusion (SiFI) is an auditory-dominated audiovisual integration phenomenon that can serve as a reliable indicator of audiovisual integration. Although previous studies have found that Tai Chi exercise promotes cognitive processing such as executive function, its effect on early perceptual processing has yet to be investigated. This study used the classic SiFI paradigm to investigate the effects of long-term Tai Chi exercise on multisensory integration in older adults, comparing older adults with long-term Tai Chi experience with those with long-term walking experience. The results showed that the accuracy of the Tai Chi group was higher than that of the control group under the fusion illusion condition, mainly due to increased perceptual sensitivity to flashes. However, there was no significant difference between the two groups in the fission illusion. These results indicate that the fission and fusion illusions were affected differently by Tai Chi exercise, a difference attributable to participants' flash discriminability. The present study provides preliminary evidence that long-term Tai Chi exercise improves older adults' multisensory integration at the stage of early perceptual processing.
13
Chang C, Wang E, Yang J, Luan X, Wang A, Zhang M. Differences in eccentricity for sound-induced flash illusion in four visual fields. Perception 2023; 52:56-73. [PMID: 36397675] [DOI: 10.1177/03010066221136670]
Abstract
A sound-induced flash illusion (SiFI) is a multisensory illusion dominated by auditory stimuli, in which the individual perceives that the number of visual flashes is equal to the number of auditory stimuli when visual flashes are presented along with an unequal number of auditory stimuli. Although the mechanisms underlying fission and fusion illusions have been documented, there is not yet a consensus on how they vary according to the different eccentricities. In the present study, by incorporating the classic SiFI paradigm into four different eccentricities, we aimed to investigate whether the SiFI varies under the different eccentricities. The results showed that the fission illusion varied significantly across the four eccentricities, with the perifovea (7°) and peripheral (11°) illusions being greater than the fovea and parafovea (3°) illusions. In contrast, the fusion illusion did not vary significantly across the four eccentricities. Our findings revealed that SiFI was affected by different visual fields and that there were differences between the fission and the fusion illusions. Furthermore, by examining the SiFI of eccentricity across visual fields, this study also suggests that bottom-up factors affect the SiFI.
Affiliation(s)
- Erlei Wang
- The Second Affiliated Hospital of Soochow University, China
- Ming Zhang
- Soochow University, China; Okayama University, Japan
14
He Y, Yang T, He C, Sun K, Guo Y, Wang X, Bai L, Xue T, Xu T, Guo Q, Liao Y, Liu X, Wu S. Effects of audiovisual interactions on working memory: Use of the combined N-back + Go/NoGo paradigm. Front Psychol 2023; 14:1080788. [PMID: 36874804] [PMCID: PMC9982107] [DOI: 10.3389/fpsyg.2023.1080788]
Abstract
Background: Approximately 94% of sensory information acquired by humans originates from the visual and auditory channels. Such information can be temporarily stored and processed in working memory, but this system has limited capacity. Working memory plays an important role in higher cognitive functions and is controlled by central executive function. Therefore, elucidating the influence of the central executive function on information processing in working memory, such as in audiovisual integration, is of great scientific and practical importance.
Purpose: This study used a paradigm combining N-back and Go/NoGo tasks, with simple Arabic numerals as stimuli, to investigate the effects of cognitive load (modulated by varying the magnitude of N) and audiovisual integration on the central executive function of working memory, as well as their interaction.
Methods: Sixty college students aged 17-21 years were enrolled and performed both unimodal and bimodal tasks to evaluate the central executive function of working memory. The order of the three cognitive tasks was pseudorandomized, and a Latin square design was used to account for order effects. Working memory performance, i.e., reaction time and accuracy, was compared between unimodal and bimodal tasks with repeated-measures analysis of variance (ANOVA).
Results: As cognitive load increased, the presence of auditory stimuli interfered with visual working memory with a moderate to large effect size; similarly, the presence of visual stimuli interfered with auditory working memory with a moderate to large effect size.
Conclusion: Our study supports the theory of competing resources, i.e., that visual and auditory information interfere with each other and that the magnitude of this interference is primarily related to cognitive load.
Affiliation(s)
- Yang He
- Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
- Tianqi Yang
- Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
- Chunyan He
- Department of Nursing, Fourth Military Medical University, Xi'an, China
- Kewei Sun
- Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
- Yaning Guo
- Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
- Xiuchao Wang
- Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
- Lifeng Bai
- Faculty of Humanities and Social Sciences, Aviation University of Air Force, Changchun, China
- Ting Xue
- Faculty of Humanities and Social Sciences, Aviation University of Air Force, Changchun, China
- Tao Xu
- Psychology Section, Secondary Sanatorium of Air Force Healthcare Center for Special Services, Hangzhou, China
- Qingjun Guo
- Psychology Section, Secondary Sanatorium of Air Force Healthcare Center for Special Services, Hangzhou, China
- Yang Liao
- Air Force Medical Center, Air Force Medical University, Beijing, China
- Xufeng Liu
- Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
- Shengjun Wu
- Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
15
Zhou H, Liu X, Yu J, Yue C, Wang A, Zhang M. Compensation Mechanisms May Not Always Account for Enhanced Multisensory Illusion in Older Adults: Evidence from Sound-Induced Flash Illusion. Brain Sci 2022; 12:1418. [PMID: 36291351] [PMCID: PMC9599837] [DOI: 10.3390/brainsci12101418]
Abstract
The sound-induced flash illusion (SiFI) is a typical auditory dominance phenomenon in multisensory illusion. Although a number of studies have explored the SiFI in terms of age-related effects, the reasons for the enhanced SiFI in older adults are still controversial. In the present study, older and younger adults with equal visual discrimination were selected to explore age differences in SiFI effects and to explore neural indicators using resting-state functional magnetic resonance imaging (rs-fMRI) signals. A correlation analysis was calculated to examine the relationship between regional homogeneity (ReHo) and the SiFI. The results showed that both younger and older adults experienced significant fission and fusion illusions, and the fission illusions of older adults were greater than those of younger adults. In addition, our results showed that ReHo values of the left middle frontal gyrus (MFG), the right inferior frontal gyrus (IFG) and the right superior frontal gyrus (SFG) were significantly positively correlated with the SiFI in older adults. More importantly, the comparison between older and younger adults showed that ReHo values of the right superior temporal gyrus (STG) decreased in older adults, and this was independent of the SiFI. The results indicated that when there was no difference in unisensory ability, the enhancement of multisensory illusion in older adults may not always be explained by compensation mechanisms.
Affiliation(s)
- Heng Zhou
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou 215123, China
- Xiaole Liu
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou 215123, China
- Junming Yu
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou 215123, China
- Chunlin Yue
- School of Physical Education and Sport Science, Soochow University, Suzhou 215021, China
- Aijun Wang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou 215123, China
- Ming Zhang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou 215123, China
16
Dual counterstream architecture may support separation between vision and predictions. Conscious Cogn 2022; 103:103375. [DOI: 10.1016/j.concog.2022.103375]
17
Sound-induced flash illusion is modulated by the depth of auditory stimuli: Evidence from younger and older adults. Atten Percept Psychophys 2022; 84:2040-2050. [DOI: 10.3758/s13414-022-02537-9]
18
The magnitude of the sound-induced flash illusion does not increase monotonically as a function of visual stimulus eccentricity. Atten Percept Psychophys 2022; 84:1689-1698. [PMID: 35562629] [PMCID: PMC9106326] [DOI: 10.3758/s13414-022-02493-4]
Abstract
The sound-induced flash illusion (SIFI) occurs when a rapidly presented visual stimulus is accompanied by two auditory stimuli, creating the illusory percept of two visual stimuli. While much research has focused on how the temporal proximity of the audiovisual stimuli impacts susceptibility to the illusion, comparatively less research has focused on the impact of spatial manipulations. Here, we aimed to assess whether manipulating the eccentricity of visual flash stimuli altered the properties of the temporal binding window associated with the SIFI. Twenty participants were required to report whether they perceived one or two flashes that were concurrently presented with one or two beeps. Visual stimuli were presented at one of four different retinal eccentricities (2.5, 5, 7.5, or 10 degrees below fixation) and audiovisual stimuli were separated by one of eight stimulus-onset asynchronies. In keeping with previous findings, increasing stimulus-onset asynchrony between the auditory and visual stimuli led to a marked decrease in susceptibility to the illusion allowing us to estimate the width and amplitude of the temporal binding window. However, varying the eccentricity of the visual stimulus had no effect on either the width or the peak amplitude of the temporal binding window, with a similar pattern of results observed for both the “fission” and “fusion” variants of the illusion. Thus, spatial manipulations of the audiovisual stimuli used to elicit the SIFI appear to have a weaker effect on the integration of sensory signals than temporal manipulations, a finding which has implications for neuroanatomical models of multisensory integration.
19
De Winne J, Devos P, Leman M, Botteldooren D. With No Attention Specifically Directed to It, Rhythmic Sound Does Not Automatically Facilitate Visual Task Performance. Front Psychol 2022; 13:894366. [PMID: 35756201] [PMCID: PMC9226390] [DOI: 10.3389/fpsyg.2022.894366]
Abstract
In a century where humans and machines, powered by artificial intelligence or not, increasingly work together, it is of interest to understand human processing of multi-sensory stimuli in relation to attention and working memory. This paper explores whether and when supporting visual information with rhythmic auditory stimuli can optimize multi-sensory information processing. In turn, this can make the interaction between humans, or between machines and humans, more engaging, rewarding and activating. For this purpose, a novel working memory paradigm was developed in which participants are presented with a series of five target digits randomly interleaved with five distractor digits. Their goal is to remember the target digits and recall them orally. Depending on the condition, support is provided by audio and/or rhythm. Sound was expected to improve performance, with different effects for rhythmic and non-rhythmic sound, and some variability was expected across participants. The experimental data were analyzed with classical statistics, and predictive models were also developed to predict outcomes from a range of input variables related to the experiment and the participant. The effect of auditory support was confirmed, but no difference was observed between rhythmic and non-rhythmic sounds. Overall performance was indeed affected by individual differences, such as visual dominance or perceived task difficulty. Surprisingly, music education did not significantly affect performance and even tended toward a negative effect. To better understand the underlying attentional processes, future work should also record brain activation data, e.g., by means of electroencephalography (EEG).
Affiliation(s)
- Jorg De Winne
- Department of Information Technology, WAVES, Ghent University, Ghent, Belgium; Department of Art, Music and Theater Studies, Institute for Psychoacoustics and Electronic Music (IPEM), Ghent University, Ghent, Belgium
- Paul Devos
- Department of Information Technology, WAVES, Ghent University, Ghent, Belgium
- Marc Leman
- Department of Art, Music and Theater Studies, Institute for Psychoacoustics and Electronic Music (IPEM), Ghent University, Ghent, Belgium
- Dick Botteldooren
- Department of Information Technology, WAVES, Ghent University, Ghent, Belgium
20
Yu G, Liu C, Liu X, Wang A, Zhang M. Reward reduces the fission illusion in the sound-induced flash illusion. Perception 2022; 51:388-402. [DOI: 10.1177/03010066221093479]
Abstract
Pairing a single visual stimulus with multiple auditory stimuli leads to the illusory perception of multiple visual stimuli, which is known as the sound-induced flash illusion (SIFI). The present study adopted the classic SIFI paradigm to investigate whether value-associated tasks could affect the SIFI. By adjusting the sequence of reward and nonreward conditions, we also examined the effect of reward history on the SIFI. The results showed that the fission illusion was reduced when associated with momentary reward, with significantly higher accuracy and discriminability than in the nonreward condition. However, the fusion illusion was not affected by momentary reward, likely because the fusion illusion is not as stable as the fission illusion and disappeared across different trials and conditions. Moreover, the effect of reward history in the present study was not as robust as previous studies have suggested, indicating that the effect of sound on the perceptual representation of visual stimuli is strong and robust to reward history. These findings demonstrate that reward can reduce the SIFI and broaden the existing dichotomy of the SIFI. New evidence for the operation of value-driven attention mechanisms is also provided, suggesting that value-driven attention operates across multiple sensory systems.
Affiliation(s)
- Gaoxin Yu
- Department of Psychology, Soochow University, Suzhou, China
- Chunmei Liu
- Jiangsu Provincial Key Constructive Laboratory for Big Data of Psychology and Cognitive Science, Yancheng Teachers University, Yancheng, China
- Xiaole Liu
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Aijun Wang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China; Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Ming Zhang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China; Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
21
Noguchi Y. Individual differences in beta frequency correlate with the audio-visual fusion illusion. Psychophysiology 2022; 59:e14041. [PMID: 35274314] [DOI: 10.1111/psyp.14041]
Abstract
Presenting one flash with two beeps induces a perception of two flashes (audio-visual [AV] fission illusion), while presenting two flashes with one beep induces a perception of one flash (fusion illusion). Although previous studies showed a relationship between the frequency of the alpha rhythm (alpha cycle) and one's susceptibility to the fission illusion, the relationship between neural oscillations and the fusion illusion is unknown. Using electroencephalography, here I investigated the frequency of oscillatory signals in the pre-stimulus period and found a significant correlation between the beta rhythm and the fusion illusion; specifically, participants with a lower beta frequency showed a larger fusion illusion. These data indicate two separate time windows of AV integration in the human brain, one defined by the alpha cycle (fission) and another defined by the beta cycle (fusion).
Affiliation(s)
- Yasuki Noguchi
- Department of Psychology, Graduate School of Humanities, Kobe University, Kobe, Japan
22
Marsicano G, Cerpelloni F, Melcher D, Ronconi L. Lower multisensory temporal acuity in individuals with high schizotypal traits: a web-based study. Sci Rep 2022; 12:2782. [PMID: 35177673] [PMCID: PMC8854550] [DOI: 10.1038/s41598-022-06503-1]
Abstract
Natural events are often multisensory, requiring the brain to combine information from the same spatial location and timing, across different senses. The importance of temporal coincidence has led to the introduction of the temporal binding window (TBW) construct, defined as the time range within which multisensory inputs are highly likely to be perceptually bound into a single entity. Anomalies in TBWs have been linked to confused perceptual experiences and inaccurate filtering of sensory inputs coming from different environmental sources. Indeed, larger TBWs have been associated with disorders such as schizophrenia and autism and are also correlated to a higher level of subclinical traits of these conditions in the general population. Here, we tested the feasibility of using a web-based version of a classic audio-visual simultaneity judgment (SJ) task with simple flash-beep stimuli in order to measure multisensory temporal acuity and its relationship with schizotypal traits as measured in the general population. Results show that: (i) the response distribution obtained in the web-based SJ task was strongly similar to those reported by studies carried out in controlled laboratory settings, and (ii) lower multisensory temporal acuity was associated with higher schizotypal traits in the “cognitive-perceptual” domains. Our findings reveal the possibility of adequately using a web-based audio-visual SJ task outside a controlled laboratory setting, available to a more diverse and representative pool of participants. These results provide additional evidence for a close relationship between lower multisensory acuity and the expression of schizotypal traits in the general population.
Affiliation(s)
- Gianluca Marsicano
- School of Psychology, Vita-Salute San Raffaele University, Milan, Italy; Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Filippo Cerpelloni
- Center for Mind/Brain Sciences and Department of Psychology and Cognitive Science, University of Trento, Rovereto, Italy; Laboratory of Biological Psychology, Department of Brain and Cognition, Leuven Brain Institute, KU Leuven, Leuven, Belgium; Institute of Research in Psychology (IPSY) & Institute of Neuroscience (IoNS), University of Louvain (UCLouvain), Leuven, Belgium
- David Melcher
- Center for Mind/Brain Sciences and Department of Psychology and Cognitive Science, University of Trento, Rovereto, Italy; Psychology Program, Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Luca Ronconi
- School of Psychology, Vita-Salute San Raffaele University, Milan, Italy; Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy
23
Rhythm and Reaching: The Influence of Rhythmic Auditory Cueing in a Goal-Directed Reaching Task With Adults Diagnosed With Cerebral Palsy. Adapt Phys Activ Q 2022; 39:1-16. [PMID: 34740992] [DOI: 10.1123/apaq.2021-0019]
Abstract
Improvements in functional reaching directly support improvements in independence. The addition of auditory inputs (e.g., music, rhythmic counting) may improve goal-directed reaching for individuals with cerebral palsy (CP). To effectively integrate auditory stimuli into adapted teaching and rehabilitation protocols, it is necessary to understand how auditory stimuli may enhance limb control. This study considered the influence of auditory stimuli during the planning or execution phases of goal-directed reaches. Adults (with CP = 10, without CP = 10) reached from a home switch to two targets. Three conditions were presented (no sound, sound before, and sound during), and three-dimensional movement trajectories were recorded. Reaction times were shorter for both groups in the sound before condition, while the group with CP also reached peak velocity relatively earlier in the sound before condition. The group with CP executed more consistent movements in both sound conditions. Sound presented before movement initiation improved both the planning and execution of reaching movements for adults with CP.
24
Kvamme TL, Sarmanlu M, Bailey C, Overgaard M. Neurofeedback Modulation of the Sound-induced Flash Illusion Using Parietal Cortex Alpha Oscillations Reveals Dependency on Prior Multisensory Congruency. Neuroscience 2021; 482:1-17. [PMID: 34838934] [DOI: 10.1016/j.neuroscience.2021.11.028]
Abstract
Spontaneous neural oscillations are key predictors of perceptual decisions to bind multisensory signals into a unified percept. Research links decreased alpha power in the posterior cortices to attention and audiovisual binding in the sound-induced flash illusion (SIFI) paradigm. This suggests that controlling alpha oscillations would be a way of controlling audiovisual binding. In the present feasibility study, we used MEG neurofeedback to train one group of subjects to increase left/right and another to increase right/left alpha power ratios in the parietal cortex. We tested for changes in audiovisual binding in a SIFI paradigm where flashes appeared in both hemifields. Results showed that the neurofeedback induced a significant asymmetry in alpha power for the left/right group, not seen for the right/left group. Corresponding asymmetry changes in audiovisual binding in illusion trials (with 2, 3, and 4 beeps paired with 1 flash) were not apparent. Exploratory analyses showed that neurofeedback training effects were present for illusion trials with the lowest numeric disparity (i.e., 2 beeps and 1 flash) only if the previous trial had high congruency (2 beeps and 2 flashes). Our data suggest that the relation between parietal alpha power (an index of attention) and its effect on audiovisual binding depends on the causal structure learned from the previous stimulus. The present results suggest that low alpha power biases observers toward audiovisual binding when they have learned that audiovisual signals originate from a common origin, consistent with a Bayesian causal inference account of multisensory perception.
Affiliation(s)
- Timo L Kvamme
- Cognitive Neuroscience Research Unit, CFIN/MINDLab, Aarhus University, Aarhus, Denmark
- Mesud Sarmanlu
- Cognitive Neuroscience Research Unit, CFIN/MINDLab, Aarhus University, Aarhus, Denmark
- Christopher Bailey
- Cognitive Neuroscience Research Unit, CFIN/MINDLab, Aarhus University, Aarhus, Denmark
- Morten Overgaard
- Cognitive Neuroscience Research Unit, CFIN/MINDLab, Aarhus University, Aarhus, Denmark
25
Lin Y, Ding H, Zhang Y. Unisensory and Multisensory Stroop Effects Modulate Gender Differences in Verbal and Nonverbal Emotion Perception. J Speech Lang Hear Res 2021; 64:4439-4457. [PMID: 34469179] [DOI: 10.1044/2021_jslhr-20-00338]
Abstract
Purpose: This study aimed to examine the Stroop effects of verbal and nonverbal cues and their relative impacts on gender differences in unisensory and multisensory emotion perception.
Method: Experiment 1 investigated how well 88 normal Chinese adults (43 women and 45 men) could identify emotions conveyed through face, prosody and semantics as three independent channels. Experiments 2 and 3 further explored gender differences during multisensory integration of emotion through a cross-channel (prosody-semantics) and a cross-modal (face-prosody-semantics) Stroop task, respectively, in which 78 participants (41 women and 37 men) were asked to selectively attend to one of the two or three communication channels.
Results: The integration of accuracy and reaction time data indicated that paralinguistic cues (i.e., face and prosody) of emotions were consistently more salient than linguistic ones (i.e., semantics) throughout the study. Additionally, women demonstrated advantages in processing all three types of emotional signals in the unisensory task, but only preserved their strengths in paralinguistic processing and showed greater Stroop effects of nonverbal cues on verbal ones during multisensory perception.
Conclusions: These findings demonstrate clear gender differences in verbal and nonverbal emotion perception that are modulated by sensory channels, which have important theoretical and practical implications. Supplemental Material: https://doi.org/10.23641/asha.16435599
Affiliation(s)
- Yi Lin
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences & Center for Neurobehavioral Development, University of Minnesota, Minneapolis
26
The impact of joint attention on the sound-induced flash illusions. Atten Percept Psychophys 2021; 83:3056-3068. [PMID: 34561815] [PMCID: PMC8550716] [DOI: 10.3758/s13414-021-02347-5]
Abstract
Humans coordinate their focus of attention with others, either by gaze following or by prior agreement. Though the effects of joint attention on perceptual and cognitive processing tend to be examined in purely visual environments, they should also appear in multisensory settings. According to a prevalent hypothesis, joint attention enhances visual information encoding and processing, over and above individual attention. If two individuals jointly attend to the visual components of an audiovisual event, this should affect the weighting of visual information during multisensory integration. We tested this prediction in this preregistered study using the well-documented sound-induced flash illusions, where the integration of an incongruent number of visual flashes and auditory beeps results in a single flash being seen as two (fission illusion) and two flashes as one (fusion illusion). Participants were asked to count flashes either alone or together, and were expected to be less prone to both fission and fusion illusions when they jointly attended to the visual targets. However, illusions were as frequent when people attended to the flashes alone as with someone else, even though they responded faster during joint attention. Our results reveal the limitations of the theory that joint attention enhances visual processing, as it does not affect temporal audiovisual integration.
27
Long-term training reduces the responses to the sound-induced flash illusion. Atten Percept Psychophys 2021; 84:529-539. [PMID: 34518970] [DOI: 10.3758/s13414-021-02363-5]
Abstract
The sound-induced flash illusion (SiFI) is a robust auditory-dominated multisensory integration phenomenon that is used as a reliable indicator of multisensory integration. Previous studies have indicated that the SiFI effect is correlated with perceptual sensitivity. However, to date, there is no consensus regarding how it changes with long-term training. The present study adopted the classic SiFI paradigm with feedback training to investigate the effect of a week of long-term training on the SiFI effect. Both the training group and the control group completed a pretest and a posttest before and after the perceptual training; however, only the training group completed 7 days of behavioral training. The results showed that (1) long-term training could reduce the fission and fusion illusions by improving perceptual sensitivity and (2) a "plateau effect" emerged during the training stage, with performance tending to stabilize by the fifth day. These findings demonstrate that the SiFI effect can be modified with long-term training by improving perceptual sensitivity, especially for the fission illusion. The present study thus extends perceptual training to the SiFI domain and provides evidence that the SiFI could be used in interventions to improve the efficiency of multisensory integration.
28
Wang A, Zhou H, Yu W, Zhang F, Sang H, Tang X, Zhang T, Zhang M. Repetition Suppression in Visual and Auditory Modalities Affects the Sound-Induced Flash Illusion. Perception 2021; 50:489-507. [PMID: 34034565] [DOI: 10.1177/03010066211018614]
Abstract
The sound-induced flash illusion (SiFI) refers to the illusion in which the perceived number of visual flashes equals the number of auditory sounds when the visual flashes are accompanied by an unequal number of auditory sounds presented within 100 ms. The effect of repetition suppression (RS), an adaptive effect caused by stimulus repetition, on the SiFI has not been investigated. Based on the classic SiFI paradigm, the present study investigated whether RS would affect the SiFI differently by adding preceding stimuli in the visual and auditory modalities prior to the appearance of the audiovisual stimuli. The results showed that the auditory RS effect on the SiFI varied with the number of preceding auditory stimuli. The hit rate was higher with two preceding auditory stimuli than with one in the fission illusion, but the number of preceding stimuli did not affect the size of the fusion illusion. However, visual RS had no effect on the size of the fission and fusion illusions. The present study suggested that RS could affect the SiFI, indicating that RS effects in different modalities differentially affect the magnitude of the SiFI. In the process of multisensory integration, the visual and auditory modalities had asymmetrical RS effects.
Affiliation(s)
- Wei Yu
- Changchun University of Chinese Medicine, China

29
Jagini KK. Temporal Binding in Multisensory and Motor-Sensory Contexts: Toward a Unified Model. Front Hum Neurosci 2021; 15:629437. [PMID: 33841117 PMCID: PMC8026855 DOI: 10.3389/fnhum.2021.629437] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2020] [Accepted: 02/18/2021] [Indexed: 11/13/2022] Open
Abstract
Our senses receive a manifold of sensory signals at any given moment in our daily lives. For a coherent and unified representation of information and precise motor control, our brain needs to temporally bind the signals emanating from a common causal event and segregate others. Traditionally, different mechanisms were proposed for the temporal binding phenomenon in multisensory and motor-sensory contexts. This paper reviews the literature on the temporal binding phenomenon in both multisensory and motor-sensory contexts and suggests future research directions for advancing the field. Moreover, by critically evaluating the recent literature, this paper suggests that common computational principles are responsible for the temporal binding in multisensory and motor-sensory contexts. These computational principles are grounded in the Bayesian framework of uncertainty reduction rooted in the Helmholtzian idea of unconscious causal inference.
Affiliation(s)
- Kishore Kumar Jagini
- Center for Cognitive and Brain Sciences, Indian Institute of Technology Gandhinagar, Gandhinagar, India

30
The development of visuotactile congruency effects for sequences of events. J Exp Child Psychol 2021; 207:105094. [PMID: 33714049 DOI: 10.1016/j.jecp.2021.105094] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2020] [Revised: 12/11/2020] [Accepted: 01/07/2021] [Indexed: 11/23/2022]
Abstract
Sensitivity to the temporal coherence of visual and tactile signals increases perceptual reliability and is evident during infancy. However, it is not clear how, or whether, bidirectional visuotactile interactions change across childhood. Furthermore, no study has explored whether viewing a body modulates how children perceive visuotactile sequences of events. Here, children aged 5-7 years (n = 19), 8 and 9 years (n = 21), and 10-12 years (n = 24) and adults (n = 20) discriminated the number of target events (one or two) in a task-relevant modality (touch or vision) and ignored distractors (one or two) in the opposing modality. While participants performed the task, an image of either a hand or an object was presented. Children aged 5-7 years and 8 and 9 years showed larger crossmodal interference from visual distractors when discriminating tactile targets than the converse. Across age groups, this was strongest when two visual distractors were presented with one tactile target, implying a "fission-like" crossmodal effect (perceiving one event as two events). There was no influence of visual context (viewing a hand or non-hand image) on visuotactile interactions for any age group. Our results suggest robust interference from discontinuous visual information on tactile discrimination of sequences of events during early and middle childhood. These findings are discussed with respect to age-related changes in sensory dominance, selective attention, and multisensory processing.
31
Audio-visual integration in noise: Influence of auditory and visual stimulus degradation on eye movements and perception of the McGurk effect. Atten Percept Psychophys 2020; 82:3544-3557. [PMID: 32533526 PMCID: PMC7788022 DOI: 10.3758/s13414-020-02042-x] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Seeing a talker’s face can aid audiovisual (AV) integration when speech is presented in noise. However, few studies have simultaneously manipulated auditory and visual degradation. We aimed to establish how degrading the auditory and visual signal affected AV integration. Where people look on the face in this context is also of interest; Buchan, Paré and Munhall (Brain Research, 1242, 162–171, 2008) found fixations on the mouth increased in the presence of auditory noise whilst Wilson, Alsius, Paré and Munhall (Journal of Speech, Language, and Hearing Research, 59(4), 601–615, 2016) found mouth fixations decreased with decreasing visual resolution. In Condition 1, participants listened to clear speech, and in Condition 2, participants listened to vocoded speech designed to simulate the information provided by a cochlear implant. Speech was presented in three levels of auditory noise and three levels of visual blurring. Adding noise to the auditory signal increased McGurk responses, while blurring the visual signal decreased McGurk responses. Participants fixated the mouth more on trials when the McGurk effect was perceived. Adding auditory noise led to people fixating the mouth more, while visual degradation led to people fixating the mouth less. Combined, the results suggest that modality preference and where people look during AV integration of incongruent syllables varies according to the quality of information available.
32
Exposure to first-person shooter videogames is associated with multisensory temporal precision and migraine incidence. Cortex 2020; 134:223-238. [PMID: 33291047 DOI: 10.1016/j.cortex.2020.10.009] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2020] [Revised: 10/03/2020] [Accepted: 10/19/2020] [Indexed: 02/06/2023]
Abstract
Adaptive interactions with the environment require optimal integration and segregation of sensory information. Yet, temporal misalignments in the presentation of visual and auditory stimuli may generate illusory phenomena such as the sound-induced flash illusion, in which a single flash paired with multiple auditory stimuli induces the perception of multiple illusory flashes. This phenomenon has been shown to be robust and resistant to feedback training. According to a Bayesian account, this is due to a statistically optimal combination of the signals operated by the nervous system. From this perspective, individual susceptibility to the illusion might be moulded through prolonged experience. For example, repeated exposure to the illusion and prolonged training sessions partially impact the reported illusion. Therefore, extensive and immersive audio-visual experience, such as first-person shooter videogames, should sharpen the individual capacity to correctly integrate multisensory information over time, leading to more veridical perception. We tested this hypothesis by comparing the temporal profile of the sound-induced illusion in a group of expert first-person shooter gamers and a group of non-players. In line with the hypotheses, gamers experienced significantly narrower windows of illusion (~87 ms) relative to non-players (~105 ms), leading to more veridical reports in gamers (~68%) relative to non-players (~59%). Moreover, following recent literature, we tested whether intensive audio-visual training in gamers is related to the incidence of migraine and found that migraine severity may be directly proportional to the time spent on videogames. Overall, these results suggest that continued training within audio-visual environments such as first-person shooter videogames improves temporal discrimination and sensory integration. This finding may pave the way for future therapeutic strategies based on self-administered multisensory training. On the other hand, the impact of intensive training on visual-related stress disorders, such as migraine incidence, should be taken into account as a risk factor during therapeutic planning.
33
Hirst RJ, McGovern DP, Setti A, Shams L, Newell FN. What you see is what you hear: Twenty years of research using the Sound-Induced Flash Illusion. Neurosci Biobehav Rev 2020; 118:759-774. [DOI: 10.1016/j.neubiorev.2020.09.006] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2020] [Revised: 07/06/2020] [Accepted: 09/03/2020] [Indexed: 01/17/2023]
34
Ręk P, Magrath RD. Visual displays enhance vocal duet production and the perception of coordination despite spatial separation of partners. Anim Behav 2020. [DOI: 10.1016/j.anbehav.2020.08.002] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
35
Asaoka R, Takeshima Y. Incongruent Audiovisual Inducer Information and Fission/Fusion Illusions. Percept Mot Skills 2020; 128:59-79. [PMID: 32990163 DOI: 10.1177/0031512520960989] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
In research studies on how people perceive simultaneously presented audiovisual information, researchers have often shown that the number of visual flashes participants perceive on a computer screen can be altered by varying the number of accompanying auditory, visual, or combined audiovisual cues or inducers. In the present study, we examined the effects of number-incongruent audiovisual inducer stimuli on the participants' perceived number of target flashes. We instructed 16 participants (eight males and eight females; mean age = 21.56 years, SD = 1.93) to report their perceived number of target flashes while ignoring the visual and auditory inducers. Across 18 different experimental conditions, we presented one or two target flashes in association with varied numbers (0, 1, 2) of auditory and visual inducer stimuli. In the condition with one target flash paired with one visual and two auditory inducers, the number of visual inducers (i.e., one) had a greater influence on the number of perceived target flashes than did the number of auditory inducers (i.e., two). Under all other number-incongruent audiovisual inducer conditions, the participants' perceived number of target flashes was influenced more by the number of auditory than the number of visual inducers. We discuss these findings in the context of perceptual grouping and perceptual temporal uncertainty.
Affiliation(s)
- Riku Asaoka
- Graduate School of Information Sciences, Tohoku University, Sendai, Japan

36
Wahn B, Rohe T, Gearhart A, Kingstone A, Sinnett S. Performing a task jointly enhances the sound-induced flash illusion. Q J Exp Psychol (Hove) 2020; 73:2260-2271. [PMID: 32698727 DOI: 10.1177/1747021820942687] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Our senses are stimulated continuously. Through multisensory integration, different sensory inputs may or may not be combined into a unitary percept. Simultaneous with this stimulation, people are frequently engaged in social interactions, but how multisensory integration and social processing interact is largely unknown. The present study investigated if, and how, the multisensory sound-induced flash illusion is affected by a social manipulation. In the sound-induced flash illusion, a participant typically receives one visual flash and two auditory beeps and is required to indicate the number of flashes perceived. Often, the auditory beeps alter the perception of the flashes such that the participant tends to perceive two flashes instead of one. We tested whether performing a flash counting task with a partner (confederate), who was required to indicate the number of presented beeps, would modulate this illusion. We found that the sound-induced flash illusion was perceived significantly more often when the flash counting task was performed with the confederate than when it was performed alone. However, this effect was no longer found when visual access between the two individuals was prevented. These findings, combined with previous results, suggest that performing a multisensory task jointly (in this case, an audiovisual task) lowers the extent to which an individual attends to visual information, which in turn affects the multisensory integration process.
Affiliation(s)
- Basil Wahn
- Department of Psychology, The University of British Columbia, Vancouver, British Columbia, Canada
- Tim Rohe
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany; Department of Psychology, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, Germany
- Anika Gearhart
- Department of Psychology, University of Hawai'i at Mānoa, Honolulu, HI, USA
- Alan Kingstone
- Department of Psychology, The University of British Columbia, Vancouver, British Columbia, Canada
- Scott Sinnett
- Department of Psychology, University of Hawai'i at Mānoa, Honolulu, HI, USA

37
Deploying attention to the target location of a pointing action modulates audiovisual processes at nontarget locations. Atten Percept Psychophys 2020; 82:3507-3520. [PMID: 32676805 DOI: 10.3758/s13414-020-02065-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The current study examined how the deployment of spatial attention at the onset of a pointing movement influenced audiovisual crossmodal interactions at the target of the pointing action and at nontarget locations. These interactions were quantified by measuring susceptibility to the fission illusion (i.e., reporting two visual flashes when one flash is paired with two auditory beeps) and the fusion illusion (i.e., reporting one flash when two flashes are paired with one beep). At movement onset, unimodal (auditory or visual) or bimodal (audiovisual) stimuli were presented either at the target of the pointing action or at an adjacent, nontarget location. In Experiment 1, perceptual accuracy within the unimodal and bimodal conditions was lower in the nontarget than in the target condition. The fission illusion was uninfluenced by target condition; however, the fusion illusion was more likely to be reported at the target than at the nontarget location. In Experiment 2, the stimuli from Experiment 1 were further presented at a location near where the eyes were fixated (i.e., congruent condition), where the hand was aiming (i.e., target), or in a location where neither the eyes were fixated nor the hand was aiming. The results yielded the greatest susceptibility to the fusion illusion when the visual location and movement end points were congruent relative to when either movement or fixation was incongruent. Although attention may facilitate the processing of unisensory and multisensory cues in general, attention might have the strongest influence on the audiovisual integration mechanisms that underlie the sound-induced fusion illusion.
38
Boyce WP, Lindsay A, Zgonnikov A, Rañó I, Wong-Lin K. Optimality and Limitations of Audio-Visual Integration for Cognitive Systems. Front Robot AI 2020; 7:94. [PMID: 33501261 PMCID: PMC7805627 DOI: 10.3389/frobt.2020.00094] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2019] [Accepted: 06/09/2020] [Indexed: 11/13/2022] Open
Abstract
Multimodal integration is an important process in perceptual decision-making. In humans, this process has often been shown to be statistically optimal, or near optimal: sensory information is combined in a fashion that minimizes the average error in the perceptual representation of stimuli. However, sometimes there are costs that come with this optimization, manifesting as illusory percepts. We review audio-visual facilitations and illusions that are products of multisensory integration, and the computational models that account for these phenomena. In particular, the same optimal computational model can lead to illusory percepts, and we suggest that further studies are needed to detect and mitigate these illusions as artifacts in artificial cognitive systems. We provide cautionary considerations for designing artificial cognitive systems with a view to avoiding such artifacts. Finally, we suggest avenues of research toward solutions to potential pitfalls in system design. We conclude that a detailed understanding of multisensory integration and the mechanisms behind audio-visual illusions can benefit the design of artificial cognitive systems.
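The "statistically optimal" combination discussed in this review is conventionally modeled as maximum-likelihood (reliability-weighted) fusion of independent Gaussian cues. A minimal sketch, with all stimulus values invented for illustration, shows both the benefit (reduced variance) and the cost (a discrepant cue still biases the fused percept):

```python
def optimal_fuse(mu_a, sigma_a, mu_v, sigma_v):
    """Reliability-weighted (maximum-likelihood) fusion of two
    independent Gaussian cues: each cue is weighted by its inverse
    variance, and the fused variance is smaller than either input's."""
    w_a = 1.0 / sigma_a ** 2
    w_v = 1.0 / sigma_v ** 2
    mu = (w_a * mu_a + w_v * mu_v) / (w_a + w_v)
    sigma = (1.0 / (w_a + w_v)) ** 0.5
    return mu, sigma

# Illustrative numbers: a precise visual cue (sigma 1) dominates a noisy,
# discrepant auditory cue (sigma 4), yet the auditory cue still pulls the
# fused estimate away from the true visual location -- the kind of cost
# that surfaces behaviorally as an illusion.
mu, sigma = optimal_fuse(mu_a=10.0, sigma_a=4.0, mu_v=0.0, sigma_v=1.0)
```

The fused estimate sits between the two cues, close to the more reliable one, and its standard deviation is below that of either cue alone.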
Affiliation(s)
- William Paul Boyce
- Intelligent Systems Research Centre, Ulster University, Magee Campus, Derry Londonderry, Northern Ireland, United Kingdom
- Anthony Lindsay
- Intelligent Systems Research Centre, Ulster University, Magee Campus, Derry Londonderry, Northern Ireland, United Kingdom
- Arkady Zgonnikov
- AiTech, Delft University of Technology, Delft, Netherlands
- Department of Cognitive Robotics, Faculty of Mechanical, Maritime, and Materials Engineering, Delft University of Technology, Delft, Netherlands
- Iñaki Rañó
- Intelligent Systems Research Centre, Ulster University, Magee Campus, Derry Londonderry, Northern Ireland, United Kingdom
- KongFatt Wong-Lin
- Intelligent Systems Research Centre, Ulster University, Magee Campus, Derry Londonderry, Northern Ireland, United Kingdom

39
Sound-induced flash illusion in elderly adults: Evidence from low-frequency fluctuation amplitudes in resting-state fMRI. ACTA PSYCHOLOGICA SINICA 2020. [DOI: 10.3724/sp.j.1041.2020.00823] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
40
Michaelis K, Erickson LC, Fama ME, Skipper-Kallal LM, Xing S, Lacey EH, Anbari Z, Norato G, Rauschecker JP, Turkeltaub PE. Effects of age and left hemisphere lesions on audiovisual integration of speech. BRAIN AND LANGUAGE 2020; 206:104812. [PMID: 32447050 PMCID: PMC7379161 DOI: 10.1016/j.bandl.2020.104812] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/21/2019] [Revised: 04/02/2020] [Accepted: 05/04/2020] [Indexed: 06/11/2023]
Abstract
Neuroimaging studies have implicated left temporal lobe regions in audiovisual integration of speech and inferior parietal regions in temporal binding of incoming signals. However, it remains unclear which regions are necessary for audiovisual integration, especially when the auditory and visual signals are offset in time. Aging also influences integration, but the nature of this influence is unresolved. We used a McGurk task to test audiovisual integration and sensitivity to the timing of audiovisual signals in two older adult groups: left hemisphere stroke survivors and controls. We observed a positive relationship between age and audiovisual speech integration in both groups, and an interaction indicating that lesions reduce sensitivity to timing offsets between signals. Lesion-symptom mapping demonstrated that damage to the left supramarginal gyrus and planum temporale reduces temporal acuity in audiovisual speech perception. This suggests that a process mediated by these structures identifies asynchronous audiovisual signals that should not be integrated.
Affiliation(s)
- Kelly Michaelis
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA
- Laura C Erickson
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Neuroscience Department, Georgetown University Medical Center, Washington DC, USA
- Mackenzie E Fama
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Department of Speech-Language Pathology & Audiology, Towson University, Towson, MD, USA
- Laura M Skipper-Kallal
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA
- Shihui Xing
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Department of Neurology, First Affiliated Hospital of Sun Yat-Sen University, Guangzhou, China
- Elizabeth H Lacey
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Research Division, MedStar National Rehabilitation Hospital, Washington DC, USA
- Zainab Anbari
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA
- Gina Norato
- Clinical Trials Unit, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD, USA
- Josef P Rauschecker
- Neuroscience Department, Georgetown University Medical Center, Washington DC, USA
- Peter E Turkeltaub
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Research Division, MedStar National Rehabilitation Hospital, Washington DC, USA

41
Moradi V, Kheirkhah K, Farahani S, Kavianpour I. Investigating the Effects of Hearing Loss and Hearing Aid Digital Delay on Sound-Induced Flash Illusion. J Audiol Otol 2020; 24:174-179. [PMID: 32575953 PMCID: PMC7575923 DOI: 10.7874/jao.2019.00507] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2019] [Accepted: 04/28/2020] [Indexed: 11/29/2022] Open
Abstract
Background and Objectives The integration of auditory and visual speech information improves speech perception; however, if the auditory input is disrupted by hearing loss, auditory and visual inputs cannot be fully integrated. Additionally, the temporal coincidence of auditory and visual inputs is a critically important factor in integrating the two senses, and hearing aids delay the acoustic pathway because the signal passes through digital signal processing. Therefore, this study aimed to investigate the effects of hearing loss and the hearing aid digital delay circuit on the sound-induced flash illusion. Subjects and Methods A total of 13 adults with normal hearing, 13 with mild to moderate hearing loss, and 13 with moderate to severe hearing loss were enrolled in this study. Subsequently, the sound-induced flash illusion test was conducted, and the results were analyzed. Results The results showed that hearing aid digital delay and hearing loss had no detrimental effect on the sound-induced flash illusion. Conclusions The transmission velocity and neural transduction rate of auditory inputs are decreased in patients with hearing loss; hence, auditory and visual information cannot be fully integrated. With a prescribed hearing aid, however, the transmission rate of the auditory input was approximately normal. Thus, it can be concluded that the processing delay in the hearing aid circuit is insufficient to disrupt the integration of auditory and visual information.
Affiliation(s)
- Vahid Moradi
- Department of Audiology, School of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran
- Kiana Kheirkhah
- Department of Biomedical Engineering, School of Electrical and Computer, Islamic Azad University, Tehran, Iran
- Saeid Farahani
- Department of Audiology, School of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran
- Iman Kavianpour
- Department of Telecommunication, School of Engineering Boushehr Branch, Islamic Azad University, Boushehr, Iran

42
Oess T, Löhr MPR, Schmid D, Ernst MO, Neumann H. From Near-Optimal Bayesian Integration to Neuromorphic Hardware: A Neural Network Model of Multisensory Integration. Front Neurorobot 2020; 14:29. [PMID: 32499692 PMCID: PMC7243343 DOI: 10.3389/fnbot.2020.00029] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2020] [Accepted: 04/22/2020] [Indexed: 11/18/2022] Open
Abstract
While interacting with the world, our senses and nervous system are constantly challenged to identify the origin and coherence of sensory input signals of various intensities. This problem becomes apparent when stimuli from different modalities need to be combined, e.g., to find out whether an auditory stimulus and a visual stimulus belong to the same object. To cope with this problem, humans and most other animal species are equipped with complex neural circuits that enable fast and reliable combination of signals from various sensory organs. This multisensory integration starts in the brain stem to facilitate unconscious reflexes and continues on ascending pathways to cortical areas for further processing. To investigate the underlying mechanisms in detail, we developed a canonical neural network model for multisensory integration that resembles neurophysiological findings. For example, the model comprises multisensory integration neurons that receive excitatory and inhibitory inputs from unimodal auditory and visual neurons, respectively, as well as feedback from cortex. Such feedback projections facilitate multisensory response enhancement and lead to the commonly observed inverse effectiveness of neural activity in multisensory neurons. Two versions of the model are implemented: a rate-based neural network model for qualitative analysis and a variant that employs spiking neurons for deployment on neuromorphic hardware. This dual approach allows us to create an evaluation environment with the ability to test model performance with real-world inputs. As the platform for deployment, we chose IBM's neurosynaptic chip TrueNorth. Behavioral studies in humans indicate that temporal and spatial offsets, as well as the reliability of stimuli, are critical parameters for integrating signals from different modalities. The model reproduces such behavior in experiments with different sets of stimuli; in particular, model performance for stimuli with varying spatial offset is tested. In addition, we demonstrate that, due to the emergent properties of the network dynamics, model performance is close to optimal Bayesian inference for the integration of multimodal sensory signals. Furthermore, the implementation of the model on a neuromorphic processing chip enables a complete neuromorphic processing cascade from sensory perception to multisensory integration and the evaluation of model performance for real-world inputs.
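The inverse effectiveness this abstract mentions, i.e., proportionally larger multisensory gain for weaker unimodal inputs, can be illustrated with a toy rate neuron. This is a generic sketch under a saturating (Naka-Rushton-like) response nonlinearity, not the authors' model; every function name and parameter is invented for illustration:

```python
def rate(drive, half=5.0):
    """Toy saturating firing rate (0-100 Hz) as a function of input drive."""
    return 100.0 * drive ** 2 / (half ** 2 + drive ** 2)

def enhancement(a_drive, v_drive):
    """Multisensory enhancement index: percentage gain of the combined
    response over the best unimodal response. Drives add before the
    output nonlinearity, so weak inputs land on the steep part of the
    curve and strong inputs on the saturated part."""
    best_unimodal = max(rate(a_drive), rate(v_drive))
    multisensory = rate(a_drive + v_drive)
    return 100.0 * (multisensory - best_unimodal) / best_unimodal

weak = enhancement(1.0, 1.0)    # weak auditory + weak visual input
strong = enhancement(8.0, 8.0)  # strong auditory + strong visual input
# Inverse effectiveness: the weak pair yields the larger relative gain.
```

The saturation alone produces the effect: combining two weak drives more than doubles the response, while combining two strong drives adds little because the output is already near ceiling.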
Affiliation(s)
- Timo Oess
- Applied Cognitive Psychology, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Maximilian P R Löhr
- Vision and Perception Science Lab, Institute of Neural Information Processing, Ulm University, Ulm, Germany
- Daniel Schmid
- Vision and Perception Science Lab, Institute of Neural Information Processing, Ulm University, Ulm, Germany
- Marc O Ernst
- Applied Cognitive Psychology, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Heiko Neumann
- Vision and Perception Science Lab, Institute of Neural Information Processing, Ulm University, Ulm, Germany

43
Keil J. Double Flash Illusions: Current Findings and Future Directions. Front Neurosci 2020; 14:298. [PMID: 32317920 PMCID: PMC7146460 DOI: 10.3389/fnins.2020.00298] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2019] [Accepted: 03/16/2020] [Indexed: 11/29/2022] Open
Abstract
Twenty years ago, the first report on the sound-induced double flash illusion, a visual illusion induced by sound, was published. In this paradigm, participants are presented with different numbers of auditory and visual stimuli. When the numbers of auditory and visual stimuli are incongruent, the influence of auditory information on visual perception can lead to the perception of the illusion. Thus, combining two auditory stimuli with one visual stimulus can induce the perception of two visual stimuli, the so-called fission illusion. Alternatively, combining one auditory stimulus with two visual stimuli can induce the perception of one visual stimulus, the so-called fusion illusion. Overall, current research shows that the illusion is a reliable indicator of multisensory integration. It has also been replicated using different stimulus combinations, such as visual and tactile stimuli. Importantly, the robustness of the illusion allows its widespread use for assessing multisensory integration across different groups of healthy participants and clinical populations and in various task settings. This review will give an overview of the experimental evidence supporting the illusion, the current state of research concerning the influence of cognitive processes on the illusion, the neural mechanisms underlying the illusion, and future research directions. Moreover, an exemplary experimental setup will be described with different options to examine perception, alongside code to test and replicate the illusion online or in the laboratory.
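The paradigm's trial logic is simple enough to sketch in a few lines. The function names and condition labels below are hypothetical (this is not the review's published code), but the fission and fusion definitions follow the description above:

```python
def classify_trial(n_flashes, n_beeps, n_reported):
    """Label a double-flash-illusion trial by stimuli and response.

    fission: 1 flash + 2 beeps, reported as 2 flashes
    fusion:  2 flashes + 1 beep, reported as 1 flash
    """
    if n_flashes == 1 and n_beeps == 2 and n_reported == 2:
        return "fission"
    if n_flashes == 2 and n_beeps == 1 and n_reported == 1:
        return "fusion"
    return "veridical" if n_reported == n_flashes else "other"

def illusion_rate(trials, kind):
    """Proportion of illusion-eligible trials on which the named
    illusion was reported. `trials` is a list of
    (n_flashes, n_beeps, n_reported) tuples."""
    eligible = [(f, b, r) for f, b, r in trials
                if (kind == "fission" and f == 1 and b == 2)
                or (kind == "fusion" and f == 2 and b == 1)]
    hits = sum(classify_trial(f, b, r) == kind for f, b, r in eligible)
    return hits / len(eligible) if eligible else 0.0
```

For example, a block containing two fission-eligible trials with one illusory report gives a fission rate of 0.5; congruent trials are simply scored veridical or not.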
Affiliation(s)
- Julian Keil
- Biological Psychology, Christian-Albrechts-Universität zu Kiel, Kiel, Germany

44
O' Dowd A, Sorgini F, Newell FN. Seeing an image of the hand affects performance on a crossmodal congruency task for sequences of events. Conscious Cogn 2020; 80:102900. [DOI: 10.1016/j.concog.2020.102900] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2019] [Revised: 01/25/2020] [Accepted: 02/12/2020] [Indexed: 10/24/2022]
45
Welsh TN, Reid C, Manson G, Constable MD, Tremblay L. Susceptibility to the fusion illusion is modulated during both action execution and action observation. Acta Psychol (Amst) 2020; 204:103028. [PMID: 32062166 DOI: 10.1016/j.actpsy.2020.103028] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2018] [Revised: 02/04/2020] [Accepted: 02/05/2020] [Indexed: 11/17/2022] Open
Abstract
Many researchers have proposed that when an individual observes the actions of another individual, the observer simulates the action using many of the same neural areas that are involved in action production. The present study was designed to test this simulation hypothesis by comparing the perception of multisensory stimuli during both the execution and observation of an aiming action. The present work used the fusion illusion, an audio-visual illusion in which two visual stimuli presented with one auditory stimulus are erroneously perceived as one visual stimulus. Previous research has shown that, during action execution, susceptibility to this illusion is reduced early in the execution of the movement, when visual information may be more highly weighted than other sensory information. We sought to determine whether a non-acting observer of an action shows a similar reduction in susceptibility to the fusion illusion. Participants fixated a target and either executed or observed a manual aiming movement to that target. Audiovisual stimuli were presented at 0, 100, or 200 ms relative to movement onset, and participants reported the number of perceived flashes after the movement was completed. Analysis of perceived flashes revealed that participants were less susceptible to the fusion illusion when the stimuli were presented early in the movement (100 ms) than later (200 ms). Critically, this pattern emerged in both the execution and observation tasks. These findings support the hypothesis that observers simulate the performance of the actor and experience comparable real-time alterations in multisensory processing.
Affiliation(s)
- Timothy N Welsh, Faculty of Kinesiology & Physical Education, Centre for Motor Control, University of Toronto, Canada
- Connor Reid, Faculty of Kinesiology & Physical Education, Centre for Motor Control, University of Toronto, Canada
- Gerome Manson, Department of Neurosurgery, Houston Methodist Research Institute
- Luc Tremblay, Faculty of Kinesiology & Physical Education, Centre for Motor Control, University of Toronto, Canada
46
Sun Y, Liu X, Li B, Sava-Segal C, Wang A, Zhang M. Effects of Repetition Suppression on Sound Induced Flash Illusion With Aging. Front Psychol 2020; 11:216. [PMID: 32153456] [PMCID: PMC7047336] [DOI: 10.3389/fpsyg.2020.00216] [Received: 04/02/2019] [Accepted: 01/30/2020]
Abstract
The sound-induced flash illusion (SiFI) is a classical auditory-dominated multisensory integration phenomenon in which the observer misperceives the number of visual flashes because a different number of auditory beeps is presented simultaneously. Although the SiFI has been documented to correlate with perceptual sensitivity, to date there is no consensus as to how this correspondence changes with aging. The present study was based on the SiFI paradigm (Shams et al., 2000), adding repeated auditory stimuli prior to the appearance of the audiovisual stimuli to investigate the effects of repetition suppression (RS) on the SiFI with aging. The repeated auditory stimuli consisted of one or two identical auditory stimuli presented twice in succession, followed by the audiovisual stimuli. By comparing the illusions in older and younger adults, we aimed to explore the influence of aging on the effect of auditory RS on the SiFI. Both age groups showed SiFI effects; however, RS affected the fission and fusion illusions differently in the two groups, and the illusion effect was weaker in older adults than in younger adults. Specifically, RS affected only fission illusions in older adults but both fission and fusion illusions in younger adults. Thus, the present study indicated that decreased perceptual sensitivity induced by auditory RS can weaken the SiFI effect in multisensory integration and that older adults are more susceptible to RS, perceiving the SiFI only weakly under auditory RS.
Affiliation(s)
- Yawen Sun, Department of Psychology, Soochow University, Suzhou, China
- Xiaole Liu, Department of Psychology, Soochow University, Suzhou, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Biqin Li, Laboratory of Psychology and Cognition Science, School of Psychology, Jiangxi Normal University, Nanchang, China
- Clara Sava-Segal, Department of Neurology & Neurological Sciences, Stanford University, Palo Alto, CA, United States
- Aijun Wang, Department of Psychology, Soochow University, Suzhou, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Ming Zhang, Department of Psychology, Soochow University, Suzhou, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
47
Takeshima Y. Emotional information affects fission illusion induced by audio-visual interactions. Sci Rep 2020; 10:998. [PMID: 31969585] [PMCID: PMC6976667] [DOI: 10.1038/s41598-020-57719-y] [Received: 06/10/2019] [Accepted: 01/06/2020]
Abstract
Multisensory integration is affected by various types of information coming from different sensory stimuli. It has been suggested that emotional information also influences the multisensory integration process. The perceptual phenomena induced by audio-visual integration are modulated by emotional signals through changes in individuals' emotional states. However, the direct effects of emotional information on the multisensory integration process, without changes in emotional state, have not yet been examined. The present study investigated the effects of an emotional signal on audio-visual integration. The experiments compared the magnitude of the audio-visual fission and fusion illusions using facial expression stimuli and simple geometric shapes. In Experiment 1, facial expression stimuli altered the criterion difference for discerning the number of flashes when two beeps were presented simultaneously, but they did not affect the magnitude of the fission illusion. In Experiment 2, emotional geometric shapes perceptually induced a larger fission illusion. The present study thus found that the emotional valence carried by simple geometric shapes induced a larger fission illusion, whereas emotional faces modulated the response criterion for discerning the number of flashes. Future studies should elucidate in detail the mechanism by which emotional valence affects audio-visual integration.
48
Maccora S, Bolognini N, Cosentino G, Baschi R, Vallar G, Fierro B, Brighina F. Multisensorial Perception in Chronic Migraine and the Role of Medication Overuse. J Pain 2020; 21:919-929. [PMID: 31904501] [DOI: 10.1016/j.jpain.2019.12.005] [Received: 03/11/2019] [Revised: 11/12/2019] [Accepted: 12/04/2019]
Abstract
Multisensory processing can be assessed by measuring susceptibility to crossmodal illusions such as the Sound-Induced Flash Illusion (SIFI). When a single flash is accompanied by 2 or more beeps, it is perceived as multiple flashes (fission illusion); conversely, a fusion illusion is experienced when multiple flashes are matched with a single beep, leading to the perception of a single flash. Such illusory perceptions are associated with crossmodal changes in visual cortical excitability. Indeed, increasing occipital cortical excitability by means of transcranial electrical currents disrupts the SIFI (ie, the fission illusion). Similarly, a reduced fission illusion was shown in patients with episodic migraine, especially during the attack, in agreement with the pathophysiological model of cortical hyperexcitability in this disease. Given that episodic migraine patients present with a reduced SIFI especially during the attack, we hypothesized that chronic migraine (CM) patients should consistently report weaker illusory effects than healthy controls; drug intake could also affect the SIFI. On this basis, we studied proneness to the SIFI in CM patients (n = 63), including 52 patients with Medication Overuse Headache (MOH), compared with 24 healthy controls. All migraine patients showed weaker fission phenomena than controls (P < .0001). Triptan MOH patients (n = 23) presented significantly weaker fission effects than the other CM groups (P = .008). This exploratory study suggests that CM - both with and without medication overuse - is associated with higher visual cortical responsiveness, which causes a deficit of multisensory processing, as assessed by the SIFI. PERSPECTIVE: This observational study shows reduced susceptibility to the SIFI in CM, confirming and extending previous results in episodic migraine. MOH contributes to this phenomenon, especially in the case of triptans.
Affiliation(s)
- Simona Maccora, Department of Biomedicine, Neuroscience and Advanced Diagnostics (BIND), University of Palermo, Palermo, Italy
- Nadia Bolognini, Department of Psychology, Milan Center for Neuroscience - NeuroMi, University of Milano-Bicocca, Milano, Italy; Laboratory of Neuropsychology, IRCCS Istituto Auxologico, Milano, Italy
- Giuseppe Cosentino, Department of Brain and Behavioural Sciences, University of Pavia, Italy; IRCCS Mondino Foundation, Pavia, Italy
- Roberta Baschi, Department of Biomedicine, Neuroscience and Advanced Diagnostics (BIND), University of Palermo, Palermo, Italy
- Giuseppe Vallar, Department of Psychology, Milan Center for Neuroscience - NeuroMi, University of Milano-Bicocca, Milano, Italy; Laboratory of Neuropsychology, IRCCS Istituto Auxologico, Milano, Italy
- Brigida Fierro, Department of Biomedicine, Neuroscience and Advanced Diagnostics (BIND), University of Palermo, Palermo, Italy
- Filippo Brighina, Department of Biomedicine, Neuroscience and Advanced Diagnostics (BIND), University of Palermo, Palermo, Italy
49
Hirst RJ, Setti A, Kenny RA, Newell FN. Age-related sensory decline mediates the Sound-Induced Flash Illusion: Evidence for reliability weighting models of multisensory perception. Sci Rep 2019; 9:19347. [PMID: 31852954] [PMCID: PMC6920348] [DOI: 10.1038/s41598-019-55901-5] [Received: 08/30/2019] [Accepted: 12/03/2019]
Abstract
Perception of our world is proposed to arise from combining multiple sensory inputs according to their relative reliability. We tested multisensory processes in a large sample of 2920 older adults to assess whether sensory ability mediates age-related changes in perception. Participants completed a test of audio-visual integration, the Sound Induced Flash Illusion (SIFI), alongside measures of visual function (acuity, contrast sensitivity, self-reported vision, and visual temporal discrimination (VTD)) and auditory function (self-reported hearing and auditory temporal discrimination (ATD)). Structural equation modelling showed that SIFI susceptibility increased with age. This was mediated by visual acuity and self-reported hearing: better scores on these measures predicted reduced and stronger SIFI susceptibility, respectively. Unexpectedly, VTD improved with age and predicted increased SIFI susceptibility. Importantly, the relationship between age and SIFI susceptibility remained significant even when these mediators were considered. A second model showed that, with age, visual 'gain' (the benefit of congruent auditory information for visual judgements) was predicted by ATD: better ATD predicted stronger visual gain. However, neither age nor SIFI susceptibility was directly associated with visual gain. Our findings illustrate, in the largest sample of older adults to date, how multisensory perception is influenced, but not fully accounted for, by age-related changes in unisensory abilities.
Affiliation(s)
- Rebecca J Hirst, School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland; The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland
- Annalisa Setti, The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland; School of Applied Psychology, University College Cork, Cork, Ireland
- Rose A Kenny, The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland; Mercer's Institute for Successful Ageing, St. James's Hospital, Dublin, Ireland
- Fiona N Newell, School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
50
Ito Y, Sato R, Tamai Y, Hiryu S, Uekita T, Kobayasi KI. Auditory-induced visual illusions in rodents measured by spontaneous behavioural response. Sci Rep 2019; 9:19211. [PMID: 31844094] [PMCID: PMC6914771] [DOI: 10.1038/s41598-019-55664-z] [Received: 06/06/2019] [Accepted: 12/02/2019]
Abstract
When two brief sounds are presented with a short flash of light, we often perceive that the flash blinks twice. This phenomenon, called the "sound-induced flash illusion", has been investigated as an example of how finely humans integrate multisensory information, more specifically, the temporal content of perception. However, it is unclear whether nonhuman animals experience the illusion. Therefore, we investigated whether the Mongolian gerbil, a rodent with relatively good eyesight, experiences this illusion. The novel object recognition (NOR) paradigm was used to evaluate the gerbil's natural (i.e., untrained) capacity for multimodal integration. A light-emitting diode embedded within an object presented time-varying visual stimuli (different flashing patterns). The animals were first familiarised with repetitive single flashes. Then, various sound stimuli were introduced during test trials. An increase in exploration suggested that the animals perceived a flashing pattern differently only when the contradicting sound (double beeps) was presented simultaneously with a single flash. This result shows that the gerbil may experience the sound-induced flash illusion and indicates for the first time that rodents may have the capacity to integrate the temporal content of perception in a sophisticated manner, as humans do.
Affiliation(s)
- Yuki Ito, Graduate School of Life and Medical Sciences, Doshisha University, 1-3 Tatara Miyakodani, Kyotanabe, 610-0394, Japan
- Ryo Sato, Graduate School of Life and Medical Sciences, Doshisha University, 1-3 Tatara Miyakodani, Kyotanabe, 610-0394, Japan
- Yuta Tamai, Graduate School of Life and Medical Sciences, Doshisha University, 1-3 Tatara Miyakodani, Kyotanabe, 610-0394, Japan
- Shizuko Hiryu, Graduate School of Life and Medical Sciences, Doshisha University, 1-3 Tatara Miyakodani, Kyotanabe, 610-0394, Japan
- Tomoko Uekita, Department of Psychology, Kyoto Tachibana University, 34 Yamada-cho, Oyake, Yamashina-ku, Kyoto, 607-8175, Japan
- Kohta I Kobayasi, Graduate School of Life and Medical Sciences, Doshisha University, 1-3 Tatara Miyakodani, Kyotanabe, 610-0394, Japan