51
Keefe JM, Pokta E, Störmer VS. Cross-modal orienting of exogenous attention results in visual-cortical facilitation, not suppression. Sci Rep 2021; 11:10237. PMID: 33986384; PMCID: PMC8119727; DOI: 10.1038/s41598-021-89654-x.
Abstract
Attention may be oriented exogenously (i.e., involuntarily) to the location of salient stimuli, resulting in improved perception. However, it is unknown whether exogenous attention improves perception by facilitating processing of attended information, suppressing processing of unattended information, or both. To test this question, we measured behavioral performance and cue-elicited neural changes in the electroencephalogram as participants (N = 19) performed a task in which a spatially non-predictive auditory cue preceded a visual target. Critically, this cue was either presented at a peripheral target location or from the center of the screen, allowing us to isolate spatially specific attentional activity. We find that both behavior and attention-mediated changes in visual-cortical activity are enhanced at the location of a cue prior to the onset of a target, but that behavior and neural activity at an unattended target location are equivalent to those following a central cue that does not direct attention (i.e., baseline). These results suggest that exogenous attention operates via facilitation of information at an attended location.
Affiliation(s)
- Jonathan M Keefe: Department of Psychology, University of California, San Diego, 92092, USA
- Emilia Pokta: Department of Psychology, University of California, San Diego, 92092, USA
- Viola S Störmer: Department of Psychology, University of California, San Diego, 92092, USA; Department of Brain and Psychological Sciences, Dartmouth College, Hanover, USA
52
Ren Y, Zhang Y, Hou Y, Li J, Bi J, Yang W. Exogenous Bimodal Cues Attenuate Age-Related Audiovisual Integration. Iperception 2021; 12:20416695211020768. PMID: 34104386; PMCID: PMC8165524; DOI: 10.1177/20416695211020768.
Abstract
Previous studies have demonstrated that exogenous attention decreases audiovisual integration (AVI); however, whether the AVI differs when exogenous attention is elicited by bimodal versus unimodal cues, and how this is affected by aging, remain unclear. To clarify this matter, 20 older adults and 20 younger adults were recruited to perform an auditory/visual discrimination task following bimodal audiovisual cues or unimodal auditory/visual cues. The results showed that responses to all stimulus types were faster in younger adults than in older adults, and that responses were faster to audiovisual stimuli than to auditory or visual stimuli. Analysis using the race model revealed that the AVI was lower in the exogenous-cue conditions than in the no-cue condition for both older and younger adults. The AVI was observed in all exogenous-cue conditions for the younger adults (visual cue > auditory cue > audiovisual cue); however, for older adults, the AVI was only found in the visual-cue condition. In addition, the AVI was lower in older adults than in younger adults under the no-cue and visual-cue conditions. These results suggested that exogenous attention decreased the AVI and that the AVI was lower when exogenous attention was elicited by bimodal cues than by unimodal cues. In addition, the AVI under exogenous attention was reduced for older adults compared with younger adults.
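For readers unfamiliar with the race-model analysis used in studies like this one, the sketch below shows the standard computation (Miller's race-model inequality) on synthetic reaction times; all values are invented for illustration and are not data from the study.

```python
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative distribution function of RTs evaluated at times t."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / rts.size

# Synthetic RTs (ms), for illustration only -- not data from the study.
rng = np.random.default_rng(0)
rt_a = rng.normal(420, 60, 500)    # auditory-only trials
rt_v = rng.normal(400, 55, 500)    # visual-only trials
rt_av = rng.normal(350, 50, 500)   # redundant audiovisual trials

t = np.linspace(200, 600, 81)
# Miller's race-model bound: F_AV(t) <= F_A(t) + F_V(t).
bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
violation = ecdf(rt_av, t) - bound   # positive values indicate integration
print(f"max race-model violation: {violation.max():.3f}")
```

Positive values of `violation` mean that redundant-target responses are faster than any race between independent unisensory processes would allow, which is the conventional evidence for audiovisual integration.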
Affiliation(s)
- Yanna Ren: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Ying Zhang: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Yawei Hou: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Junyuan Li: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Junhao Bi: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Weiping Yang: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
53
Nazaré CJ, Oliveira AM. Effects of Audiovisual Presentations on Visual Localization Errors: One or Several Multisensory Mechanisms? Multisens Res 2021; 34:1-35. PMID: 33882452; DOI: 10.1163/22134808-bja10048.
Abstract
The present study examines the extent to which temporal and spatial properties of sound modulate visual motion processing in spatial localization tasks. Participants were asked to locate the place at which a moving visual target unexpectedly vanished. Across different tasks, accompanying sounds were factorially varied within subjects as to their onset and offset times and/or positions relative to visual motion. Sound onset had no effect on the localization error. Sound offset was shown to modulate the perceived visual offset location, both for temporal and spatial disparities. This modulation did not conform to attraction toward the timing or location of the sounds but, demonstrably in the case of temporal disparities, to bimodal enhancement instead. Favorable indications of a contextual effect of audiovisual presentations on interspersed visual-only trials were also found. The short sound-leading offset asynchrony had benefits equivalent to audiovisual offset synchrony, suggesting the involvement of early-level mechanisms, constrained by a temporal window, under these conditions. Yet, we tentatively hypothesize that the results as a whole, and how they compare with previous studies, require the contribution of additional mechanisms, including the learning and detection of auditory-visual associations and the cross-sensory spread of endogenous attention.
Affiliation(s)
- Cristina Jordão Nazaré: Instituto Politécnico de Coimbra, ESTESC - Coimbra Health School, Audiologia, Coimbra, Portugal
54
Wang Z, Chen M, Goerlich KS, Aleman A, Xu P, Luo Y. Deficient auditory emotion processing but intact emotional multisensory integration in alexithymia. Psychophysiology 2021; 58:e13806. PMID: 33742708; PMCID: PMC9285530; DOI: 10.1111/psyp.13806.
Abstract
Alexithymia has been associated with emotion recognition deficits in both auditory and visual domains. Although emotions are inherently multimodal in daily life, little is known regarding abnormalities of emotional multisensory integration (eMSI) in relation to alexithymia. Here, we employed an emotional Stroop-like audiovisual task while recording event-related potentials (ERPs) in individuals with high alexithymia levels (HA) and low alexithymia levels (LA). During the task, participants had to indicate whether a voice was spoken in a sad or angry prosody while ignoring a simultaneously presented static face, which could be either emotionally congruent or incongruent with the voice. We found that HA performed worse and showed higher P2 amplitudes than LA, independent of emotion congruency. Furthermore, difficulties in identifying and describing feelings correlated positively with the P2 component, and P2 correlated negatively with behavioral performance. Bayesian statistics showed no group differences in eMSI or in the classical integration-related ERP components (N1 and N2). Thus, although individuals with alexithymia indeed showed deficits in auditory emotion recognition, as indexed by decreased performance and higher P2 amplitudes, the behavioral and electrophysiological data provide substantial evidence for an intact capacity to integrate emotional information from multiple channels. With high ecological validity, these findings are of particular importance given that humans are constantly exposed to competing, complex audiovisual emotional information in social interaction contexts, and they have implications for the psychophysiology of alexithymia and emotional processing.
Affiliation(s)
- Zhihao Wang: Shenzhen Key Laboratory of Affective and Social Neuroscience, Magnetic Resonance Imaging, Center for Brain Disorders and Cognitive Sciences, Shenzhen University, Shenzhen, China; Department of Biomedical Sciences of Cells & Systems, Section Cognitive Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Mai Chen: School of Psychology, Shenzhen University, Shenzhen, China
- Katharina S Goerlich: Department of Biomedical Sciences of Cells & Systems, Section Cognitive Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- André Aleman: Shenzhen Key Laboratory of Affective and Social Neuroscience, Magnetic Resonance Imaging, Center for Brain Disorders and Cognitive Sciences, Shenzhen University, Shenzhen, China; Department of Biomedical Sciences of Cells & Systems, Section Cognitive Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Pengfei Xu: State Key Laboratory of Cognitive Neuroscience and Learning, Faculty of Psychology, Beijing Normal University, Beijing, China; Center for Neuroimaging, Shenzhen Institute of Neuroscience, Shenzhen, China; Guangdong-Hong Kong-Macao Greater Bay Area Research Institute for Neuroscience and Neurotechnologies, Kwun Tong, Hong Kong, China
- Yuejia Luo: Shenzhen Key Laboratory of Affective and Social Neuroscience, Magnetic Resonance Imaging, Center for Brain Disorders and Cognitive Sciences, Shenzhen University, Shenzhen, China; State Key Laboratory of Cognitive Neuroscience and Learning, Faculty of Psychology, Beijing Normal University, Beijing, China; Department of Psychology, Southern Medical University, Guangzhou, China; The Research Center of Brain Science and Visual Cognition, Medical School, Kunming University of Science and Technology, Kunming, China; Center for Neuroimaging, Shenzhen Institute of Neuroscience, Shenzhen, China
55
Porada DK, Regenbogen C, Freiherr J, Seubert J, Lundström JN. Trimodal processing of complex stimuli in inferior parietal cortex is modality-independent. Cortex 2021; 139:198-210. PMID: 33878687; DOI: 10.1016/j.cortex.2021.03.008.
Abstract
In humans, multisensory mechanisms facilitate object processing through integration of sensory signals that match in their temporal and spatial occurrence as well as their meaning. The generalizability of such integration processes across different sensory modalities is, however, to date not well understood. As such, it remains unknown whether there are cerebral areas that process object-related signals independently of the specific senses from which they arise, and whether these areas show different response profiles depending on the number of sensory channels that carry information. To address these questions, we presented participants with dynamic stimuli that simultaneously emitted object-related sensory information via one, two, or three channels (sight, sound, smell) in the MR scanner. By comparing neural activation patterns between various integration processes differing in type and number of stimulated senses, we showed that the left inferior frontal gyrus and areas within the left inferior parietal cortex were engaged independently of the number and type of sensory input streams. Activation in these areas was enhanced during bimodal stimulation, compared to the sum of unimodal activations, and increased even further during trimodal stimulation. Taken together, our findings demonstrate that activation of the inferior parietal cortex during processing and integration of meaningful multisensory stimuli is both modality-independent and modulated by the number of available sensory modalities. This suggests that the processing demand placed on the parietal cortex increases with the number of sensory input streams carrying meaningful information, likely due to the increasing complexity of such stimuli.
Affiliation(s)
- Danja K Porada: Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Christina Regenbogen: Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany; JARA Institute Brain Structure Function Relationship, RWTH Aachen University, Aachen, Germany
- Jessica Freiherr: Department of Psychiatry and Psychotherapy, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Janina Seubert: Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Johan N Lundström: Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Monell Chemical Senses Center, Philadelphia, USA; Department of Psychology, University of Pennsylvania, Philadelphia, USA; Stockholm University Brain Imaging Centre, Stockholm University, Stockholm, Sweden
56
McCall AA, Miller DM, Balaban CD. Integration of vestibular and hindlimb inputs by vestibular nucleus neurons: multisensory influences on postural control. J Neurophysiol 2021; 125:1095-1110. PMID: 33534649; DOI: 10.1152/jn.00350.2019.
Abstract
We recently demonstrated in decerebrate and conscious cat preparations that hindlimb somatosensory inputs converge with vestibular afferent input onto neurons in multiple central nervous system (CNS) locations that participate in balance control. Although it is known that head position and limb state modulate postural reflexes, presumably through vestibulospinal and reticulospinal pathways, the combined influence of the two inputs on the activity of neurons in these brainstem regions is unknown. In the present study, we evaluated the responses of vestibular nucleus (VN) neurons to vestibular and hindlimb stimuli delivered separately and together in conscious cats. We hypothesized that VN neuronal firing during activation of vestibular and limb proprioceptive inputs would be well fit by an additive model. Extracellular single-unit recordings were obtained from VN neurons. Sinusoidal whole body rotation in the roll plane was used as the search stimulus. Units responding to the search stimulus were tested for their responses to 10° ramp-and-hold roll body rotation, 60° extension hindlimb movement, and both movements delivered simultaneously. Composite response histograms were fit by a model of low- and high-pass filtered limb and body position signals using least squares nonlinear regression. We found that VN neuronal activity during combined vestibular and hindlimb proprioceptive stimulation in the conscious cat is well fit by a simple additive model for signals with similar temporal dynamics. The mean R² value for goodness of fit across all units was 0.74 ± 0.17. It is likely that VN neurons that exhibit these integrative properties participate in adjusting vestibulospinal outflow in response to limb state.

NEW & NOTEWORTHY: Vestibular nucleus neurons receive convergent information from hindlimb somatosensory inputs and vestibular inputs. In this study, extracellular single-unit recordings of vestibular nucleus neurons during conditions of passively applied limb movement, passive whole body rotations, and combined stimulation were well fit by an additive model. The integration of hindlimb somatosensory inputs with vestibular inputs at the first stage of vestibular processing suggests that vestibular nucleus neurons account for limb position in determining vestibulospinal responses to postural perturbations.
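As a rough illustration of the fitting approach described above (not the authors' code), the sketch below fits a synthetic firing-rate trace with a weighted sum of a ramp-and-hold position signal (low-pass-like component) and its derivative (high-pass-like component) via nonlinear least squares; every trace and parameter value is made up.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative stand-in for the stimulus: a normalized ramp-and-hold trace.
t = np.linspace(0.0, 4.0, 400)        # seconds
pos = np.clip(t - 1.0, 0.0, 1.0)      # position (low-pass-like) signal
vel = np.gradient(pos, t)             # derivative (high-pass-like) signal

def model(t, base, k_pos, k_vel):
    """Firing rate as baseline plus weighted position and velocity signals."""
    return base + k_pos * pos + k_vel * vel

# Synthetic "composite response histogram" with noise, illustration only.
rng = np.random.default_rng(1)
rate = model(t, 20.0, 15.0, 8.0) + rng.normal(0.0, 2.0, t.size)

params, _ = curve_fit(model, t, rate, p0=[10.0, 1.0, 1.0])
pred = model(t, *params)
r2 = 1.0 - np.sum((rate - pred) ** 2) / np.sum((rate - rate.mean()) ** 2)
print("fitted [base, k_pos, k_vel]:", params.round(2), " R^2:", round(r2, 2))
```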
Affiliation(s)
- Andrew A McCall: Department of Otolaryngology, University of Pittsburgh, Pittsburgh, Pennsylvania
- Derek M Miller: Department of Otolaryngology, University of Pittsburgh, Pittsburgh, Pennsylvania
- Carey D Balaban: Department of Otolaryngology, University of Pittsburgh, Pittsburgh, Pennsylvania; Department of Neurobiology, University of Pittsburgh, Pittsburgh, Pennsylvania; Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania; Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania
57
Effects of stimulus intensity on audiovisual integration in aging across the temporal dynamics of processing. Int J Psychophysiol 2021; 162:95-103. PMID: 33529642; DOI: 10.1016/j.ijpsycho.2021.01.017.
Abstract
Previous studies have drawn different conclusions about whether older adults benefit more from audiovisual integration, and such conflicts may be due to the stimulus features investigated in those studies, such as stimulus intensity. In the current study, using ERPs, we compared the effects of stimulus intensity on audiovisual integration between young adults and older adults. The results showed that inverse effectiveness, the phenomenon whereby lowering the effectiveness of sensory stimuli increases the benefits of multisensory integration, was observed in young adults at earlier processing stages but was absent in older adults. Moreover, at the earlier processing stages (60-90 ms and 110-140 ms), older adults exhibited significantly greater audiovisual integration than young adults (all ps < 0.05). However, at the later processing stages (220-250 ms and 340-370 ms), young adults exhibited significantly greater audiovisual integration than older adults (all ps < 0.001). The results suggest an age-related dissociation between early integration and late integration, indicating that different audiovisual processing mechanisms are at play in older adults and young adults.
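The integration index implied here is the additive-model comparison AV - (A + V), computed within each time window. Below is a minimal sketch on synthetic single-channel waveforms, for illustration only; a real analysis would average artifact-free trials per condition and subject.

```python
import numpy as np

fs, n = 1000, 500                                # 1000 Hz, 0-500 ms epoch
times = np.arange(n) / fs * 1000.0               # ms
rng = np.random.default_rng(2)
erp_a, erp_v, erp_av = (rng.normal(0.0, 0.5, n) for _ in range(3))  # µV

# Additive-model test: nonzero AV - (A + V) within a window suggests
# integration beyond the sum of the unisensory responses.
diff_wave = erp_av - (erp_a + erp_v)

for lo, hi in [(60, 90), (110, 140), (220, 250), (340, 370)]:
    mask = (times >= lo) & (times < hi)
    print(f"{lo}-{hi} ms: mean AV-(A+V) = {diff_wave[mask].mean():+.3f} µV")
```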
58
Merz S, Frings C, Spence C. When irrelevant information helps: Extending the Eriksen-flanker task into a multisensory world. Atten Percept Psychophys 2021; 83:776-789. PMID: 32514664; PMCID: PMC7884353; DOI: 10.3758/s13414-020-02066-3.
Abstract
Charles W. Eriksen dedicated much of his research career to the field of cognitive psychology, investigating human information processing in situations that required selection between competing stimuli. Together with his wife Barbara, he introduced the flanker task, which became one of the standard experimental tasks used by researchers to investigate the mechanisms underpinning selection. Although Eriksen himself was primarily interested in visual selection, the flanker task was eventually adapted by other researchers to investigate human information processing and selection in a variety of nonvisual and multisensory situations. Here, we discuss the core aspects of the flanker task and interpret the evidence from the flanker task when used in crossmodal and multisensory settings. "Selection" has been a core topic of psychology for nearly 120 years. Nowadays, though, it is clear that we need to look at selection from a multisensory perspective: the flanker task, at least in its crossmodal and multisensory variants, is an important tool with which to investigate selection, attention, and multisensory information processing.
Affiliation(s)
- Simon Merz: Department of Psychology, Cognitive Psychology, University of Trier, Universitätsring 15, 54286, Trier, Germany
- Christian Frings: Department of Psychology, Cognitive Psychology, University of Trier, Universitätsring 15, 54286, Trier, Germany
- Charles Spence: Department of Experimental Psychology, University of Oxford, Oxford, UK
59
Zhao S, Feng C, Liao Y, Huang X, Feng W. Attentional blink suppresses both stimulus-driven and representation-driven cross-modal spread of attention. Psychophysiology 2021; 58:e13761. PMID: 33400294; DOI: 10.1111/psyp.13761.
Abstract
Previous studies have shown that the visual attention effect can spread to the task-irrelevant auditory modality automatically, through either a stimulus-driven binding process or a representation-driven priming process. Using an attentional blink paradigm, the present study investigated whether the long-latency stimulus-driven and representation-driven cross-modal spread of attention would be inhibited or facilitated when the attentional resources operating at the post-perceptual stage of processing are inadequate, while ensuring that all visual stimuli were spatially attended and that the representations of visual target object categories were activated, conditions previously thought to be the only endogenous prerequisites for triggering cross-modal spread of attention. The results demonstrated that both types of attentional spreading were completely suppressed during the attentional blink interval but were highly prominent outside it, with the stimulus-driven process independent of, and the representation-driven process dependent on, audiovisual semantic congruency. These findings provide the first evidence that the occurrence of both stimulus-driven and representation-driven spread of attention is contingent on the amount of post-perceptual attentional resources available for the late consolidation of visual stimuli, and that the early detection of visual stimuli and the top-down activation of visual representations are not the sole endogenous prerequisites for triggering cross-modal attentional spreading.
Affiliation(s)
- Song Zhao: Department of Psychology, School of Education, Soochow University, Suzhou, China
- Chengzhi Feng: Department of Psychology, School of Education, Soochow University, Suzhou, China
- Yu Liao: Department of Psychology, School of Education, Soochow University, Suzhou, China
- Xinyin Huang: Department of Psychology, School of Education, Soochow University, Suzhou, China
- Wenfeng Feng: Department of Psychology, School of Education, Soochow University, Suzhou, China
60
Simões EN, Carvalho ALN, Schmidt SL. The Role of Visual and Auditory Stimuli in Continuous Performance Tests: Differential Effects on Children With ADHD. J Atten Disord 2021; 25:53-62. PMID: 29671360; DOI: 10.1177/1087054718769149.
Abstract
Objective: Continuous performance tests (CPTs) usually utilize visual stimuli. A previous investigation showed that inattention is partially independent of modality, but response inhibition is modality-specific. Here we aimed to compare performance on visual and auditory CPTs in ADHD and in healthy controls. Method: The sample consisted of 160 elementary and high school students (43 ADHD, 117 controls). For each sensory modality, five variables were extracted: commission errors (CEs), omission errors (OEs), reaction time (RT), variability of reaction time (VRT), and coefficient of variability (CofV = VRT / RT). Results: The ADHD group exhibited higher values on all test variables. The discriminant analysis indicated that auditory OE was the most reliable variable for discriminating between groups, followed by visual CE, auditory CE, and auditory CofV. The discriminant equation classified ADHD with 76.3% accuracy. Conclusion: Auditory parameters in the inattention domain (OE and VRT) can discriminate ADHD from controls. For the hyperactive/impulsive domain (CE), the two modalities are equally important.
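To make the classification analysis concrete, here is a minimal sketch of a linear discriminant analysis on the four predictors the study retained (auditory OE, visual CE, auditory CE, auditory CofV); the feature values are synthetic stand-ins, so the toy accuracy will not reproduce the reported in-sample 76.3%.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_adhd, n_ctrl = 43, 117                      # group sizes from the study
# Columns: auditory OE, visual CE, auditory CE, auditory CofV (synthetic).
X = np.vstack([
    rng.normal([8, 6, 5, 0.45], [3, 2, 2, 0.10], (n_adhd, 4)),  # ADHD: higher values
    rng.normal([4, 3, 2, 0.30], [2, 1, 1, 0.08], (n_ctrl, 4)),  # controls
])
y = np.array([1] * n_adhd + [0] * n_ctrl)

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5).mean()
print(f"cross-validated accuracy on toy data: {acc:.1%}")
```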
61
Zhao S, Feng C, Huang X, Wang Y, Feng W. Neural Basis of Semantically Dependent and Independent Cross-Modal Boosts on the Attentional Blink. Cereb Cortex 2020; 31:2291-2304. DOI: 10.1093/cercor/bhaa362.
Abstract
The present study recorded event-related potentials (ERPs) in a visual object-recognition task under the attentional blink paradigm to explore the temporal dynamics of the cross-modal boost on attentional blink and whether this auditory benefit would be modulated by semantic congruency between T2 and the simultaneous sound. Behaviorally, the present study showed that not only a semantically congruent but also a semantically incongruent sound improved T2 discrimination during the attentional blink interval, although the enhancement was larger for the congruent sound. The ERP results revealed that the behavioral improvements induced by both the semantically congruent and incongruent sounds were closely associated with an early cross-modal interaction on the occipital N195 (192-228 ms). In contrast, the lower T2 accuracy for the incongruent than the congruent condition was accompanied by a larger late-occurring centro-parietal N440 (424-448 ms). These findings suggest that the cross-modal boost on attentional blink is hierarchical: the task-irrelevant but simultaneous sound, irrespective of its semantic relevance, first enables T2 to escape the attentional blink by cross-modally strengthening the early stage of visual object-recognition processing, whereas the semantic conflict of the sound begins to interfere with visual awareness only at a later stage, when the representation of the visual object is extracted.
Affiliation(s)
- Song Zhao: Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Chengzhi Feng: Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Xinyin Huang: Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Yijun Wang: Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China
- Wenfeng Feng: Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
62
Lee S, McDonough IM, Mendoza JS, Brasfield MB, Enam T, Reynolds C, Pody BC. Cellphone addiction explains how cellphones impair learning for lecture materials. Appl Cogn Psychol 2020. DOI: 10.1002/acp.3745.
Affiliation(s)
- Seungyeon Lee: School of Social & Behavioral Sciences, University of Arkansas at Monticello, Monticello, Arkansas 71656
- Ian M. McDonough: Department of Psychology, The University of Alabama, Tuscaloosa, Alabama 35487
- Jessica S. Mendoza: Department of Psychology, The University of Alabama, Tuscaloosa, Alabama 35487
- Tasnuva Enam: Department of Psychology, The University of Alabama, Tuscaloosa, Alabama 35487
- Catherine Reynolds: Department of Psychology, The University of Alabama, Tuscaloosa, Alabama 35487
- Benjamin C. Pody: Department of Psychology, The University of Alabama, Tuscaloosa, Alabama 35487
63
Liang P, Jiang JY, Liu Q, Zhang SL, Yang HJ. Mechanism of Cross-modal Information Influencing Taste. Curr Med Sci 2020; 40:474-479. PMID: 32681252; DOI: 10.1007/s11596-020-2206-0.
Abstract
Studies on the integration of cross-modal information with taste perception have mostly been limited to the uni-modal level. Cross-modal sensory interactions, the neural networks of information processing, and their control have not been fully explored, and the mechanisms remain poorly understood. This mini-review examines the impact of uni-modal and multi-modal information on taste perception from the perspective of cognitive status, such as emotion, expectation, and attention, and discusses the hypothesis that cognitive status is the key step through which vision exerts its influence on taste. This work may help researchers better understand the mechanisms of cross-modal information processing and further develop neurally based artificial intelligence (AI) systems.
Affiliation(s)
- Pei Liang: Department of Psychology, Faculty of Education, Hubei University, Wuhan, 430062, China; Brain and Cognition Research Center, Faculty of Education, Hubei University, Wuhan, 430062, China
- Jia-Yu Jiang: Department of Psychology, Faculty of Education, Hubei University, Wuhan, 430062, China; Brain and Cognition Research Center, Faculty of Education, Hubei University, Wuhan, 430062, China; Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China
- Qiang Liu: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China
- Su-Lin Zhang: Department of Otorhinolaryngology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
- Hua-Jing Yang: Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
64
Fleming JT, Noyce AL, Shinn-Cunningham BG. Audio-visual spatial alignment improves integration in the presence of a competing audio-visual stimulus. Neuropsychologia 2020; 146:107530. PMID: 32574616; DOI: 10.1016/j.neuropsychologia.2020.107530.
Abstract
In order to parse the world around us, we must constantly determine which sensory inputs arise from the same physical source and should therefore be perceptually integrated. Temporal coherence between auditory and visual stimuli drives audio-visual (AV) integration, but the role played by AV spatial alignment is less well understood. Here, we manipulated AV spatial alignment and collected electroencephalography (EEG) data while human subjects performed a free-field variant of the "pip and pop" AV search task. In this paradigm, visual search is aided by a spatially uninformative auditory tone, the onsets of which are synchronized to changes in the visual target. In Experiment 1, tones were either spatially aligned or spatially misaligned with the visual display. Regardless of AV spatial alignment, we replicated the key pip and pop result of improved AV search times. Mirroring the behavioral results, we found an enhancement of early event-related potentials (ERPs), particularly the auditory N1 component, in both AV conditions. We demonstrate that both top-down and bottom-up attention contribute to these N1 enhancements. In Experiment 2, we tested whether spatial alignment influences AV integration in a more challenging context with competing multisensory stimuli. An AV foil was added that visually resembled the target and was synchronized to its own stream of synchronous tones. The visual components of the AV target and AV foil occurred in opposite hemifields; the two auditory components were also in opposite hemifields and were either spatially aligned or spatially misaligned with the visual components to which they were synchronized. Search was fastest when the auditory and visual components of the AV target (and the foil) were spatially aligned. Attention modulated ERPs in both spatial conditions, but importantly, the scalp topography of early evoked responses shifted only when stimulus components were spatially aligned, signaling the recruitment of different neural generators likely related to multisensory integration. These results suggest that AV integration depends on AV spatial alignment when stimuli in both modalities compete for selective integration, a common scenario in real-world perception.
Affiliation(s)
- Justin T Fleming: Speech and Hearing Bioscience and Technology Program, Division of Medical Sciences, Harvard Medical School, Boston, MA, USA
- Abigail L Noyce: Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA
65
Zigiotto L, Damora A, Albini F, Casati C, Scrocco G, Mancuso M, Tesio L, Vallar G, Bolognini N. Multisensory stimulation for the rehabilitation of unilateral spatial neglect. Neuropsychol Rehabil 2020; 31:1410-1443. PMID: 32558611; DOI: 10.1080/09602011.2020.1779754.
Abstract
Unilateral spatial neglect (USN) is a neuropsychological syndrome, typically caused by lesions of the right hemisphere, whose features are the defective report of events occurring in the left (contralesional) side of space and the inability to orient and set up actions leftwards. Multisensory integration mechanisms, largely spared in USN patients, may temporally modulate spatial orienting. In this pilot study, the effects of an intensive audio-visual Multisensory Stimulation (MS) on USN were assessed and compared with those of a treatment known to ameliorate USN, Prismatic Adaptation (PA). Twenty USN stroke patients received a 2-week treatment (20 sessions, twice per day) of MS or PA. The effects of MS and PA were assessed by a set of neuropsychological clinical tests (target cancellation, line bisection, sentence reading, personal neglect, complex drawing) and by the Catherine Bergego Scale for functional disability. Results showed that MS brought about an amelioration of USN deficits overall comparable to that induced by PA; personal neglect was improved only by MS, not by PA. The clinical gains of the MS treatment were not influenced by disease duration or lesion volume, and they persisted up to one month post-treatment. In conclusion, MS represents a novel and promising rehabilitation procedure for USN.
Affiliation(s)
- Luca Zigiotto: Department of Psychology & Milan Center for Neuroscience - NeuroMi, University of Milano-Bicocca, Milan, Italy; Division of Neurosurgery, Santa Chiara Hospital, Trento, Italy
- Alessio Damora: Department of Psychology & Milan Center for Neuroscience - NeuroMi, University of Milano-Bicocca, Milan, Italy; Tuscany Rehabilitation Clinic, Arezzo, Italy
- Federica Albini: Department of Psychology & Milan Center for Neuroscience - NeuroMi, University of Milano-Bicocca, Milan, Italy; Clinical Neuropsychology Unit, Rehabilitation Department, S. Antonio Abate Hospital, Gallarate, Italy
- Carlotta Casati: Laboratory of Neuropsychology, Istituto Auxologico Italiano, IRCCS, Milan, Italy; Department of Neurorehabilitation Sciences, Istituto Auxologico Italiano, IRCCS, Milan, Italy
- Gessica Scrocco: Department of Psychology & Milan Center for Neuroscience - NeuroMi, University of Milano-Bicocca, Milan, Italy; Tuscany Rehabilitation Clinic, Arezzo, Italy
- Mauro Mancuso: Tuscany Rehabilitation Clinic, Arezzo, Italy; Physical and Rehabilitative Medicine Unit, NHS South-East Tuscany, Grosseto, Italy
- Luigi Tesio: Department of Neurorehabilitation Sciences, Istituto Auxologico Italiano, IRCCS, Milan, Italy; Department of Biomedical Sciences for Health, University of Milan, Milan, Italy
- Giuseppe Vallar: Department of Psychology & Milan Center for Neuroscience - NeuroMi, University of Milano-Bicocca, Milan, Italy; Laboratory of Neuropsychology, Istituto Auxologico Italiano, IRCCS, Milan, Italy
- Nadia Bolognini: Department of Psychology & Milan Center for Neuroscience - NeuroMi, University of Milano-Bicocca, Milan, Italy; Laboratory of Neuropsychology, Istituto Auxologico Italiano, IRCCS, Milan, Italy
66
Individual differences in multiple object tracking, attentional cueing, and age account for variability in the capacity of audiovisual integration. Atten Percept Psychophys 2020; 82:3521-3543. PMID: 32529573; DOI: 10.3758/s13414-020-02062-7.
Abstract
There has been a recent increase in individual differences research within the field of audiovisual perception (Spence & Squire, 2003, Current Biology, 13(13), R519-R521), and furthering the understanding of audiovisual integration capacity with an individual differences approach is an important facet within this line of research. Across four experiments, participants were asked to complete an audiovisual integration capacity task (cf. Van der Burg, Awh, & Olivers, 2013, Psychological Science, 24(3), 345-351; Wilbiks & Dyson, 2016, PLOS ONE 11(12), e0168304; 2018, Journal of Experimental Psychology: Human Perception and Performance, 44(6), 871-884), along with differing combinations of additional perceptual tasks. Experiment 1 employed a multiple object tracking task and a visual working memory task. Experiment 2 compared performance on the capacity task with that of the Attention Network Test. Experiment 3 examined participants' focus in space through a Navon task and vigilance through time. Having completed this exploratory work, in Experiment 4 we collected data again from the tasks that were found to correlate significantly across the first three experiments and entered them into a regression model to predict capacity. The current research provides a preliminary explanation of the vast individual differences seen in audiovisual integration capacity in previous research, showing that by considering an individual's multiple object tracking span, focus in space, and attentional factors, we can account for up to 34.3% of the observed variation in capacity. Future research should seek to examine higher-level differences between individuals that may contribute to audiovisual integration capacity, including neurodevelopmental and mental health differences.
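Below is a minimal sketch of the final regression step described above, with hypothetical stand-ins for the predictors that survived the exploratory experiments (multiple object tracking span, focus-in-space score, attentional score, and age); all numbers are invented and will not reproduce the reported figure.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 120
# Columns: MOT span, Navon (focus-in-space) score, attention score, age.
X = rng.normal(size=(n, 4))
capacity = (2.0 + 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 3]
            + rng.normal(0.0, 1.0, n))        # synthetic capacity estimates

model = LinearRegression().fit(X, capacity)
print(f"R^2 = {model.score(X, capacity):.3f}")  # the paper reports up to .343
```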
67
Mühlberg S, Müller MM. Alignment of Continuous Auditory and Visual Distractor Stimuli Is Leading to an Increased Performance. Front Psychol 2020; 11:790. PMID: 32457678; PMCID: PMC7225351; DOI: 10.3389/fpsyg.2020.00790.
Abstract
Information across different senses can affect our behavior in both positive and negative ways. Stimuli aligned with a target stimulus can improve behavioral performance, while competing, transient stimuli often negatively affect task performance. But what about subtle changes in task-irrelevant multisensory stimuli? In this experiment, we tested the effect of the alignment of subtle auditory and visual distractor stimuli on performance in detection and discrimination tasks. Participants performed either a detection or a discrimination task on a centrally presented Gabor patch while being simultaneously exposed to a random dot kinematogram, which alternated its color between green and red at a frequency of 7.5 Hz, and a continuous tone, which was either a frequency-modulated pure tone (audiovisual congruent and incongruent conditions) or white noise (visual control condition). While the modulation frequency of the pure tone initially differed from the modulation frequency of the random dot kinematogram, the modulation frequencies of the two stimuli could align after a variable delay, and we measured accuracy and reaction times around the possible alignment time. We found increased accuracy for the audiovisual congruent condition, suggesting that subtle alignment of multisensory background stimuli can increase performance on the current task.
68
Keil J. Double Flash Illusions: Current Findings and Future Directions. Front Neurosci 2020; 14:298. PMID: 32317920; PMCID: PMC7146460; DOI: 10.3389/fnins.2020.00298.
Abstract
Twenty years ago, the first report on the sound-induced double flash illusion, a visual illusion induced by sound, was published. In this paradigm, participants are presented with different numbers of auditory and visual stimuli. When the number of auditory stimuli is incongruent with the number of visual stimuli, the influence of auditory information on visual perception can lead to the perception of the illusion. Thus, combining two auditory stimuli with one visual stimulus can induce the perception of two visual stimuli, the so-called fission illusion. Alternatively, combining one auditory stimulus with two visual stimuli can induce the perception of one visual stimulus, the so-called fusion illusion. Overall, current research shows that the illusion is a reliable indicator of multisensory integration. It has also been replicated using different stimulus combinations, such as visual and tactile stimuli. Importantly, the robustness of the illusion allows its widespread use for assessing multisensory integration across different groups of healthy participants and clinical populations and in various task settings. This review gives an overview of the experimental evidence supporting the illusion, the current state of research concerning the influence of cognitive processes on the illusion, the neural mechanisms underlying the illusion, and future research directions. Moreover, an exemplary experimental setup is described, with different options for examining perception, alongside code to test and replicate the illusion online or in the laboratory.
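The review points to example code for running the illusion. As a hedged sketch of what a single fission trial (two beeps, one flash) might look like, assuming a working PsychoPy installation and only approximate timing (a real experiment would use frame-accurate scheduling and calibrated audio latency):

```python
from psychopy import core, sound, visual  # assumes PsychoPy is installed

win = visual.Window(size=(800, 600), color="black", units="pix")
flash = visual.Circle(win, radius=40, fillColor="white", pos=(0, -200))
beep = sound.Sound(value=440, secs=0.01)  # brief 440-Hz pip

def fission_trial(soa=0.06):
    """One 2-beep/1-flash trial; observers often report two flashes."""
    beep.play()
    flash.draw()
    win.flip()              # flash visible for ~1 frame (~17 ms at 60 Hz)
    win.flip()              # flash off
    core.wait(soa - 0.017)  # second beep roughly 60 ms after the first
    beep.play()
    core.wait(1.0)          # a response prompt would follow here

fission_trial()
win.close()
core.quit()
```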
Affiliation(s)
- Julian Keil: Biological Psychology, Christian-Albrechts-Universität zu Kiel, Kiel, Germany
69
Zuanazzi A, Noppeney U. The Intricate Interplay of Spatial Attention and Expectation: a Multisensory Perspective. Multisens Res 2020; 33:383-416. DOI: 10.1163/22134808-20201482.
Abstract
Attention (i.e., task relevance) and expectation (i.e., signal probability) are two critical top-down mechanisms guiding perceptual inference. Attention prioritizes processing of information that is relevant for observers’ current goals. Prior expectations encode the statistical structure of the environment. Research to date has mostly conflated spatial attention and expectation. Most notably, the Posner cueing paradigm manipulates spatial attention using probabilistic cues that indicate where the subsequent stimulus is likely to be presented. Only recently have studies attempted to dissociate the mechanisms of attention and expectation and characterized their interactive (i.e., synergistic) or additive influences on perception. In this review, we will first discuss methodological challenges that are involved in dissociating the mechanisms of attention and expectation. Second, we will review research that was designed to dissociate attention and expectation in the unisensory domain. Third, we will review the broad field of crossmodal endogenous and exogenous spatial attention that investigates the impact of attention across the senses. This raises the critical question of whether attention relies on amodal or modality-specific mechanisms. Fourth, we will discuss recent studies investigating the role of both spatial attention and expectation in multisensory perception, where the brain constructs a representation of the environment based on multiple sensory inputs. We conclude that spatial attention and expectation are closely intertwined in almost all circumstances of everyday life. Yet, despite their intimate relationship, attention and expectation rely on partly distinct neural mechanisms: while attentional resources are mainly shared across the senses, expectations can be formed in a modality-specific fashion.
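One way to appreciate the dissociation the review calls for: attention (task relevance) and expectation (signal probability) can be decoupled by making the response-relevant side the improbable one. Below is a toy trial-sequence generator under that assumption; all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
n_trials = 200
attended_side = "left"   # respond only to left-side targets (relevance)
p_left = 0.25            # yet left-side signals are rare (expectation)

sides = rng.choice(["left", "right"], size=n_trials, p=[p_left, 1 - p_left])
relevant = sides == attended_side
print(f"relevant (attended-side) trials: {relevant.mean():.0%}; "
      f"frequent (expected) side: right")
```

In the standard Posner design the cued side is both relevant and probable; orthogonalizing the two factors, as above, is what allows their additive or interactive effects on perception to be separated.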
Affiliation(s)
- Arianna Zuanazzi: Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, UK; Department of Psychology, New York University, New York, NY, USA
- Uta Noppeney: Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, UK; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
70
Ciraolo MF, O’Hanlon SM, Robinson CW, Sinnett S. Stimulus Onset Modulates Auditory and Visual Dominance. Vision (Basel) 2020; 4:vision4010014. PMID: 32121428; PMCID: PMC7157246; DOI: 10.3390/vision4010014.
Abstract
Investigations of multisensory integration have demonstrated that, under certain conditions, one modality is more likely to dominate the other. While the direction of this relationship typically favors the visual modality, the effect can be reversed to show auditory dominance under some conditions. The experiments presented here use an oddball detection paradigm with variable stimulus timings to test the hypothesis that a stimulus that is presented earlier will be processed first and therefore contribute to sensory dominance. Additionally, we compared two measures of sensory dominance (slowdown scores and error rate) to determine whether the type of measure used can affect which modality appears to dominate. When stimuli were presented asynchronously, analysis of slowdown scores and error rates yielded the same result; for both the 1- and 3-button versions of the task, participants were more likely to show auditory dominance when the auditory stimulus preceded the visual stimulus, whereas evidence for visual dominance was observed as the auditory stimulus was delayed. In contrast, for the simultaneous condition, slowdown scores indicated auditory dominance, whereas error rates indicated visual dominance. Overall, these results provide empirical support for the hypothesis that the modality that engages processing first is more likely to show dominance, and suggest that more explicit measures of sensory dominance may favor the visual modality.
Affiliation(s)
- Margeaux F. Ciraolo: College of Health Solutions, Arizona State University, 550 N 3rd St., Phoenix, AZ 85004, USA
- Samantha M. O’Hanlon: School of Psychological Science, Oregon State University, 2950 SW Jefferson Way, Corvallis, OR 97331, USA
- Christopher W. Robinson: Department of Psychology, The Ohio State University at Newark, 1179 University Dr., Newark, OH 43055, USA
- Scott Sinnett: Department of Psychology, University of Hawai’i at Mānoa, 2530 Dole St., Sakamaki C400, Honolulu, HI 96822, USA
71
Badde S, Navarro KT, Landy MS. Modality-specific attention attenuates visual-tactile integration and recalibration effects by reducing prior expectations of a common source for vision and touch. Cognition 2020; 197:104170. PMID: 32036027; DOI: 10.1016/j.cognition.2019.104170.
Abstract
At any moment in time, streams of information reach the brain through the different senses. Given this wealth of noisy information, it is essential that we select information of relevance - a function fulfilled by attention - and infer its causal structure to eventually take advantage of redundancies across the senses. Yet, the role of selective attention during causal inference in cross-modal perception is unknown. We tested experimentally whether the distribution of attention across vision and touch enhances cross-modal spatial integration (visual-tactile ventriloquism effect, Expt. 1) and recalibration (visual-tactile ventriloquism aftereffect, Expt. 2) compared to modality-specific attention, and then used causal-inference modeling to isolate the mechanisms behind the attentional modulation. In both experiments, we found stronger effects of vision on touch under distributed than under modality-specific attention. Model comparison confirmed that participants used Bayes-optimal causal inference to localize visual and tactile stimuli presented as part of a visual-tactile stimulus pair, whereas simultaneously collected unity judgments - indicating whether the visual-tactile pair was perceived as spatially-aligned - relied on a sub-optimal heuristic. The best-fitting model revealed that attention modulated sensory and cognitive components of causal inference. First, distributed attention led to an increase of sensory noise compared to selective attention toward one modality. Second, attending to both modalities strengthened the stimulus-independent expectation that the two signals belong together, the prior probability of a common source for vision and touch. Yet, only the increase in the expectation of vision and touch sharing a common source was able to explain the observed enhancement of visual-tactile integration and recalibration effects with distributed attention. In contrast, the change in sensory noise explained only a fraction of the observed enhancements, as its consequences vary with the overall level of noise and stimulus congruency. Increased sensory noise leads to enhanced integration effects for visual-tactile pairs with a large spatial discrepancy, but reduced integration effects for stimuli with a small or no cross-modal discrepancy. In sum, our study indicates a weak a priori association between visual and tactile spatial signals that can be strengthened by distributing attention across both modalities.
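The causal-inference modeling referenced here follows the standard Bayesian formulation (e.g., Körding et al., 2007, for audiovisual localization). Below is a compact sketch with made-up noise parameters; the study's fitted values are not reproduced here.

```python
import numpy as np

def bci_estimate(x_v, x_t, sigma_v, sigma_t, sigma_p, p_common):
    """Bayes-optimal causal-inference localization of a tactile stimulus.

    x_v, x_t: noisy visual/tactile position measurements (deg);
    sigma_*: sensory and spatial-prior standard deviations (zero-mean prior);
    p_common: prior probability that vision and touch share one source.
    Returns (model-averaged tactile estimate, posterior p(common cause)).
    """
    var_v, var_t, var_p = sigma_v**2, sigma_t**2, sigma_p**2
    var_c1 = var_v * var_t + var_v * var_p + var_t * var_p
    like_c1 = np.exp(-((x_v - x_t)**2 * var_p + x_v**2 * var_t + x_t**2 * var_v)
                     / (2 * var_c1)) / (2 * np.pi * np.sqrt(var_c1))
    like_c2 = (np.exp(-x_v**2 / (2 * (var_v + var_p)))
               / np.sqrt(2 * np.pi * (var_v + var_p))
               * np.exp(-x_t**2 / (2 * (var_t + var_p)))
               / np.sqrt(2 * np.pi * (var_t + var_p)))
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Reliability-weighted estimates under each causal structure.
    fused = (x_v / var_v + x_t / var_t) / (1 / var_v + 1 / var_t + 1 / var_p)
    seg_t = (x_t / var_t) / (1 / var_t + 1 / var_p)
    return post_c1 * fused + (1 - post_c1) * seg_t, post_c1

est, p_c1 = bci_estimate(x_v=5.0, x_t=0.0, sigma_v=1.0, sigma_t=2.0,
                         sigma_p=10.0, p_common=0.5)
print(f"tactile estimate: {est:.2f} deg, p(common cause): {p_c1:.2f}")
```

On this account, the attentional effect reported above corresponds to a higher p_common when attention is distributed across vision and touch, which pulls the tactile estimate toward the visual one.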
Affiliation(s)
- Stephanie Badde: Department of Psychology and Center of Neural Science, New York University, 6 Washington Place, New York, NY, 10003, USA
- Karen T Navarro: Department of Psychology, University of Minnesota, 75 E River Rd., Minneapolis, MN, 55455, USA
- Michael S Landy: Department of Psychology and Center of Neural Science, New York University, 6 Washington Place, New York, NY, 10003, USA
72
Cue-target onset asynchrony modulates interaction between exogenous attention and audiovisual integration. Cogn Process 2020; 21:261-270. PMID: 31953644; DOI: 10.1007/s10339-020-00950-2.
Abstract
Previous studies have shown that exogenous attention decreases audiovisual integration (AVI); however, whether the interaction between exogenous attention and AVI is influenced by cue-target onset asynchrony (CTOA) remains unclear. To clarify this matter, twenty participants were recruited to perform an auditory/visual discrimination task and were instructed to respond to the target stimuli as rapidly and accurately as possible. Analysis of the mean response times showed an effective cueing effect under all cued conditions and significant response facilitation for all audiovisual stimuli. A further comparison of the differences between the audiovisual cumulative distribution functions (CDFs) and the race-model CDFs showed that the AVI latency was shortened under the cued condition relative to the no-cue condition, and that there was a significant break point at a CTOA of 200 ms, with the AVI decreasing from 100 to 200 ms and increasing from 200 to 400 ms. These results indicate different mechanisms for the interaction between exogenous attention and the AVI under shorter and longer CTOA conditions, and further suggest that there may be a temporal window within which the AVI effect is mainly affected by exogenous attention, whereas beyond this window the interaction might be subject to interference from endogenous attention.
73
Császár N, Kapócs G, Bókkon I. A possible key role of vision in the development of schizophrenia. Rev Neurosci 2019; 30:359-379. PMID: 30244235; DOI: 10.1515/revneuro-2018-0022.
Abstract
Based on a brief overview of the various aspects of schizophrenia reported by numerous studies, here we hypothesize that schizophrenia may originate (and in part be performed) from visual areas. In other words, it seems that a normal visual system or at least an evanescent visual perception may be an essential prerequisite for the development of schizophrenia as well as of various types of hallucinations. Our study focuses on auditory and visual hallucinations, as they are the most prominent features of schizophrenic hallucinations (and also the most studied types of hallucinations). Here, we evaluate the possible key role of the visual system in the development of schizophrenia.
Affiliation(s)
- Noemi Császár: Gaspar Karoly University Psychological Institute, H-1091 Budapest, Hungary; Psychosomatic Outpatient Department, H-1037 Budapest, Hungary
- Gabor Kapócs: Buda Family Centred Mental Health Centre, Department of Psychiatry and Psychiatric Rehabilitation, St. John Hospital, Budapest, Hungary
- István Bókkon: Psychosomatic Outpatient Department, H-1037 Budapest, Hungary; Vision Research Institute, Neuroscience and Consciousness Research Department, 25 Rita Street, Lowell, MA 01854, USA
74
|
Pan F, Zhang L, Ou Y, Zhang X. The audio-visual integration effect on music emotion: Behavioral and physiological evidence. PLoS One 2019; 14:e0217040. [PMID: 31145745 PMCID: PMC6542535 DOI: 10.1371/journal.pone.0217040] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2018] [Accepted: 05/05/2019] [Indexed: 11/29/2022] Open
Abstract
Previous research has indicated that, compared to audio-only presentation, audio-visual congruent presentation can lead to a more intense emotional response. In the present study, we investigated the audio-visual integration effect on emotions elicited by positive or negative music and the role of the presentation duration of visual information. Participants were presented with an audio-only condition, an audio-visual congruent condition, and an audio-visual incongruent condition and were then required to judge the intensity of the emotional experience elicited by the music. Their emotional responses to the music were measured using self-ratings and physiological measures, including heart rate, skin temperature, EMG root mean square, and prefrontal EEG. Relative to audio-only presentation, audio-visual congruent presentation led to a more intense emotional response. More importantly, audio-visual integration occurred for both positive and negative music. Furthermore, the audio-visual integration effect was larger for positive music than for negative music, whereas for negative music the effect was strongest when the visual information was presented within 80 s. These results suggest that when the music was positive, the effect of audio-visual integration was greater, and that when the music was negative, the modulating effect of the presentation duration of visual information on music-induced emotion was more significant.
Collapse
Affiliation(s)
- Fada Pan
- School of Education Science, Nantong University, Nantong, China
| | - Li Zhang
- School of Education Science, Nantong University, Nantong, China
| | - Yuhong Ou
- School of Education Science, Nantong University, Nantong, China
| | - Xinni Zhang
- School of Education Science, Nantong University, Nantong, China
| |
Collapse
|
75
|
Tang X, Gao Y, Yang W, Ren Y, Wu J, Zhang M, Wu Q. Bimodal-divided attention attenuates visually induced inhibition of return with audiovisual targets. Exp Brain Res 2019; 237:1093-1107. [PMID: 30770958 DOI: 10.1007/s00221-019-05488-0] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2017] [Accepted: 02/04/2019] [Indexed: 11/27/2022]
Abstract
Inhibition of return (IOR) refers to the slower response to a target appearing at a previously attended location in a cue-target paradigm. It has been explored extensively in the visual and auditory modalities. This study investigates differences between the IOR of audiovisual targets and the IOR of visual targets under conditions of modality-specific selective attention (Experiment 1) and divided-modalities attention (Experiment 2). We employed an exogenous spatial cueing paradigm and manipulated the modality of the targets: visual, auditory, or audiovisual. The participants were asked to detect targets in the visual modality or in both the visual and auditory modalities, presented on the same (cued) or opposite (uncued) side as the preceding visual peripheral cues. In Experiment 1, we found comparable IOR for visual and audiovisual targets when participants were asked to attend selectively to the visual modality. In Experiment 2, however, the magnitude of IOR was smaller for audiovisual targets than for visual targets when participants attended to both the visual and auditory modalities. We also observed a reduced multisensory response enhancement effect and reduced race model inequality violation at cued locations relative to uncued locations. These results provide the first evidence of IOR with audiovisual targets. Furthermore, IOR with audiovisual targets decreases when attention is divided across both modalities. The interaction between exogenous spatial attention and audiovisual integration is discussed.
Collapse
Affiliation(s)
- Xiaoyu Tang
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, 116029, China.
- Cognitive Neuroscience Laboratory, Okayama University, Okayama, 7008530, Japan.
| | - Yulin Gao
- Department of Psychology, Jilin University, Changchun, 130012, China
| | - Weiping Yang
- Department of Psychology, Hubei University, Wuhan, 430062, China
| | - Yanna Ren
- Department of Psychology, Guiyang University of Chinese Medicine, Guiyang, 550025, China
| | - Jinglong Wu
- Cognitive Neuroscience Laboratory, Okayama University, Okayama, 7008530, Japan
- Shanghai University of Traditional Chinese Medicine, Shanghai, 201203, China
- Key Laboratory of Biomimetic Robots and Systems, State Key Laboratory of Intelligent Control and Decision of Complex Systems, Beijing Institute of Technology, Beijing, 100081, China
| | - Ming Zhang
- Department of Psychology, Soochow University, Suzhou, 215123, China.
| | - Qiong Wu
- Cognitive Neuroscience Laboratory, Okayama University, Okayama, 7008530, Japan.
| |
Collapse
|
76
|
Wahn B, Sinnett S. Shared or Distinct Attentional Resources? Confounds in Dual Task Designs, Countermeasures, and Guidelines. Multisens Res 2019; 32:145-163. [PMID: 31059470 DOI: 10.1163/22134808-20181328] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2018] [Accepted: 11/22/2018] [Indexed: 11/19/2022]
Abstract
Human information processing is limited by attentional resources. That is, via attentional mechanisms humans select information that is relevant for their goals, and discard other information. While limitations of attentional processing have been investigated extensively in each sensory modality, there is debate as to whether sensory modalities access shared resources, or if instead distinct resources are dedicated to individual sensory modalities. Research addressing this question has used dual task designs, with two tasks performed either in a single sensory modality or in two separate modalities. The rationale is that, if two tasks performed in separate sensory modalities interfere less or not at all compared to two tasks performed in the same sensory modality, then attentional resources are distinct across the sensory modalities. If task interference is equal regardless of whether tasks are performed in separate sensory modalities or the same sensory modality, then attentional resources are shared across the sensory modalities. Due to their complexity, dual task designs face many methodological difficulties. In the present review, we discuss potential confounds and countermeasures. In particular, we discuss 1) compound interference measures to circumvent problems with participants dividing attention unequally across tasks, 2) staircase procedures to match difficulty levels of tasks and counteracting problems with interpreting results, 3) choosing tasks that continuously engage participants to minimize issues arising from task switching, and 4) reducing motor demands to avoid sources of task interference, which are independent of the involved sensory modalities.
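As an aside on point 2, a staircase procedure can be sketched in a few lines. This is a generic 1-up/2-down rule (which converges near the 70.7%-correct point), not the specific procedure of any study reviewed here; the toy observer and all parameters are illustrative.

```python
import random

def one_up_two_down(n_trials=200, level=1.0, step=0.1, floor=0.05):
    """1-up/2-down staircase: harder after two consecutive correct responses,
    easier after each error; tracks roughly 70.7% correct."""
    streak, levels = 0, []
    for _ in range(n_trials):
        levels.append(level)
        # Toy observer: higher stimulus level -> higher probability correct.
        correct = random.random() < min(0.95, 0.5 + level)
        if correct:
            streak += 1
            if streak == 2:                 # two in a row: make the task harder
                level = max(floor, level - step)
                streak = 0
        else:                               # an error: make the task easier
            level += step
            streak = 0
    return levels

levels = one_up_two_down()
print("estimated threshold:", sum(levels[-40:]) / 40)
```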
Collapse
Affiliation(s)
- Basil Wahn
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada
| | - Scott Sinnett
- Department of Psychology, University of Hawai'i at Mānoa, Honolulu, HI, USA
| |
Collapse
|
77
|
Visually induced inhibition of return affects the audiovisual integration under different SOA conditions. ACTA PSYCHOLOGICA SINICA 2019. [DOI: 10.3724/sp.j.1041.2019.00759] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
78
|
The self-information recognition advantage: Evidence from the attentional orienting network. ACTA PSYCHOLOGICA SINICA 2018. [DOI: 10.3724/sp.j.1041.2018.01356] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
80
|
Wan Y, Chen L. Temporal Reference, Attentional Modulation, and Crossmodal Assimilation. Front Comput Neurosci 2018; 12:39. [PMID: 29922143 PMCID: PMC5996128 DOI: 10.3389/fncom.2018.00039] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2018] [Accepted: 05/16/2018] [Indexed: 11/18/2022] Open
Abstract
The crossmodal assimilation effect refers to the prominent phenomenon by which an ensemble mean extracted from a sequence of task-irrelevant distractor events, such as auditory intervals, assimilates/biases the perception of subsequent task-relevant target events (such as a visual interval) in another sensory modality. In the current experiments, using a visual Ternus display, we examined the role of the temporal reference, operationalized as the time information accumulated before the onset of the target event, as well as of attentional modulation, in crossmodal temporal interaction. Specifically, we examined how the global time interval, the mean auditory inter-interval, and the last interval in the auditory sequence assimilate and bias the subsequent percept of visual Ternus motion (element motion vs. group motion). We demonstrated that both the ensemble (geometric) mean and the last interval in the auditory sequence bias the percept of visual motion: a longer mean (or last) interval elicited more reports of group motion, whereas a shorter mean (or last) interval gave rise to a more dominant percept of element motion. Importantly, observers showed dynamic adaptation to the temporal reference of crossmodal assimilation: when the target visual Ternus stimuli were separated from the preceding sound sequence by a long gap interval, the assimilation effect of the ensemble mean was reduced. Our findings suggest that crossmodal assimilation relies on a suitable temporal reference at the adaptation level and reveal a general temporal perceptual grouping principle underlying complex audio-visual interactions in everyday dynamic situations.
Collapse
Affiliation(s)
| | - Lihan Chen
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
| |
Collapse
|
81
|
Mühlberg S, Soto-Faraco S. Cross-modal decoupling in temporal attention between audition and touch. PSYCHOLOGICAL RESEARCH 2018; 83:1626-1639. [PMID: 29774432 DOI: 10.1007/s00426-018-1023-6] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2017] [Accepted: 05/04/2018] [Indexed: 10/16/2022]
Abstract
Temporal orienting leads to well-documented behavioural benefits for sensory events occurring at the anticipated moment. However, the consequences of temporal orienting in cross-modal contexts are still unclear. On the one hand, some studies using audio-tactile paradigms suggest that attentional orienting in time and in modality form a closely coupled system, in which temporal orienting dominates modality orienting, similar to what happens in cross-modal spatial attention. On the other hand, recent findings using a visuo-tactile paradigm suggest that attentional orienting in time can unfold independently in each modality, leading to cross-modal decoupling. In the present study, we investigated whether cross-modal decoupling in time extends to audio-tactile contexts. If so, decoupling might represent a general property of cross-modal attention in time. To this end, we used a speeded discrimination task in which we manipulated the probability of target presentation in time and modality. In each trial, a manipulation of time-based expectancy was used to guide participants' attention to task-relevant events, either tactile or auditory, at different points in time. In two experiments, we found that participants generally showed enhanced behavioural performance at the most likely onset time of each modality, with no evidence for coupling. This pattern supports the hypothesis that cross-modal decoupling could be a general phenomenon in temporal orienting.
Collapse
Affiliation(s)
- Stefanie Mühlberg
- Center of Brain and Cognition, Universitat Pompeu Fabra, Barcelona, 08018, Spain; Departament de Tecnologies de la Informació i les Comunicacions, Universitat Pompeu Fabra, Carrer de Ramon Trias Fargas, 25-27, Edifici Mercè Rodoreda (Room 24.327), 08005, Barcelona, Spain
| | - Salvador Soto-Faraco
- Center of Brain and Cognition, Universitat Pompeu Fabra, Barcelona, 08018, Spain; ICREA, Institució Catalana de Recerca i Estudis Avançats, Barcelona, 08010, Spain
| |
Collapse
|
82
|
Pizzamiglio S, Abdalla H, Naeem U, Turner DL. Neural predictors of gait stability when walking freely in the real-world. J Neuroeng Rehabil 2018; 15:11. [PMID: 29486775 PMCID: PMC5830090 DOI: 10.1186/s12984-018-0357-z] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2017] [Accepted: 02/16/2018] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Gait impairments during real-world locomotion are common in neurological diseases. However, very little is currently known about the neural correlates of walking in the real world or about which regions of the brain are involved in regulating gait stability and performance. As a first step towards understanding how neural control of gait may be impaired in neurological conditions such as Parkinson's disease, we investigated how regional brain activation might predict walking performance in the urban environment, and whilst engaging with secondary tasks, in healthy subjects. METHODS We recorded gait characteristics, including trunk acceleration, and brain activation in 14 healthy young subjects whilst they walked around the university campus freely (single task), while conversing with the experimenter, and while texting with their smartphone. Neural power spectral density (PSD) was evaluated in three brain regions of interest, namely the pre-frontal cortex (PFC) and the bilateral posterior parietal cortex (right/left PPC). We hypothesized that specific regional neural activation would predict trunk acceleration data obtained during the different walking conditions. RESULTS Vertical trunk acceleration was predicted by gait velocity and left PPC theta (4-7 Hz) band PSD in single-task walking (R-squared = 0.725, p = 0.001) and by gait velocity and left PPC alpha (8-12 Hz) band PSD in walking while conversing (R-squared = 0.727, p = 0.001). Medio-lateral trunk acceleration was predicted by left PPC beta (15-25 Hz) band PSD when walking while texting (R-squared = 0.434, p = 0.010). CONCLUSIONS We suggest that the left PPC may be involved in the processes of sensorimotor integration and gait control during walking in real-world conditions. Frequency-specific coding was operative in different dual tasks and may be developed into biomarkers of gait deficits in neurological conditions during performance of these now commonly undertaken dual tasks.
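The band-power regression reported here follows a common recipe: estimate the PSD per region, average power within a frequency band, and regress the gait measure on it. A simplified sketch with synthetic data (the sampling rate, band edges, and variable names are assumptions, not the authors' pipeline, and gait velocity is omitted as a predictor for brevity):

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
fs = 250                                   # sampling rate in Hz (assumed)
n_subjects = 14
theta_power, vertical_accel = [], []

for _ in range(n_subjects):
    eeg = rng.standard_normal(fs * 60)     # 1 min of synthetic 'left PPC' signal
    f, pxx = welch(eeg, fs=fs, nperseg=fs * 2)
    band = (f >= 4) & (f <= 7)             # theta band, as in the abstract
    theta_power.append(np.log(pxx[band].mean()))
    vertical_accel.append(rng.normal(1.0, 0.2))  # synthetic trunk acceleration

# Ordinary least squares: acceleration ~ intercept + theta band power.
X = np.column_stack([np.ones(n_subjects), theta_power])
beta, *_ = np.linalg.lstsq(X, np.array(vertical_accel), rcond=None)
print("regression slope for theta power:", beta[1])
```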
Collapse
Affiliation(s)
- Sara Pizzamiglio
- Neuroplasticity and Neurorehabilitation Doctoral Training Programme, Neurorehabilitation Unit, School of Health, Sport and Bioscience, College of Applied Health, University of East London, E15 4LZ, London, UK; School of Architecture, Computing and Engineering, University of East London, University Way, London, UK
| | - Hassan Abdalla
- School of Architecture, Computing and Engineering, University of East London, University Way, London, UK
| | - Usman Naeem
- School of Architecture, Computing and Engineering, University of East London, University Way, London, UK
| | - Duncan L Turner
- Neuroplasticity and Neurorehabilitation Doctoral Training Programme, Neurorehabilitation Unit, School of Health, Sport and Bioscience, College of Applied Health, University of East London, E15 4LZ, London, UK; UCLP Centre for Neurorehabilitation, London, UK
| |
Collapse
|
83
|
Bailey HD, Mullaney AB, Gibney KD, Kwakye LD. Audiovisual Integration Varies With Target and Environment Richness in Immersive Virtual Reality. Multisens Res 2018; 31:689-713. [PMID: 31264608 DOI: 10.1163/22134808-20181301] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2017] [Accepted: 02/26/2018] [Indexed: 11/19/2022]
Abstract
We are continually bombarded by information arriving to each of our senses; however, the brain seems to effortlessly integrate this separate information into a unified percept. Although multisensory integration has been researched extensively using simple computer tasks and stimuli, much less is known about how multisensory integration functions in real-world contexts. Additionally, several recent studies have demonstrated that multisensory integration varies tremendously across naturalistic stimuli. Virtual reality can be used to study multisensory integration in realistic settings because it combines realism with precise control over the environment and stimulus presentation. In the current study, we investigated whether multisensory integration as measured by the redundant signals effect (RSE) is observable in naturalistic environments using virtual reality and whether it differs as a function of target and/or environment cue-richness. Participants detected auditory, visual, and audiovisual targets which varied in cue-richness within three distinct virtual worlds that also varied in cue-richness. We demonstrated integrative effects in each environment-by-target pairing and further showed a modest effect on multisensory integration as a function of target cue-richness, but only in the cue-rich environment. Our study is the first to definitively show that minimal and more naturalistic tasks elicit comparable redundant signals effects. Our results also suggest that multisensory integration may function differently depending on the features of the environment. The results of this study have important implications for the design of virtual multisensory environments that are currently being used for training, educational, and entertainment purposes.
Collapse
Affiliation(s)
| | | | - Kyla D Gibney
- Department of Neuroscience, Oberlin College, Oberlin, OH, USA
| | | |
Collapse
|
84
|
Regenbogen C, Seubert J, Johansson E, Finkelmeyer A, Andersson P, Lundström JN. The intraparietal sulcus governs multisensory integration of audiovisual information based on task difficulty. Hum Brain Mapp 2017; 39:1313-1326. [PMID: 29235185 DOI: 10.1002/hbm.23918] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2017] [Revised: 11/30/2017] [Accepted: 12/04/2017] [Indexed: 01/20/2023] Open
Abstract
Object recognition benefits maximally from multimodal sensory input when stimulus presentation is noisy or degraded. Whether this advantage can be attributed specifically to the extent of overlap in object-related information, or rather to object-unspecific enhancement due to the mere presence of additional sensory stimulation, remains unclear. Further, the cortical processing differences driving increased multisensory integration (MSI) for degraded compared with clear information remain poorly understood. Here, two consecutive studies first compared the behavioral benefits of audio-visual overlap of object-related information, relative to conditions where one channel carried information and the other carried noise. A hierarchical drift diffusion model indicated performance enhancement when auditory and visual object-related information was simultaneously present for degraded stimuli. A subsequent fMRI study revealed visual dominance on a behavioral and neural level for clear stimuli, while degraded stimulus processing was mainly characterized by activation of a frontoparietal multisensory network, including the intraparietal sulcus (IPS). Connectivity analyses indicated that integration of degraded object-related information relied on IPS input, whereas clear stimuli were integrated through direct information exchange between visual and auditory sensory cortices. These results indicate that the inverse effectiveness observed for identification of degraded relative to clear objects in behavior and brain activation might be facilitated by selective recruitment of an executive cortical network which uses the IPS as a relay mediating crossmodal sensory information exchange.
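For readers unfamiliar with the model class, a drift diffusion process can be simulated in a few lines. This toy, non-hierarchical simulation only illustrates the accumulation-to-bound idea behind the behavioral analysis; all parameters are invented and it is unrelated to the authors' hierarchical fit:

```python
import numpy as np

def simulate_ddm(drift, bound=1.0, noise=1.0, dt=0.001, max_t=3.0, rng=None):
    """One drift-diffusion trial: noisy evidence accumulates between -bound
    and +bound; returns (correct_choice, reaction_time_in_seconds)."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x >= bound, t

rng = np.random.default_rng(2)
# A higher drift rate mimics more informative (e.g., redundant audio-visual)
# input: responses become faster and more accurate.
for drift in (0.5, 2.0):
    trials = [simulate_ddm(drift, rng=rng) for _ in range(500)]
    acc = np.mean([c for c, _ in trials])
    rt = np.mean([t for _, t in trials])
    print(f"drift={drift}: accuracy={acc:.2f}, mean RT={rt:.2f} s")
```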
Collapse
Affiliation(s)
- Christina Regenbogen
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Germany; JARA - BRAIN Institute 1: Structure-Function Relationship: Decoding the Human Brain at systemic levels, Forschungszentrum Jülich, Germany
| | - Janina Seubert
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Aging Research Center, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet and Stockholm University, Stockholm, Sweden
| | - Emilia Johansson
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
| | - Andreas Finkelmeyer
- Institute of Neuroscience, Newcastle University, Newcastle-upon-Tyne, United Kingdom
| | - Patrik Andersson
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Stockholm University Brain Imaging Centre, Stockholm University, Sweden
| | - Johan N Lundström
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Monell Chemical Senses Center, Philadelphia, Pennsylvania; Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania
| |
Collapse
|
85
|
Wahn B, König P. Can Limitations of Visuospatial Attention Be Circumvented? A Review. Front Psychol 2017; 8:1896. [PMID: 29163278 PMCID: PMC5665179 DOI: 10.3389/fpsyg.2017.01896] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2017] [Accepted: 10/12/2017] [Indexed: 12/03/2022] Open
Abstract
In daily life, humans are bombarded with visual input. Yet, their attentional capacities for processing this input are severely limited. Several studies have investigated factors that influence these attentional limitations and have identified methods to circumvent them. Here, we provide a review of these findings. We first review studies that have demonstrated limitations of visuospatial attention and investigated physiological correlates of these limitations. We then review studies in multisensory research that have explored whether limitations in visuospatial attention can be circumvented by distributing information processing across several sensory modalities. Finally, we discuss research from the field of joint action that has investigated how limitations of visuospatial attention can be circumvented by distributing task demands across people and providing them with multisensory input. We conclude that limitations of visuospatial attention can be circumvented by distributing attentional processing across sensory modalities when tasks involve spatial as well as object-based attentional processing. However, if only spatial attentional processing is required, limitations of visuospatial attention cannot be circumvented by distributing attentional processing. These findings from multisensory research are applicable to visuospatial tasks that are performed jointly by two individuals. That is, in a joint visuospatial task requiring object-based as well as spatial attentional processing, joint performance is facilitated when task demands are distributed across sensory modalities. Future research could further investigate how applying findings from multisensory research to joint action research may facilitate joint performance. Generally, findings are applicable to real-world scenarios such as aviation or car-driving to circumvent limitations of visuospatial attention.
Collapse
Affiliation(s)
- Basil Wahn
- Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany
| | - Peter König
- Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany; Institut für Neurophysiologie und Pathophysiologie, Universitätsklinikum Hamburg-Eppendorf, Hamburg, Germany
| |
Collapse
|
86
|
Dean CL, Eggleston BA, Gibney KD, Aligbe E, Blackwell M, Kwakye LD. Auditory and visual distractors disrupt multisensory temporal acuity in the crossmodal temporal order judgment task. PLoS One 2017; 12:e0179564. [PMID: 28723907 PMCID: PMC5516972 DOI: 10.1371/journal.pone.0179564] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2017] [Accepted: 05/30/2017] [Indexed: 12/15/2022] Open
Abstract
The ability to synthesize information across multiple senses is known as multisensory integration and is essential to our understanding of the world around us. Sensory stimuli that occur close in time are likely to be integrated, and the accuracy of this integration depends on our ability to precisely discriminate the relative timing of unisensory stimuli (crossmodal temporal acuity). Previous research has shown that multisensory integration is modulated both by bottom-up stimulus features, such as the temporal structure of unisensory stimuli, and by top-down processes such as attention. However, it is currently uncertain how attention alters crossmodal temporal acuity. The present study investigated whether increasing attentional load would decrease crossmodal temporal acuity by utilizing a dual-task paradigm. Participants were asked to judge the temporal order of a flash and a beep presented at various temporal offsets (crossmodal temporal order judgment (CTOJ) task) while also directing their attention to a secondary distractor task, in which they detected a target stimulus within a stream of visual or auditory distractors. We found decreased performance on the CTOJ task, as well as increases in both the positive and negative just noticeable difference, with increasing load for both the auditory and visual distractor tasks. This strongly suggests that attention promotes greater crossmodal temporal acuity and that reducing the attentional capacity to process multisensory stimuli results in detriments to multisensory temporal processing. Our study is the first to demonstrate changes in multisensory temporal processing with decreased attentional capacity using a dual task paradigm and has strong implications for developmental disorders such as autism spectrum disorders and developmental dyslexia, which are associated with alterations in both multisensory temporal processing and attention.
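Quantities like the just noticeable difference (JND) in a temporal order judgment task are typically read off a psychometric function fitted to the response proportions. A minimal sketch with made-up data and a cumulative Gaussian fit (the SOA values, proportions, and names are all hypothetical, not the study's data):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical CTOJ data: SOA in ms (positive = flash first) and the
# proportion of "flash first" responses at each SOA.
soa = np.array([-200.0, -100.0, -50.0, 0.0, 50.0, 100.0, 200.0])
p_flash_first = np.array([0.05, 0.15, 0.35, 0.55, 0.70, 0.88, 0.97])

def cum_gauss(x, pss, sigma):
    """Cumulative Gaussian psychometric function."""
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(cum_gauss, soa, p_flash_first, p0=(0.0, 80.0))

# JND: half the SOA distance between the 25% and 75% points of the fit.
jnd = sigma * norm.ppf(0.75)
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```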
Collapse
Affiliation(s)
- Cassandra L. Dean
- Department of Neuroscience, Oberlin College, Oberlin, Ohio, United States of America
| | - Brady A. Eggleston
- Department of Neuroscience, Oberlin College, Oberlin, Ohio, United States of America
| | - Kyla David Gibney
- Department of Neuroscience, Oberlin College, Oberlin, Ohio, United States of America
| | - Enimielen Aligbe
- Department of Neuroscience, Oberlin College, Oberlin, Ohio, United States of America
| | - Marissa Blackwell
- Department of Neuroscience, Oberlin College, Oberlin, Ohio, United States of America
| | - Leslie Dowell Kwakye
- Department of Neuroscience, Oberlin College, Oberlin, Ohio, United States of America
| |
Collapse
|
87
|
Mast F, Frings C, Spence C. Crossmodal attentional control sets between vision and audition. Acta Psychol (Amst) 2017; 178:41-47. [PMID: 28575705 DOI: 10.1016/j.actpsy.2017.05.011] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2016] [Revised: 03/03/2017] [Accepted: 05/25/2017] [Indexed: 11/19/2022] Open
Abstract
The interplay between top-down and bottom-up factors in attentional selection has been a topic of extensive research and controversy amongst scientists over the past two decades. According to the influential contingent capture hypothesis, a visual stimulus needs to match the feature(s) implemented in the current attentional control sets in order to be automatically selected. Recently, however, evidence has been presented that attentional control sets affect not only visual but also crossmodal selection. The aim of the present study was therefore to establish contingent capture as a general principle of multisensory selection. A non-spatial interference task with bimodal (visual and auditory) distractors and bimodal targets was used. The target and the distractors were presented in close temporal succession. In order to perform the task correctly, the participants only had to process a predefined target feature in either of the two modalities (e.g., colour when vision was the primary modality). Note that the additional crossmodal stimulation (e.g., a specific sound when hearing was the secondary modality) was not relevant for the selection of the correct response. Nevertheless, larger interference effects were observed when the distractor matched the stimuli of both the primary and the secondary modality, and this pattern was even stronger when vision rather than audition was the primary modality. These results are therefore in line with the crossmodal contingent capture hypothesis. Both visual and auditory early processing seem to be affected by top-down control sets, even beyond the spatial dimension.
Collapse
Affiliation(s)
- Frank Mast
- University of Trier, Department of Psychology, D-54286, Germany
| | - Christian Frings
- University of Trier, Department of Psychology, D-54286, Germany.
| | - Charles Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford OX1 3UD, United Kingdom
| |
Collapse
|
88
|
Woolgar A, Zopf R. Multisensory coding in the multiple-demand regions: vibrotactile task information is coded in frontoparietal cortex. J Neurophysiol 2017; 118:703-716. [PMID: 28404826 DOI: 10.1152/jn.00559.2016] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2016] [Revised: 04/10/2017] [Accepted: 04/10/2017] [Indexed: 12/27/2022] Open
Abstract
At any given moment, our brains receive input from multiple senses. Successful behavior depends on our ability to prioritize the most important information and ignore the rest. A multiple-demand (MD) network of frontal and parietal regions is thought to support this process by adjusting to code information that is currently relevant (Duncan 2010). Accordingly, the network is proposed to encode a range of different types of information, including perceptual stimuli, task rules, and responses, as needed for the current cognitive operation. However, most MD research has used visual tasks, leaving limited information about whether these regions encode other sensory domains. We used multivoxel pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data to test whether the MD regions code the details of somatosensory stimuli, in addition to tactile-motor response transformation rules and button-press responses. Participants performed a stimulus-response task in which they discriminated between two possible vibrotactile frequencies and applied a stimulus-response transformation rule to generate a button-press response. For MD regions, we found significant coding of tactile stimulus, rule, and response. Primary and secondary somatosensory regions encoded the tactile stimuli and the button-press responses but did not represent task rules. Our findings provide evidence that MD regions can code nonvisual somatosensory task information, commensurate with a domain-general role in cognitive control.NEW & NOTEWORTHY How does the brain encode the breadth of information from our senses and use this to produce goal-directed behavior? A network of frontoparietal multiple-demand (MD) regions is implicated but has been studied almost exclusively in the context of visual tasks. We used multivariate pattern analysis of fMRI data to show that these regions encode tactile stimulus information, rules, and responses. This provides evidence for a domain-general role of the MD network in cognitive control.
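The decoding logic behind MVPA can be sketched with off-the-shelf tools. This illustrative example uses synthetic "voxel" patterns and a linear classifier under cross-validation; it is not the authors' pipeline, and every name and parameter here is an assumption:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_voxels = 120, 200

# Synthetic ROI patterns: two vibrotactile frequencies, each adding a weak
# multivariate signal on top of trial-by-trial noise.
labels = np.repeat([0, 1], n_trials // 2)
signal = 0.3 * rng.standard_normal(n_voxels)
X = rng.standard_normal((n_trials, n_voxels)) + np.outer(labels, signal)

# Linear decoder with cross-validation; above-chance accuracy implies the
# region's activity patterns carry information about the stimulus.
clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, X, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```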
Collapse
Affiliation(s)
- Alexandra Woolgar
- Perception in Action Research Centre and ARC Centre of Excellence in Cognition and Its Disorders, Department of Cognitive Science, Faculty of Human Sciences, Macquarie University, Sydney, Australia
| | - Regine Zopf
- Perception in Action Research Centre and ARC Centre of Excellence in Cognition and Its Disorders, Department of Cognitive Science, Faculty of Human Sciences, Macquarie University, Sydney, Australia
| |
Collapse
|
89
|
Yang NB, Tian Q, Fan Y, Bo QJ, Zhang L, Li L, Wang CY. Deficits of perceived spatial separation induced prepulse inhibition in patients with schizophrenia: relationships to symptoms and neurocognition. BMC Psychiatry 2017; 17:135. [PMID: 28399842 PMCID: PMC5387250 DOI: 10.1186/s12888-017-1276-4] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/27/2016] [Accepted: 03/18/2017] [Indexed: 12/15/2022] Open
Abstract
BACKGROUND Prepulse inhibition (PPI) and attention are impaired in schizophrenia, which may cause psychotic symptoms and/or hinder cognitive function. However, owing to differences in how PPI is measured, findings on the relationship between PPI and clinical symptoms or cognitive performance have been equivocal. METHODS Seventy-five patients with schizophrenia (SZ) and 50 healthy controls (HC) were assessed with a modified acoustic PPI paradigm, perceived spatial separation-induced PPI (PSS-PPI), compared with perceived spatial co-location PPI (PSC-PPI), at an inter-stimulus interval (ISI) of 120 ms. The Repeatable Battery for the Assessment of Neuropsychological Status and the Stroop Color-Word Test were administered to all subjects. RESULTS A significant decrease in the modified PPI was found in patients compared with controls, and effect sizes (Cohen's d) for patients vs. HC %PPI levels reached significance (PSC-PPI d = 0.84, PSS-PPI d = 1.27). A logistic regression model based on PSS-PPI significantly represented the diagnostic grouping (χ2 = 29.3; p < 0.001), with 85.2% area under the ROC curve in predicting group membership. In addition, patients exhibited deficits in neurocognition. Among "non-remission" patients, after controlling for gender, age, education, duration, number of relapses, onset age, cigarettes per day, and chlorpromazine-equivalent dosage, PSS-PPI levels were associated with positive and negative symptoms, PANSS total score, and thought disorder (P1, P6, P7, N5, N7, G9). In multiple linear regression analyses, male sex and higher attention scores contributed to better PSC-PPI and PSS-PPI in the control group, whereas heavier smoking and longer word-color interference times contributed to poorer PSS-PPI. In the patient group, higher education and attention scores contributed to better PSS-PPI, whereas repeated relapse contributed to poorer PSS-PPI. CONCLUSIONS Acoustic perceived spatial separation-induced PPI may shed light on psychopathological symptoms, especially thought disorder, and the mechanism(s) of the novel PPI paradigm were associated with attention function.
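For reference, percent PPI in acoustic startle paradigms is conventionally computed from startle response amplitudes as below; this is the standard formulation, and the study's exact computation may differ.

```latex
\[
\%\mathrm{PPI} \;=\; 100 \times
\frac{A_{\text{pulse alone}} - A_{\text{prepulse+pulse}}}
     {A_{\text{pulse alone}}}
\]
% A denotes startle response amplitude; a larger %PPI reflects stronger inhibition.
```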
Collapse
Affiliation(s)
- Ning-Bo Yang
- Department of Psychiatry, Beijing Anding Hospital, Capital Medical University, No.5 Ankang Lane, Dewai Avenue, Xicheng District, Beijing, 100088, China; Beijing Key Laboratory of Mental Disorders, No.5 Ankang Lane, Dewai Avenue, Xicheng District, Beijing, 100088, China; Beijing Institute for Brain Disorders Center of Schizophrenia, No.5 Ankang Lane, Dewai Avenue, Xicheng District, Beijing, 100088, China
| | - Qing Tian
- Department of Psychiatry, Beijing Anding Hospital, Capital Medical University, No.5 Ankang Lane, Dewai Avenue, Xicheng District, Beijing, 100088, China; Beijing Key Laboratory of Mental Disorders, No.5 Ankang Lane, Dewai Avenue, Xicheng District, Beijing, 100088, China; Beijing Institute for Brain Disorders Center of Schizophrenia, No.5 Ankang Lane, Dewai Avenue, Xicheng District, Beijing, 100088, China
| | - Yu Fan
- Department of Psychiatry, Beijing Anding Hospital, Capital Medical University, No.5 Ankang Lane, Dewai Avenue, Xicheng District, Beijing, 100088, China; Beijing Key Laboratory of Mental Disorders, No.5 Ankang Lane, Dewai Avenue, Xicheng District, Beijing, 100088, China; Beijing Institute for Brain Disorders Center of Schizophrenia, No.5 Ankang Lane, Dewai Avenue, Xicheng District, Beijing, 100088, China
| | - Qi-Jing Bo
- Department of Psychiatry, Beijing Anding Hospital, Capital Medical University, No.5 Ankang Lane, Dewai Avenue, Xicheng District, Beijing, 100088, China; Beijing Key Laboratory of Mental Disorders, No.5 Ankang Lane, Dewai Avenue, Xicheng District, Beijing, 100088, China; Beijing Institute for Brain Disorders Center of Schizophrenia, No.5 Ankang Lane, Dewai Avenue, Xicheng District, Beijing, 100088, China
| | - Liang Zhang
- Department of Psychiatry, Beijing Anding Hospital, Capital Medical University, No.5 Ankang Lane, Dewai Avenue, Xicheng District, Beijing, 100088, China; Beijing Key Laboratory of Mental Disorders, No.5 Ankang Lane, Dewai Avenue, Xicheng District, Beijing, 100088, China; Beijing Institute for Brain Disorders Center of Schizophrenia, No.5 Ankang Lane, Dewai Avenue, Xicheng District, Beijing, 100088, China
| | - Liang Li
- Department of Psychology, Peking University, Beijing, 100871, China; Key Laboratory on Machine Perception (Ministry of Education), Beijing, 100871, China; McGovern Institute for Brain Research, Beijing, 100871, China
| | - Chuan-Yue Wang
- Department of Psychiatry, Beijing Anding Hospital, Capital Medical University, No.5 Ankang Lane, Dewai Avenue, Xicheng District, Beijing, 100088, China; Beijing Key Laboratory of Mental Disorders, No.5 Ankang Lane, Dewai Avenue, Xicheng District, Beijing, 100088, China; Beijing Institute for Brain Disorders Center of Schizophrenia, No.5 Ankang Lane, Dewai Avenue, Xicheng District, Beijing, 100088, China
| |
Collapse
|
90
|
Wahn B, König P. Is Attentional Resource Allocation Across Sensory Modalities Task-Dependent? Adv Cogn Psychol 2017; 13:83-96. [PMID: 28450975 PMCID: PMC5405449 DOI: 10.5709/acp-0209-2] [Citation(s) in RCA: 62] [Impact Index Per Article: 8.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2016] [Accepted: 01/04/2017] [Indexed: 11/23/2022] Open
Abstract
Human information processing is limited by attentional resources. That is, via attentional mechanisms, humans select a limited amount of sensory input to process while other sensory input is neglected. In multisensory research, a matter of ongoing debate is whether there are distinct pools of attentional resources for each sensory modality or whether attentional resources are shared across sensory modalities. Recent studies have suggested that attentional resource allocation across sensory modalities is in part task-dependent. That is, the recruitment of attentional resources across the sensory modalities depends on whether processing involves object-based attention (e.g., the discrimination of stimulus attributes) or spatial attention (e.g., the localization of stimuli). In the present paper, we review findings in multisensory research related to this view. For the visual and auditory sensory modalities, findings suggest that distinct resources are recruited when humans perform object-based attention tasks, whereas for the visual and tactile sensory modalities, partially shared resources are recruited. If object-based attention tasks are time-critical, shared resources are recruited across the sensory modalities. When humans perform an object-based attention task in combination with a spatial attention task, partly shared resources are recruited across the sensory modalities as well. Conversely, for spatial attention tasks, attentional processing consistently involves shared attentional resources across the sensory modalities. Generally, findings suggest that the attentional system flexibly allocates attentional resources depending on task demands. We propose that such flexibility reflects a large-scale optimization strategy that minimizes the brain's costly resource expenditures and simultaneously maximizes the capability to process currently relevant information.
Collapse
Affiliation(s)
- Basil Wahn
- Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany
| | - Peter König
- Institut für Neurophysiologie und Pathophysiologie, Universitätsklinikum Hamburg-Eppendorf, Hamburg, Germany
| |
Collapse
|
91
|
Gibney KD, Aligbe E, Eggleston BA, Nunes SR, Kerkhoff WG, Dean CL, Kwakye LD. Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity. Front Integr Neurosci 2017; 11:1. [PMID: 28163675 PMCID: PMC5247431 DOI: 10.3389/fnint.2017.00001] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2016] [Accepted: 01/04/2017] [Indexed: 11/30/2022] Open
Abstract
The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and a McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors were present), low load (multisensory task while detecting the presence of a yellow letter among the visual distractors), and high load (multisensory task while detecting the presence of a number among the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, thus confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than on unisensory trials. However, the multisensory speedup violated the race model under the no and low perceptual load conditions only. Additionally, a geometric measure of Miller's inequality showed a decrease in multisensory integration on the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration in the no load condition: no changes in integration for the McGurk task with increasing load, but increases in integration for the detection task. The results of this study indicate that attention plays a crucial role in multisensory integration for both highly complex and simple multisensory tasks and that attention may interact differently with multisensory processing in individuals who do not strongly integrate multisensory information.
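The "geometric measure of Miller's inequality" mentioned here can be understood as the area of positive violation between the bimodal CDF and the race model bound. A minimal sketch (the CDF arrays are assumed to be empirical CDFs evaluated on a common time grid, as in the race model sketch under entry 72; all names are hypothetical):

```python
import numpy as np

def violation_area(t_grid, cdf_av, cdf_a, cdf_v):
    """Area of positive violation of Miller's race model inequality:
    integral over t of max(0, F_av(t) - min(F_a(t) + F_v(t), 1))."""
    bound = np.minimum(cdf_a + cdf_v, 1.0)
    excess = np.clip(cdf_av - bound, 0.0, None)
    # Trapezoidal integration over the time grid.
    return float(np.sum(0.5 * (excess[1:] + excess[:-1]) * np.diff(t_grid)))

# A larger area indicates stronger multisensory integration; comparing areas
# across load conditions mirrors the analysis described in the abstract.
```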
Collapse
Affiliation(s)
- Kyla D Gibney
- Department of Neuroscience, Oberlin College, Oberlin OH, USA
| | | | | | - Sarah R Nunes
- Department of Neuroscience, Oberlin College, Oberlin OH, USA
| | | | | | - Leslie D Kwakye
- Department of Neuroscience, Oberlin College, Oberlin OH, USA
| |
Collapse
|
92
|
Mas-Casadesús A, Gherri E. Ignoring Irrelevant Information: Enhanced Intermodal Attention in Synaesthetes. Multisens Res 2017; 30:253-277. [PMID: 31287079 DOI: 10.1163/22134808-00002566] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2016] [Accepted: 03/22/2017] [Indexed: 11/19/2022]
Abstract
Despite the fact that synaesthetes experience additional percepts during their inducer-concurrent associations that are often unrelated or irrelevant to their daily activities, they appear to be relatively unaffected by this potentially distracting information. This might suggest that synaesthetes are particularly good at ignoring irrelevant perceptual information coming from different sensory modalities. To investigate this hypothesis, the performance of a group of synaesthetes was compared to that of a matched non-synaesthete group in two different conflict tasks aimed at assessing participants' abilities to ignore irrelevant information. In order to match the sensory modality of the task-irrelevant distractors (vision) with participants' synaesthetic attentional filtering experience, we tested only synaesthetes experiencing at least one synaesthesia subtype triggering visual concurrents (e.g., grapheme-colour synaesthesia or sequence-space synaesthesia). Synaesthetes and controls performed a classic flanker task (FT) and a visuo-tactile cross-modal congruency task (CCT) in which they had to attend to tactile targets while ignoring visual distractors. While no differences were observed between synaesthetes and controls in the FT, synaesthetes showed reduced interference by the irrelevant distractors of the CCT. These findings provide the first direct evidence that synaesthetes might be more efficient than non-synaesthetes at dissociating conflicting information from different sensory modalities when the irrelevant modality correlates with their synaesthetic concurrent modality (here vision).
Collapse
Affiliation(s)
- Anna Mas-Casadesús
- School of Philosophy, Psychology, and Language Sciences, Department of Psychology, The University of Edinburgh, 7 George Square, Edinburgh EH8 9JZ, UK
| | - Elena Gherri
- School of Philosophy, Psychology, and Language Sciences, Department of Psychology, The University of Edinburgh, 7 George Square, Edinburgh EH8 9JZ, UK
| |
Collapse
|