1. Gagsch F, Valuch C, Albrecht T. Measuring attentional selection of object categories using hierarchical frequency tagging. J Vis 2024; 24:8. [PMID: 38990066; PMCID: PMC11246098; DOI: 10.1167/jov.24.7.8]
Abstract
In the present study, we used Hierarchical Frequency Tagging (Gordon et al., 2017) to investigate, using electroencephalography (EEG), how different levels of the neural processing hierarchy interact with category-selective attention during visual object recognition. We constructed stimulus sequences of cyclically wavelet-scrambled face and house stimuli at two different frequencies (f1 = 0.8 Hz and f2 = 1 Hz). For each trial, two stimulus sequences of different frequencies were superimposed and additionally augmented by a sinusoidal contrast modulation at f3 = 12.5 Hz. This allowed us to simultaneously assess higher-level processing using semantic wavelet-induced frequency tagging (SWIFT) and processing at earlier visual levels using steady-state visually evoked potentials (SSVEPs), along with their intermodulation (IM) components. To investigate the category specificity of the SWIFT signal, we manipulated the category congruence between target and distractor by superimposing two sequences containing stimuli from the same or different object categories. Participants attended to one stimulus (target) and ignored the other (distractor). Our results showed successful tagging of different levels of the cortical hierarchy. Using linear mixed-effects modeling, we detected different attentional modulation effects on lower versus higher processing levels. SWIFT and IM components were substantially increased for target versus distractor stimuli, reflecting attentional selection of the target stimuli. In addition, distractor stimuli from the same category as targets elicited stronger SWIFT signals than distractor stimuli from a different category, indicating category-selective attention. In contrast, for IM components, this category-selective attention effect was largely absent, indicating that IM components probably reflect more stimulus-specific processing.
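The intermodulation logic used throughout these studies can be made concrete with a short sketch. The tagging frequencies (f1 = 0.8 Hz, f2 = 1 Hz) come from the abstract above; the helper function and the choice of maximum harmonic order are illustrative assumptions, not the authors' analysis code.

```python
# Enumerate candidate intermodulation (IM) frequencies n*f1 +/- m*f2
# for two tagging frequencies; IM energy is expected only when the two
# tagged inputs are combined nonlinearly somewhere in the hierarchy.
def intermodulation_freqs(f1, f2, max_order=2):
    """Return sorted positive IM frequencies n*f1 +/- m*f2 (n, m >= 1)."""
    freqs = set()
    for n in range(1, max_order + 1):
        for m in range(1, max_order + 1):
            freqs.add(round(n * f1 + m * f2, 4))
            diff = round(abs(n * f1 - m * f2), 4)
            if diff > 0:  # skip degenerate 0 Hz components
                freqs.add(diff)
    return sorted(freqs)

# For f1 = 0.8 Hz and f2 = 1 Hz the low-order IM set includes
# f1 + f2 = 1.8 Hz and f2 - f1 = 0.2 Hz.
print(intermodulation_freqs(0.8, 1.0))
```

In practice these candidate bins would be checked against the EEG amplitude spectrum, excluding bins that coincide with harmonics of f1 or f2 alone.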
Affiliation(s)
- Florian Gagsch: Georg-Elias-Müller Institute for Psychology, Georg-August University, Göttingen, Germany
- Christian Valuch: Georg-Elias-Müller Institute for Psychology, Georg-August University, Göttingen, Germany
- Thorsten Albrecht: Georg-Elias-Müller Institute for Psychology, Georg-August University, Göttingen, Germany
2. Goupil N, Hochmann JR, Papeo L. Intermodulation responses show integration of interacting bodies in a new whole. Cortex 2023; 165:129-140. [PMID: 37279640; DOI: 10.1016/j.cortex.2023.04.013]
Abstract
People are often seen among other people, relating to and interacting with one another. Recent studies suggest that socially relevant spatial relations between bodies, such as the face-to-face positioning, or facingness, change the visual representation of those bodies, relative to when the same items appear unrelated (e.g., back-to-back) or in isolation. The current study addresses the hypothesis that face-to-face bodies give rise to a new whole, an integrated representation of individual bodies in a new perceptual unit. Using frequency-tagging EEG, we targeted, as a measure of integration, an EEG correlate of the non-linear combination of the neural responses to each of two individual bodies presented either face-to-face as if interacting, or back-to-back. During EEG recording, participants (N = 32) viewed two bodies, either face-to-face or back-to-back, flickering at two different frequencies (F1 and F2), yielding two distinctive responses in the EEG signal. Spectral analysis examined the responses at the intermodulation frequencies (nF1±mF2), signaling integration of individual responses. An anterior intermodulation response was observed for face-to-face bodies, but not for back-to-back bodies, nor for face-to-face chairs and machines. These results show that interacting bodies are integrated into a representation that is more than the sum of its parts. This effect, specific to body dyads, may mark an early step in the transformation towards an integrated representation of a social event, from the visual representation of individual participants in that event.
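The spectral analysis described above rests on a simple signal-processing fact: a purely linear superposition of two tagged responses has no energy at nF1 ± mF2, whereas a nonlinear (e.g., multiplicative) combination does. The toy signal below illustrates this with a single-bin DFT; the frequencies, sampling rate, and multiplicative nonlinearity are illustrative assumptions, not the study's stimulation or analysis parameters.

```python
import math

def dft_amplitude(signal, freq, fs):
    """Single-bin DFT amplitude of `signal` (sampled at fs Hz) at `freq` Hz."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    return 2 * math.hypot(re, im) / n

fs, dur = 500, 2.0                      # sampling rate (Hz), duration (s)
t = [i / fs for i in range(int(fs * dur))]
f1, f2 = 5.0, 7.0                       # two illustrative tagging frequencies

# Linear superposition of the two tagged responses vs. a multiplicative
# (nonlinear) combination of the same two inputs.
linear = [math.sin(2 * math.pi * f1 * x) + math.sin(2 * math.pi * f2 * x) for x in t]
nonlinear = [math.sin(2 * math.pi * f1 * x) * math.sin(2 * math.pi * f2 * x) for x in t]

# The product term contains components at f2 - f1 and f1 + f2 (here 2 and
# 12 Hz), whereas the linear sum has no energy at those IM frequencies.
print(dft_amplitude(nonlinear, f1 + f2, fs))  # clearly above zero
print(dft_amplitude(linear, f1 + f2, fs))     # near zero
```

This is why an IM response is treated as an objective signature of integration: it cannot arise from two independent, linearly summed responses.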
Affiliation(s)
- Nicolas Goupil: Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron, France
- Jean-Rémy Hochmann: Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron, France
- Liuba Papeo: Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron, France
3. Cao F, Zeng K, Zheng J, Yu L, Liu S, Zhang L, Xu Q. Neural response and representation: Facial expressions in scenes. Psychophysiology 2023; 60:e14184. [PMID: 36114680; DOI: 10.1111/psyp.14184]
Abstract
Previous studies have shown that the brain generates expectations based on scenes, which affects facial expression recognition. However, although facial expressions are known to interact with perception, the mechanism underlying this interaction remains poorly understood. Here, we used frequency labeling and decoding techniques to reveal the effects of scene-based expectation on the amplitude and representational strength of neural activity. We also reduced the relative reliability between expectation and sensory input by blurring facial expressions to further investigate the effects of this relative reliability on the pattern of neural activation and representation. Participants viewed emotional changes in unblurred or blurred facial expressions, which flickered at a rate of 6 Hz within a scene. We found that facial expressions that were congruent with the emotional significance of the scene elicited a larger steady-state visual evoked potential amplitude than did facial expressions that were incongruent with the emotional significance of a scene, in both unblurred and blurred conditions. We also found that expected facial expression representations were stronger than unexpected representations during the unblurred condition. In the blurred condition, unexpected representations were stronger than expected representations. Taken together, these results suggested that facial expression processing in the visual cortex is modulated by top-down signals. The relative reliability of expectation and sensory input moderated the influence of a scene on facial expression representation. Furthermore, our study showed that neural activation amplitudes did not correspond to representational strength.
Affiliation(s)
- Feizhen Cao: Department of Psychology, Ningbo University, Ningbo, China; Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China
- Ke Zeng: Department of Psychology, Sun Yat-Sen University, Guangzhou, China
- Junmeng Zheng: Department of Psychology, Ningbo University, Ningbo, China
- Linwei Yu: Department of Psychology, Ningbo University, Ningbo, China
- Shen Liu: Department of Psychology, School of Humanities and Social Sciences, Anhui Agricultural University, Hefei, China
- Lin Zhang: Department of Psychology, Ningbo University, Ningbo, China
- Qiang Xu: Department of Psychology, Ningbo University, Ningbo, China
4. Kritzman L, Eidelman-Rothman M, Keil A, Freche D, Sheppes G, Levit-Binnun N. Steady-state visual evoked potentials differentiate between internally and externally directed attention. Neuroimage 2022; 254:119133. [PMID: 35339684; DOI: 10.1016/j.neuroimage.2022.119133]
Abstract
While attention to external visual stimuli has been extensively studied, attention directed internally towards mental contents (e.g., thoughts, memories) or bodily signals (e.g., breathing, heartbeat) has only recently become a subject of increased interest, due to its relation to interoception, contemplative practices, and mental health. The present study aimed at expanding the methodological toolbox for studying internal attention by examining, for the first time, whether the steady-state visual evoked potential (ssVEP), a well-established measure of attention, can differentiate between internally and externally directed attention. To this end, we designed a task in which flickering dots were used to generate ssVEPs, and instructed participants to count visual targets (external attention condition) or their heartbeats (internal attention condition). We compared the ssVEP responses between conditions, along with alpha-band activity and the heartbeat evoked potential (HEP), two electrophysiological measures associated with internally directed attention. Consistent with our hypotheses, we found that both the magnitude and the phase synchronization of the ssVEP decreased when attention was directed internally, suggesting that ssVEP measures are able to differentiate between internal and external attention. Additionally, and in line with previous findings, we found larger suppression of parieto-occipital alpha-band activity and an increase of the HEP amplitude in the internal attention condition. Furthermore, we found a trade-off between changes in ssVEP response and changes in HEP and alpha-band activity: when shifting from internal to external attention, an increase in the ssVEP response was related to a decrease in parieto-occipital alpha-band activity and HEP amplitudes. These findings suggest that shifting between externally and internally directed attention prompts a re-allocation of limited processing resources that are shared between external sensory and interoceptive processing.
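Phase synchronization of a frequency-tagged response across trials is conventionally quantified as inter-trial coherence (ITC): the resultant length of the per-trial phase angles at the tagging frequency. The sketch below uses made-up phase values to show the two extremes; it is a minimal illustration of the measure, not the study's analysis pipeline.

```python
import cmath
import math

def itc(phases):
    """Inter-trial coherence: magnitude of the mean unit phase vector.

    1.0 = identical phase on every trial (perfect phase locking);
    values near 0 = phases scattered uniformly around the circle.
    """
    vectors = [cmath.exp(1j * p) for p in phases]
    return abs(sum(vectors) / len(vectors))

locked = [0.3] * 20                                    # perfectly phase-locked trials
scattered = [2 * math.pi * k / 20 for k in range(20)]  # phases spread around the circle

print(itc(locked))     # -> 1.0
print(itc(scattered))  # -> ~0.0
```

A drop in ssVEP ITC during internal attention, as reported above, thus means the stimulus-driven response became less reliably phase-locked from trial to trial, not merely smaller in amplitude.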
Affiliation(s)
- Lior Kritzman: School of Psychological Sciences, Tel Aviv University, Israel; Sagol Center for Brain and Mind, Reichman University, Israel
- Andreas Keil: Center for the Study of Emotion & Attention, University of Florida, USA
- Dominik Freche: Sagol Center for Brain and Mind, Reichman University, Israel; Physics of Complex Systems, Weizmann Institute of Science, Israel
- Gal Sheppes: School of Psychological Sciences, Tel Aviv University, Israel
5. Alp N, Ozkan H. Neural correlates of integration processes during dynamic face perception. Sci Rep 2022; 12:118. [PMID: 34996892; PMCID: PMC8742062; DOI: 10.1038/s41598-021-02808-9]
Abstract
Integrating the spatiotemporal information acquired from the highly dynamic world around us is essential to navigate, reason, and decide properly. Although this is particularly important in a face-to-face conversation, very little research to date has specifically examined the neural correlates of temporal integration in dynamic face perception. Here we present statistically robust observations regarding the brain activations measured via electroencephalography (EEG) that are specific to temporal integration. To that end, we generate videos of neutral faces of individuals and non-face objects, modulate the contrast of the even and odd frames at two specific frequencies (f1 and f2) in an interlaced manner, and measure the steady-state visual evoked potential as participants view the videos. Then, we analyze the intermodulation components (IMs: nf1 ± mf2, linear combinations of the fundamentals with integer multipliers), which consequently reflect nonlinear processing and indicate temporal integration by design. We show that electrodes around the medial temporal, inferior, and medial frontal areas respond strongly and selectively when viewing dynamic faces, which manifests the essential processes underlying our ability to perceive and understand our social world. The generation of IMs is only possible if even and odd frames are processed in succession and integrated temporally; therefore, the strong IMs in our frequency spectrum analysis show that the time between frames (1/60 s) is sufficient for temporal integration.
Affiliation(s)
- Nihan Alp: Psychology, Sabanci University, Istanbul, Turkey
- Huseyin Ozkan: Electronics Engineering, Sabanci University, Istanbul, Turkey
6. Pitchaimuthu K, Dormal G, Sourav S, Shareef I, Rajendran SS, Ossandón JP, Kekunnaya R, Röder B. Steady state evoked potentials indicate changes in nonlinear neural mechanisms of vision in sight recovery individuals. Cortex 2021; 144:15-28. [PMID: 34562698; DOI: 10.1016/j.cortex.2021.08.001]
Abstract
Humans with a transient phase of congenital pattern vision deprivation have been observed to show persistent deficits, particularly in higher-order visual functions. However, the neural correlates of these prevalent visual impairments remain unclear. To probe different visual processing stages, we measured steady-state visual evoked potentials (SSVEPs) generated by luminance flicker stimuli at 6.1 Hz, with superimposed horizontal periodic motion at 2.1 Hz or 2.4 Hz. SSVEP responses at the fundamental and second harmonic of the luminance flicker frequency, and at their intermodulation frequencies with the motion information, were analyzed. Three groups were tested: (1) 15 individuals who had suffered a lack of pattern vision from birth due to bilateral total congenital cataracts (CC group), which were surgically removed between 4 months and 22 years of age; (2) 13 individuals with reversed developmental (i.e., later developing) cataracts (DC group); and (3) normally sighted control participants (SC group; n = 13) matched in age and sex to the CC individuals. SSVEPs at the second harmonic frequency (i.e., 12.2 Hz) and at the intermodulation frequencies (8.2 Hz and 8.5 Hz) were attenuated in the CC group. In contrast, fundamental frequency responses (i.e., at 6.1 Hz) were not significantly altered in the CC group compared to the control groups (SC and DC groups). Based on previous evidence on the role of striate vs. extrastriate generators of the fundamental vs. second harmonics of SSVEPs, these results provide evidence for a stronger experience dependence of extrastriate than striate cortical processing and, furthermore, suggest a sensitive period for the development of putative nonlinear neural mechanisms hypothesized to mediate visual feature binding.
Affiliation(s)
- Kabilan Pitchaimuthu: Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany
- Giulia Dormal: Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany
- Suddha Sourav: Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany
- Idris Shareef: Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany; Child Sight Institute, Jasti V Ramanamma Children's Eye Care Center, L V Prasad Eye Institute, 500 034 Hyderabad, India
- Siddhart S Rajendran: Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany; Child Sight Institute, Jasti V Ramanamma Children's Eye Care Center, L V Prasad Eye Institute, 500 034 Hyderabad, India
- José Pablo Ossandón: Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany
- Ramesh Kekunnaya: Child Sight Institute, Jasti V Ramanamma Children's Eye Care Center, L V Prasad Eye Institute, 500 034 Hyderabad, India
- Brigitte Röder: Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany
7. Mersad K, Caristan C. Blending into the Crowd: Electrophysiological Evidence of Gestalt Perception of a Human Dyad. Neuropsychologia 2021; 160:107967. [PMID: 34303717; DOI: 10.1016/j.neuropsychologia.2021.107967]
Abstract
Human faces and bodies are environmental stimuli of special importance that the brain processes with selective attention and a highly specialized visual system. It has been shown recently that the human brain also has dedicated networks for the perception of pluralities of human bodies in synchronous motion or in face-to-face interaction. Here we show that a plurality of human bodies that are merely in close spatial proximity are automatically integrated into a coherent perceptual unit. We used an EEG frequency-tagging technique allowing the dissociation of the brain activity related to the component parts of an image from the activity related to the global image configuration. We presented participants with images of two silhouettes flickering at different frequencies (5.88 vs. 7.14 Hz). Clear responses at these stimulation frequencies reflected the processing of each part of the dyad. An emerging intermodulation component (7.14 + 5.88 = 13.02 Hz), a nonlinear response regarded as an objective signature of holistic representation, was significantly enhanced in the (typical) upright relative to the (altered) inverted position. Moreover, the inversion effect was significant for the intermodulation component but not for the stimulation frequencies, suggesting a trade-off between the processing of the global dyad configuration and that of the structural properties of the dyad elements. Our results show that, when presented with two humans merely in close proximity, the perceptual visual system will bind them. Hence the perception of the human form might be of a fundamentally different nature when it is part of a plurality.
Affiliation(s)
- Karima Mersad: Laboratoire Vision Action Cognition, Institut de Psychologie, Université de Paris, France
- Céline Caristan: Laboratoire Vision Action Cognition, Institut de Psychologie, Université de Paris, France
8. Almasi RC, Behrmann M. Subcortical regions of the human visual system do not process faces holistically. Brain Cogn 2021; 151:105726. [PMID: 33933856; DOI: 10.1016/j.bandc.2021.105726]
Abstract
Face perception is considered to be evolutionarily adaptive and conserved across species. While subcortical visual brain areas are implicated in face perception based on existing evidence from phylogenetic and ontogenetic studies, whether these subcortical structures contribute to more complex visual computations such as the holistic processing (HP) of faces in humans is unknown. To address this issue, we used a well-established marker of HP, the composite face effect (CFE), with a group of adult human observers, and presented two sequential faces in a trial monocularly or interocularly using a Wheatstone stereoscope. HP refers to the finding that two identical top (or bottom) halves of a face are judged to be different when their task-irrelevant bottom (or top) halves belong to different faces. Because humans process faces holistically, they are unable to ignore the information from the irrelevant half of the composite face, and this is true to an even greater extent when the two halves of the faces are aligned compared with when they are misaligned ('Alignment effect'). The results revealed the HP effect and also uncovered the Alignment effect, a key marker of the CFE. The findings also indicated a monocular advantage, replicating the known subcortical contribution to face perception. There was, however, no statistically significant difference in the CFE when the images were presented in the monocular versus interocular conditions. These findings indicate that HP is not necessarily mediated by the subcortical visual pathway, and suggest that further investigation of cortical, rather than subcortical, structures might advance our understanding of HP and its role in face processing.
Affiliation(s)
- Rebeka C Almasi: Department of Psychology and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Marlene Behrmann: Department of Psychology and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA
9. Cai Y, Mao Y, Ku Y, Chen J. Holistic Integration in the Processing of Chinese Characters as Revealed by Electroencephalography Frequency Tagging. Perception 2021; 49:658-671. [PMID: 32552487; DOI: 10.1177/0301006620929197]
Abstract
It is debated whether perceptual expertise for nonface objects, such as visual words, is indicated by holistic processing, which is regarded as a marker of perceptual expertise for faces. We address this question with frequency-tagged electroencephalography. Different parts of real or pseudo Chinese characters are presented at distinct frequencies (6 or 7.2 Hz), which induce frequency-tagged steady-state visual evoked potentials at occipital brain areas. The intermodulation response (e.g., 6 + 7.2 = 13.2 Hz) emerges when holistic integration takes place. Our results suggest that the intermodulation response to real characters is left-lateralized, opposite to the right lateralization previously reported for faces. Furthermore, at the left occipital area, the intermodulation response to real characters is more prominent than that to pseudo characters, suggesting that holistic integration is enhanced for real characters relative to pseudo ones. Taken together, our findings suggest that holistic integration is potentially a general expertise marker for both faces and non-face objects.
Affiliation(s)
- Yazhi Cai: College of Foreign Languages and Literatures, Fudan University, Shanghai, China
- Yudi Mao: School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Yixuan Ku: Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience and Mental Health, Department of Psychology, Sun Yat-sen University, Guangzhou, China
- Jing Chen: School of Psychology, Shanghai University of Sport, Shanghai, China
10. Vettori S, Van der Donck S, Nys J, Moors P, Van Wesemael T, Steyaert J, Rossion B, Dzhelyova M, Boets B. Combined frequency-tagging EEG and eye-tracking measures provide no support for the "excess mouth/diminished eye attention" hypothesis in autism. Mol Autism 2020; 11:94. [PMID: 33228763; PMCID: PMC7686749; DOI: 10.1186/s13229-020-00396-5]
Abstract
BACKGROUND: Scanning faces is important for social interactions. Difficulty with the social use of eye contact constitutes one of the clinical symptoms of autism spectrum disorder (ASD). It has been suggested that individuals with ASD look less at the eyes and more at the mouth than typically developing (TD) individuals, possibly due to gaze aversion or gaze indifference. However, eye-tracking evidence for this hypothesis is mixed. While gaze patterns convey information about overt orienting processes, it is unclear how this is manifested at the neural level and how relative covert attention to the eyes and mouth of faces might be affected in ASD.
METHODS: We used frequency-tagging EEG in combination with eye tracking while participants watched fast flickering faces for 1-min stimulation sequences. The upper and lower halves of the faces were presented at 6 Hz and 7.5 Hz or vice versa in different stimulation sequences, allowing us to objectively disentangle the neural saliency of the eyes versus the mouth region of a perceived face. We tested 21 boys with ASD (8-12 years old) and 21 TD control boys, matched for age and IQ.
RESULTS: Both groups looked longer at the eyes than the mouth, without any group difference in relative fixation duration to these features. TD boys looked significantly more at the nose, while the ASD boys looked more outside the face. EEG neural saliency data partly followed this pattern: neural responses to the upper or lower face half did not differ between groups, but in the TD group, neural responses to the lower face halves were larger than responses to the upper part. Face exploration dynamics showed that TD individuals mostly maintained fixations within the same facial region, whereas individuals with ASD switched more often between the face parts.
LIMITATIONS: Replication in large and independent samples may be needed to validate these exploratory results.
CONCLUSIONS: Combined eye-tracking and frequency-tagged neural responses show no support for the excess mouth/diminished eye gaze hypothesis in ASD. The more exploratory face scanning style observed in ASD might be related to their increased feature-based face processing style.
Affiliation(s)
- Sofie Vettori: Center for Developmental Psychiatry, Department of Neurosciences, University of Leuven (KU Leuven), Leuven, Belgium; Leuven Autism Research (LAuRes), University of Leuven (KU Leuven), Leuven, Belgium
- Stephanie Van der Donck: Center for Developmental Psychiatry, Department of Neurosciences, University of Leuven (KU Leuven), Leuven, Belgium; Leuven Autism Research (LAuRes), University of Leuven (KU Leuven), Leuven, Belgium
- Jannes Nys: Department of Physics and Astronomy, Ghent University, Ghent, Belgium; IDLab - Department of Computer Science, University of Antwerp - IMEC, Antwerp, Belgium
- Pieter Moors: Laboratory of Experimental Psychology, University of Leuven (KU Leuven), Leuven, Belgium
- Tim Van Wesemael: Department of Electrical Engineering (ESAT), Stadius Center for Dynamical Systems, Signal Processing and Data Analytics, Leuven, Belgium
- Jean Steyaert: Center for Developmental Psychiatry, Department of Neurosciences, University of Leuven (KU Leuven), Leuven, Belgium; Leuven Autism Research (LAuRes), University of Leuven (KU Leuven), Leuven, Belgium
- Bruno Rossion: Institute of Research in Psychological Science, Institute of Neuroscience, University of Louvain, Louvain-La-Neuve, Belgium; CNRS, CRAN - UMR 7039, Université de Lorraine, 54000 Nancy, France; CHRU-Nancy, Service de Neurologie, Université de Lorraine, 54000 Nancy, France
- Milena Dzhelyova: Leuven Autism Research (LAuRes), University of Leuven (KU Leuven), Leuven, Belgium; Institute of Research in Psychological Science, Institute of Neuroscience, University of Louvain, Louvain-La-Neuve, Belgium
- Bart Boets: Center for Developmental Psychiatry, Department of Neurosciences, University of Leuven (KU Leuven), Leuven, Belgium; Leuven Autism Research (LAuRes), University of Leuven (KU Leuven), Leuven, Belgium
11. Wang L, Han D, Qian B, Zhang Z, Zhang Z, Liu Z. The Validity of Steady-State Visual Evoked Potentials as Attention Tags and Input Signals: A Critical Perspective of Frequency Allocation and Number of Stimuli. Brain Sci 2020; 10:616. [PMID: 32906625; PMCID: PMC7563221; DOI: 10.3390/brainsci10090616]
Abstract
Steady-state visual evoked potential (SSVEP) is a periodic response to a repetitive visual stimulus at a specific frequency. Currently, SSVEP is widely treated as an attention tag in cognitive activities and is used as an input signal for brain-computer interfaces (BCIs). However, whether SSVEP can be used as a reliable indicator has been controversial. We focused on the independence of SSVEP from frequency allocation and the number of stimuli. First, a cue-target paradigm was adopted to examine the interaction between SSVEPs evoked by two stimuli with different frequency allocations under different attention conditions. Second, we explored whether signal strength and the performance of SSVEP-based BCIs were affected by the number of stimuli. The results revealed no significant interaction of SSVEP responses between attended and unattended stimuli under various frequency allocations, regardless of whether they appeared at the fundamental or the second harmonic. The amplitude of SSVEP suffered no significant gain or loss under different numbers of stimuli, but the performance of SSVEP-based BCIs varied with the duration of stimuli; that is, the recognition rate was not affected by the number of stimuli when the duration of stimuli was long enough, while the information transfer rate (ITR) showed the opposite trend. It can be concluded that SSVEP is a reliable tool for marking and monitoring multiple stimuli simultaneously in cognitive studies, but much caution should be taken when choosing a suitable duration and number of stimuli in order to achieve optimal utility of BCIs in the future.
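The information transfer rate mentioned above has a standard closed form (Wolpaw's formula) given the number of targets N, the classification accuracy P, and the time per selection T, which makes the recognition-rate/ITR trade-off easy to see: longer stimulation can raise accuracy while still lowering bits per minute. The sketch below is a generic implementation of that formula; the target counts, accuracies, and selection times are illustrative, not values from the study.

```python
import math

def itr_bits_per_min(n_targets, accuracy, seconds_per_selection):
    """Wolpaw information transfer rate for an N-class BCI, in bits/min.

    Assumes 0 < accuracy <= 1 and uniformly distributed errors over the
    remaining N-1 targets.
    """
    n, p = n_targets, accuracy
    bits = math.log2(n)
    if p < 1:  # the entropy terms vanish as p -> 1
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60 / seconds_per_selection

# A perfectly accurate 4-target system selecting every 2 s transfers
# log2(4) * 30 = 60 bits/min; dropping accuracy to 90% cuts this sharply,
# and lengthening each selection lowers ITR even if accuracy improves.
print(itr_bits_per_min(4, 1.0, 2.0))  # -> 60.0
print(itr_bits_per_min(4, 0.9, 2.0))
print(itr_bits_per_min(4, 1.0, 4.0))  # -> 30.0
```

This is why the abstract can report a stable recognition rate alongside a falling ITR: the accuracy term saturates while the per-selection time keeps growing.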
Affiliation(s)
- Lu Wang: Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, China
- Dan Han: Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, China
- Binbin Qian: Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, China
- Zhenhao Zhang: Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, China
- Zhijun Zhang: Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, China (corresponding author; Tel.: +86-571-88273337)
- Zhifang Liu: Department of Psychology and Special Education, Hangzhou Normal University, Hangzhou 311121, China
12
|
Li L, Ito S, Yotsumoto Y. Effect of change saliency and neural entrainment on flicker-induced time dilation. J Vis 2020; 20:15. [PMID: 32574359] [PMCID: PMC7416891] [DOI: 10.1167/jov.20.6.15]
Abstract
When a visual stimulus flickers periodically and rhythmically, its perceived duration tends to exceed its physical duration in the peri-second range. Although flicker-induced time dilation is a robust time illusion, its underlying neural mechanisms remain unclear. The neural entrainment account proposes that neural entrainment to the exogenous visual stimulus, marked by steady-state visual evoked potentials (SSVEPs) over the visual cortex, is the cause of time dilation. By contrast, the saliency account argues that the conscious perception of flicker changes is indispensable. In the current study, we examined these two accounts separately. The first two experiments manipulated the level of saliency around the critical fusion threshold (CFF) in a duration discrimination task to probe the effect of change saliency. The amount of dilation correlated with the level of change saliency. The next two experiments investigated whether neural entrainment alone could also induce perceived dilation. To preclude change saliency, we utilized a combination of two high-frequency flickers above the CFF, whose beat frequency could still, in theory, induce neural entrainment at a low frequency. Results revealed a moderate time dilation induced by the combined high-frequency flickers. Although behavioral results suggested engagement of neural entrainment, electroencephalography showed neither larger power nor inter-trial coherence (ITC) at the beat frequency. In summary, change saliency was the most critical factor determining the presence and strength of time dilation, whereas neural entrainment had a moderate influence. These results highlight the influence of higher-level visual processing on time perception.
13
Radtke EL, Schöne B, Martens U, Gruber T. Electrophysiological correlates of gist perception: a steady-state visually evoked potentials study. Exp Brain Res 2020; 238:1399-1410. [PMID: 32363553] [PMCID: PMC7286871] [DOI: 10.1007/s00221-020-05819-6]
Abstract
Gist perception refers to perceiving the substance or general meaning of a scene. To investigate its neuronal mechanisms, we used the steady-state visually evoked potential (SSVEP) method, in which a visual stimulus flickered at a given frequency evokes an oscillatory cortical response at that same frequency. Two neighboring stimuli were flickered at different frequencies f1 and f2, for example, a drawing of a sun on the left side of the screen flickering at 8.6 Hz and a drawing of a parasol on the right side flickering at 12 Hz. SSVEPs enabled us to separate the responses to the two distinct stimuli by extracting oscillatory brain responses at f1 and f2. Additionally, this approach allowed us to investigate intermodulation frequencies, that is, the brain's response at a linear combination of f1 and f2 (here at f1 + f2 = 20.6 Hz), as an indicator of processing shared aspects of the input, that is, gist perception (here: a beach scene). We recorded high-density EEG from 18 participants. Results revealed clear and separable neuronal oscillations at f1 and f2. Additionally, occipital electrodes showed increased amplitudes at the intermodulation frequency for related as compared to unrelated pairs. The increase at the intermodulation frequency was associated with bilateral temporal and parietal lobe activation, probably reflecting the interaction of local object representations as a basis for activating the gist network. The study demonstrates that SSVEPs are an excellent method for unraveling the mechanisms underlying processing within multi-stimulus displays in the context of gist perception.
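The logic behind the intermodulation analysis can be reproduced numerically: a purely linear superposition of two tagged signals contains no energy at f1 + f2, whereas any multiplicative (nonlinear) interaction does. The sketch below uses the study's tagging frequencies (8.6 and 12 Hz) but an assumed interaction gain and signal model, purely for illustration:

```python
import numpy as np

fs = 600.0          # sampling rate (Hz); chosen so tag frequencies fall on exact FFT bins
dur = 10.0          # signal duration (s) -> frequency resolution of 0.1 Hz
t = np.arange(0, dur, 1 / fs)
f1, f2 = 8.6, 12.0  # tagging frequencies from the study

s1 = np.sin(2 * np.pi * f1 * t)
s2 = np.sin(2 * np.pi * f2 * t)

# A linear response carries power only at f1 and f2; a multiplicative
# interaction adds intermodulation terms at f2 - f1 and f1 + f2.
linear = s1 + s2
nonlinear = s1 + s2 + 0.5 * s1 * s2  # 0.5 is an arbitrary illustrative gain

freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp(x, f):
    # Single-bin amplitude estimate: |FFT| / N gives half the sinusoid amplitude.
    spec = np.abs(np.fft.rfft(x)) / x.size
    return spec[np.argmin(np.abs(freqs - f))]

im = f1 + f2  # 20.6 Hz, the intermodulation frequency analyzed in the study
print(round(amp(linear, im), 3), round(amp(nonlinear, im), 3))  # → 0.0 0.125
```

The product term expands to 0.25·cos(2π(f2 − f1)t) − 0.25·cos(2π(f1 + f2)t), so the 20.6 Hz component appears only when the interaction is present.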
Affiliation(s)
- Elise L Radtke
- Institute of Psychology, Osnabrück University, Seminarstraße 20, 49074, Osnabrück, Germany
- Benjamin Schöne
- Institute of Psychology, Osnabrück University, Seminarstraße 20, 49074, Osnabrück, Germany
- Ulla Martens
- DRK-Norddeutsches Epilepsiezentrum für Kinder und Jugendliche, Henry-Dunant-Str. 6-10, 24223, Schwentinental, Germany
- Thomas Gruber
- Institute of Psychology, Osnabrück University, Seminarstraße 20, 49074, Osnabrück, Germany
14
Vettori S, Dzhelyova M, Van der Donck S, Jacques C, Steyaert J, Rossion B, Boets B. Frequency-Tagging Electroencephalography of Superimposed Social and Non-Social Visual Stimulation Streams Reveals Reduced Saliency of Faces in Autism Spectrum Disorder. Front Psychiatry 2020; 11:332. [PMID: 32411029] [PMCID: PMC7199527] [DOI: 10.3389/fpsyt.2020.00332]
Abstract
Individuals with autism spectrum disorder (ASD) have difficulties with social communication and interaction. The social motivation hypothesis states that a reduced interest in social stimuli may partly underlie these difficulties. Thus far, however, it has been challenging to quantify individual differences in social orientation and interest, and to pinpoint their neural underpinnings. In this study, we tested the neural sensitivity for social versus non-social information in 21 boys with ASD (8-12 years old) and 21 typically developing (TD) control boys, matched for age and IQ, while the children were engaged in an orthogonal task. We recorded electroencephalography (EEG) during fast periodic visual stimulation (FPVS) of social versus non-social stimuli to obtain an objective, implicit neural measure of relative social bias. Streams of variable images of faces and houses were superimposed, and each stream of stimuli was tagged with a particular presentation rate (i.e., 6 and 7.5 Hz, or vice versa). This frequency-tagging method allows the respective neural responses evoked by the different streams of stimuli to be disentangled. Moreover, by using superimposed stimuli, we controlled for possible effects of preferential looking, spatial attention, and disengagement. Based on four trials of 60 s, we observed a significant three-way interaction. In the control group, the frequency-tagged neural responses to faces were larger than those to houses, especially over lateral occipito-temporal channels, while the responses to houses were larger over medial occipital channels. In the ASD group, however, faces and houses did not elicit significantly different neural responses in any of the regions.
Given the short recording time of the frequency-tagging paradigm with multiple simultaneous inputs and the robustness of the individual responses, the method could be used as a sensitive marker of social preference in a wide range of populations, including younger and challenging populations.
Affiliation(s)
- Sofie Vettori
- Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Leuven Autism Research (LAuRes), KU Leuven, Leuven, Belgium
- Milena Dzhelyova
- Leuven Autism Research (LAuRes), KU Leuven, Leuven, Belgium
- Institute of Research in Psychological Science, Institute of Neuroscience, University of Louvain, Louvain-La-Neuve, Belgium
- Stephanie Van der Donck
- Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Leuven Autism Research (LAuRes), KU Leuven, Leuven, Belgium
- Corentin Jacques
- Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Institute of Research in Psychological Science, Institute of Neuroscience, University of Louvain, Louvain-La-Neuve, Belgium
- Jean Steyaert
- Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Leuven Autism Research (LAuRes), KU Leuven, Leuven, Belgium
- Bruno Rossion
- Institute of Research in Psychological Science, Institute of Neuroscience, University of Louvain, Louvain-La-Neuve, Belgium
- Université de Lorraine, CNRS, CRAN-UMR 7039, Nancy, France
- Université de Lorraine, CHRU-Service de Neurologie, Nancy, France
- Bart Boets
- Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Leuven Autism Research (LAuRes), KU Leuven, Leuven, Belgium
15
Behrmann M, Plaut DC. Hemispheric Organization for Visual Object Recognition: A Theoretical Account and Empirical Evidence. Perception 2020; 49:373-404. [PMID: 31980013] [PMCID: PMC9944149] [DOI: 10.1177/0301006619899049]
Abstract
Despite their structural similarity, the two hemispheres of the human brain have somewhat different functions. A traditional view of hemispheric organization asserts that there are independent and largely lateralized domain-specific regions in ventral occipitotemporal cortex (VOTC), specialized for the recognition of distinct classes of objects. Here, we offer an alternative account of the organization of the hemispheres, with a specific focus on face and word recognition. This alternative account relies on three computational principles: distributed representations and knowledge, cooperation and competition between representations, and topography and proximity. The crux is that visual recognition results from a network of regions with graded functional specialization that is distributed across both hemispheres. Specifically, the claim is that face recognition, which is acquired relatively early in life, is processed by VOTC regions in both hemispheres. Once literacy is acquired, word recognition, which is co-lateralized with language areas, primarily engages the left VOTC and, consequently, face recognition is primarily, albeit not exclusively, mediated by the right VOTC. We review psychological and neural evidence from a range of studies conducted with normal and brain-damaged adults and children and consider findings that challenge this account. Last, we offer suggestions for future investigations whose findings may further refine this account.
Affiliation(s)
- Marlene Behrmann
- Department of Psychology and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- David C. Plaut
- Department of Psychology and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
16
Varlet M, Nozaradan S, Nijhuis P, Keller PE. Neural tracking and integration of ‘self’ and ‘other’ in improvised interpersonal coordination. Neuroimage 2020; 206:116303. [DOI: 10.1016/j.neuroimage.2019.116303]
17
Vettori S, Dzhelyova M, Van der Donck S, Jacques C, Van Wesemael T, Steyaert J, Rossion B, Boets B. Combined frequency-tagging EEG and eye tracking reveal reduced social bias in boys with autism spectrum disorder. Cortex 2019; 125:135-148. [PMID: 31982699] [DOI: 10.1016/j.cortex.2019.12.013]
Abstract
Developmental accounts of autism spectrum disorder (ASD) state that infants and children with ASD are spontaneously less attracted by, and less proficient in processing, social stimuli such as faces. This is hypothesized to partly underlie social communication difficulties in ASD. While some studies have shown a reduced preference for social stimuli in individuals with ASD, effect sizes are moderate and vary across studies, stimuli, and designs. Eye tracking, often the methodology of choice for studying social preference, conveys information about overt orienting processes but not about covert attention, possibly resulting in an underestimation of the effects. In this study, we recorded eye tracking and electroencephalography (EEG) during fast periodic visual stimulation to address this issue. We tested 21 boys with ASD (8-12 years old) and 21 typically developing (TD) control boys, matched for age and IQ. Streams of variable images of faces were presented at 6 Hz alongside images of houses presented at 7.5 Hz, or vice versa, while the children were engaged in an orthogonal task. While frequency-tagged neural responses were larger in response to faces than to simultaneously presented houses in both groups, this effect was much larger in TD boys than in boys with ASD. This group difference in the saliency of social versus non-social processing was significant after 5 sec of stimulus presentation and held throughout the entire trial. Although there was no interaction between group and stimulus category for the simultaneously recorded eye-tracking data, eye-tracking and EEG measures were strongly correlated. We conclude that frequency-tagging EEG, which allows monitoring of both overt and covert processes, provides a fast, objective, and reliable measure of decreased preference for social information in ASD.
Affiliation(s)
- Sofie Vettori
- Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, Belgium; Leuven Autism Research (LAuRes), KU Leuven, Leuven, Belgium
- Milena Dzhelyova
- Leuven Autism Research (LAuRes), KU Leuven, Leuven, Belgium; Institute of Research in Psychological Science, Institute of Neuroscience, University of Louvain, Belgium
- Stephanie Van der Donck
- Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, Belgium; Leuven Autism Research (LAuRes), KU Leuven, Leuven, Belgium
- Corentin Jacques
- Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, Belgium; Institute of Research in Psychological Science, Institute of Neuroscience, University of Louvain, Belgium
- Tim Van Wesemael
- Department of Electrical Engineering (ESAT), Stadius Center for Dynamical Systems, Signal Processing and Data Analytics, Leuven, Belgium
- Jean Steyaert
- Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, Belgium; Leuven Autism Research (LAuRes), KU Leuven, Leuven, Belgium
- Bruno Rossion
- Institute of Research in Psychological Science, Institute of Neuroscience, University of Louvain, Belgium; Université de Lorraine, CNRS, CRAN - UMR 7039, F-54000, Nancy, France; Université de Lorraine, CHRU-Nancy, Service de Neurologie, F-54000, France
- Bart Boets
- Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, Belgium; Leuven Autism Research (LAuRes), KU Leuven, Leuven, Belgium
18
Does right hemisphere superiority sufficiently explain the left visual field advantage in face recognition? Atten Percept Psychophys 2019; 82:1205-1220. [DOI: 10.3758/s13414-019-01896-0]
19
Montani V, Chanoine V, Grainger J, Ziegler JC. Frequency-tagged visual evoked responses track syllable effects in visual word recognition. Cortex 2019; 121:60-77. [PMID: 31550616] [DOI: 10.1016/j.cortex.2019.08.014]
Abstract
The processing of syllables in visual word recognition was investigated using a novel paradigm based on steady-state visual evoked potentials (SSVEPs). French words were presented to proficient readers in a delayed naming task. Words were split into two segments, the first of which was flickered at 18.75 Hz and the second at 25 Hz. The first segment either matched (congruent condition) or did not match (incongruent condition) the first syllable. The SSVEP responses in the congruent condition showed increased power compared to the responses in the incongruent condition, providing new evidence that syllables are important sublexical units in visual word recognition and reading aloud. With respect to the neural correlates of the effect, syllables elicited an early activation of a right hemisphere network. This network is typically associated with the programming of complex motor sequences, cognitive control and timing. Subsequently, responses were obtained in left hemisphere areas related to phonological processing.
Affiliation(s)
- Veronica Montani
- Aix-Marseille University and CNRS, Brain and Language Research Institute, Marseille Cedex 3, France
- Valérie Chanoine
- Aix-Marseille University, Institute of Language, Communication and the Brain, Brain and Language Research Institute, Aix-en-Provence, France
20
Gordon N, Hohwy J, Davidson MJ, van Boxtel JJA, Tsuchiya N. From intermodulation components to visual perception and cognition-a review. Neuroimage 2019; 199:480-494. [PMID: 31173903] [DOI: 10.1016/j.neuroimage.2019.06.008]
Abstract
Perception results from complex interactions among sensory and cognitive processes across hierarchical levels in the brain. Intermodulation (IM) components, used in frequency-tagging neuroimaging designs, have emerged as a promising direct measure of such neural interactions. IMs were initially used in electroencephalography (EEG) to investigate low-level visual processing. In a more recent trend, IMs in EEG and other neuroimaging methods are being used to shed light on the mechanisms of mid- and high-level perceptual processes, including the involvement of cognitive functions such as attention and expectation. Here, we provide an account of the various mechanisms that may give rise to IMs in neuroimaging data, and of what these IMs may look like. We discuss methodologies that can be implemented for different uses of IMs, and we demonstrate how IMs can provide insights into the existence, the degree, and the type of neural integration mechanisms at hand. We then review a range of recent studies exploiting IMs in visual perception research, placing an emphasis on high-level vision and the influence of awareness and cognition on visual processing. We conclude by suggesting future directions that can enhance the benefits of IM methodology in perception research.
Affiliation(s)
- Noam Gordon
- Cognition and Philosophy Lab, Philosophy Department, Monash University, Clayton VIC, 3800, Australia
- Jakob Hohwy
- Cognition and Philosophy Lab, Philosophy Department, Monash University, Clayton VIC, 3800, Australia
- Matthew James Davidson
- Monash Institute of Cognitive and Clinical Neurosciences, Monash University, Clayton VIC, 3800, Australia; School of Psychological Sciences, Monash University, Clayton VIC, 3800, Australia
- Jeroen J A van Boxtel
- Monash Institute of Cognitive and Clinical Neurosciences, Monash University, Clayton VIC, 3800, Australia; School of Psychological Sciences, Monash University, Clayton VIC, 3800, Australia; School of Psychology, Faculty of Health, University of Canberra, Canberra, Australia
- Naotsugu Tsuchiya
- Monash Institute of Cognitive and Clinical Neurosciences, Monash University, Clayton VIC, 3800, Australia; School of Psychological Sciences, Monash University, Clayton VIC, 3800, Australia; ATR Computational Neuroscience Laboratories, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0288, Japan; Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT), Suita, Osaka 565-0871, Japan
21
de Vries E, Baldauf D. Attentional Weighting in the Face Processing Network: A Magnetic Response Image-guided Magnetoencephalography Study Using Multiple Cyclic Entrainments. J Cogn Neurosci 2019; 31:1573-1588. [PMID: 31112470] [DOI: 10.1162/jocn_a_01428]
Abstract
We recorded magnetoencephalography using a neural entrainment paradigm with compound face stimuli that allowed for entraining the processing of various parts of a face (eyes, mouth) as well as changes in facial identity. Our magnetic resonance image-guided magnetoencephalography analyses revealed that different subnodes of the human face processing network were entrained differentially according to their functional specialization. Whereas the occipital face area was most responsive to the rate at which face parts (e.g., the mouth) changed, and face patches in the STS were mostly entrained by rhythmic changes in the eye region, the fusiform face area was the only subregion that was strongly entrained by the rhythmic changes in facial identity. Furthermore, top-down attention to the mouth, eyes, or identity of the face selectively modulated the neural processing in the respective area (i.e., occipital face area, STS, or fusiform face area), resembling the behavioral cue validity effects observed in the participants' RT and detection rate data. Our results show the attentional weighting of the visual processing of different aspects and dimensions of a single face object, at various stages of the involved visual processing hierarchy.
22
Wittenhagen L, Mattingley JB. Steady-state visual evoked potentials reveal enhanced neural responses to illusory surfaces during a concurrent visual attention task. Cortex 2019; 117:217-227. [PMID: 30999213] [DOI: 10.1016/j.cortex.2019.03.014]
Abstract
Under natural viewing conditions, visual stimuli are often obscured by occluding surfaces. To aid object recognition, the visual system actively reconstructs the missing information, as exemplified in the classic Kanizsa illusion, a phenomenon termed "modal completion". Single-cell recordings in monkeys have shown that neurons in early visual cortex respond to illusory contours, but it has proven difficult to measure the neural correlates of modal completion in humans. We used electroencephalography (EEG) to measure steady-state visual-evoked potentials (SSVEPs) from disks with quarter segments removed to induce an illusory shape (or rotated to eliminate the illusory square in control trials). Opposing pairs of inducers were tagged with one of two flicker frequencies (2.5 or 4 Hz). During stimulus presentations, participants performed an attention task at fixation that required them to judge the orientation of a briefly flashed central bar while ignoring congruent (same orientation) or incongruent (different orientation) flanker bars that appeared on or off the illusory surface. Importantly, the occurrence of any illusory shape was never task relevant. Frequency-based analyses revealed that SSVEP amplitudes were reliably enhanced for trials in which an illusory square appeared, relative to control trials, at 4, 5 and 8 Hz and at an intermodulation frequency of 13 Hz. Participants' reaction times in the flanker task were significantly slower for incongruent versus congruent trials, and this distractor interference effect occurred only in the presence of an illusory surface and not in the control condition. Our results reveal a robust neural correlate of modal completion in the human visual system and provide evidence that visual completion can affect attentional control processes as deployed in a flanker task.
Affiliation(s)
- Lisa Wittenhagen
- The University of Queensland, Queensland Brain Institute, St Lucia, QLD, Australia
- Jason B Mattingley
- The University of Queensland, Queensland Brain Institute, St Lucia, QLD, Australia; The University of Queensland, School of Psychology, St Lucia, QLD, Australia; Canadian Institute for Advanced Research (CIFAR), Toronto, Canada
23
How the visual brain detects emotional changes in facial expressions: Evidence from driven and intrinsic brain oscillations. Cortex 2018; 111:35-50. [PMID: 30447483] [DOI: 10.1016/j.cortex.2018.10.006]
Abstract
The processing of facial expressions is often studied using static pictorial cues. Recent work, however, suggests that viewing changing expressions more robustly evokes physiological responses. Here, we examined the sensitivity of steady-state visual evoked potentials and intrinsic oscillatory brain activity to transient emotional changes in facial expressions. Twenty-two participants viewed sequences of grayscale faces periodically turned on and off at a rate of 17.5 Hz, to evoke flicker steady-state visual evoked potentials (ssVEPs) in visual cortex. Each sequence began with a neutral face (flickering for 2290 msec), immediately followed by a face from the same actor (also flickering for 2290 msec) with one of four expressions (happy, angry, fearful, or another neutral expression), followed by the initially presented neutral face (flickering for 1140 msec). The amplitude of the ssVEP and the power of intrinsic brain oscillations were analyzed, comparing the four expression-change conditions. We found a transient perturbation (reduction) of the ssVEP that was more pronounced after the neutral-to-angry change compared to the other conditions, at right posterior sensors. Induced alpha-band (8-13 Hz) power was reduced compared to baseline after each change. This reduction showed a central-occipital topography and was strongest in the subtlest and rarest neutral-to-neutral condition. Thus, the ssVEP indexed involvement of face-sensitive cortical areas in decoding affective expressions, whereas mid-occipital alpha power reduction reflected condition frequency rather than expression-specific processing, consistent with the role of alpha power changes in selective attention.
24
Measuring Integration Processes in Visual Symmetry with Frequency-Tagged EEG. Sci Rep 2018; 8:6969. [PMID: 29725022] [PMCID: PMC5934372] [DOI: 10.1038/s41598-018-24513-w]
Abstract
Symmetry is a highly salient feature of the natural world that requires integration of visual features over space. The aim of the current work was to isolate dynamic neural correlates of symmetry-specific integration processes. We measured steady-state visual evoked potentials (SSVEPs) as participants viewed symmetric patterns comprised of distinct spatial regions presented at two different frequencies (f1 and f2). We measured intermodulation components, shown to reflect non-linear processing at the neural level, indicating integration of spatially separated parts of the pattern. We generated a wallpaper pattern containing two reflection symmetry axes by tiling the plane with a two-fold reflection-symmetric unit pattern, and we split each unit pattern diagonally into separate parts that could be presented at different frequencies. We compared SSVEPs for wallpaper and control patterns that were matched for translation and rotation symmetry, such that reflection symmetry could emerge only for the wallpaper pattern, through integration of the image pairs. We found that low-frequency intermodulation components differed between the wallpaper and control stimuli, indicating the presence of integration mechanisms specific to reflection symmetry. These results show that spatial integration specific to symmetry perception can be isolated through a combination of stimulus design and the frequency-tagging approach.
25
Attention to Multiple Objects Facilitates Their Integration in Prefrontal and Parietal Cortex. J Neurosci 2017; 37:4942-4953. [PMID: 28411268] [DOI: 10.1523/jneurosci.2370-16.2017]
Abstract
Selective attention is known to interact with perceptual organization. In visual scenes, individual objects that are distinct and discriminable may occur on their own, or in groups such as a stack of books. The main objective of this study is to probe the neural interaction that occurs between individual objects when attention is directed toward one or more objects. Here we record steady-state visual evoked potentials via electrocorticography to directly assess the responses to individual stimuli and to their interaction. When human participants attend to two adjacent stimuli, prefrontal and parietal cortex shows a selective enhancement of only the neural interaction between stimuli, but not the responses to individual stimuli. When only one stimulus is attended, the neural response to that stimulus is selectively enhanced in prefrontal and parietal cortex. In contrast, early visual areas generally manifest responses to individual stimuli and to their interaction regardless of attentional task, although a subset of the responses is modulated similarly to prefrontal and parietal cortex. Thus, the neural representation of the visual scene as one progresses up the cortical hierarchy becomes more highly task-specific and represents either individual stimuli or their interaction, depending on the behavioral goal. Attention to multiple objects facilitates an integration of objects akin to perceptual grouping.
SIGNIFICANCE STATEMENT: Individual objects in a visual scene are seen as distinct entities or as parts of a whole. Here we examine how attention to multiple objects affects their neural representation. Previous studies measured single-cell or fMRI responses and obtained only aggregate measures that combined the activity to individual stimuli as well as their potential interaction. Here, we directly measure electrocorticographic steady-state responses corresponding to individual objects and to their interaction using a frequency-tagging technique.
Attention to two stimuli increases the interaction component that is a hallmark for perceptual integration of stimuli. Furthermore, this stimulus-specific interaction is represented in prefrontal and parietal cortex in a task-dependent manner.
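The frequency-tagging logic behind studies like this one can be sketched in a few lines: two stimuli flickering at f1 and f2 drive spectral peaks at those frequencies, and any nonlinear neural interaction between them produces intermodulation (IM) peaks at sums and differences such as f1+f2 and f2-f1. A minimal NumPy illustration of reading those components out of a single-channel recording (the function name, frequencies, and bin-picking strategy are our own illustrative choices, not taken from the paper):

```python
import numpy as np

def tagged_amplitudes(eeg, fs, f1, f2):
    """Spectral amplitudes at two tag frequencies and their
    lowest-order intermodulation (IM) terms.

    eeg    : 1-D array, single-channel recording
    fs     : sampling rate in Hz
    f1, f2 : flicker frequencies of the two tagged stimuli
    """
    n = len(eeg)
    # Single-sided amplitude spectrum (a sine of amplitude A -> peak of A)
    spectrum = np.abs(np.fft.rfft(eeg)) / n * 2
    freqs = np.fft.rfftfreq(n, d=1 / fs)

    def amp_at(f):
        # Amplitude in the FFT bin nearest to frequency f
        return spectrum[np.argmin(np.abs(freqs - f))]

    return {
        "f1": amp_at(f1),               # response driven by stimulus 1
        "f2": amp_at(f2),               # response driven by stimulus 2
        "f1+f2": amp_at(f1 + f2),       # IM term: marker of interaction
        "f2-f1": amp_at(abs(f2 - f1)),  # difference IM term
    }
```

With a trial long enough that the tag frequencies fall on exact FFT bins, the tagged and IM amplitudes can be read out directly; in practice one would average over trials and compare IM power across attention conditions.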
26
Wieser MJ, Miskovic V, Keil A. Steady-state visual evoked potentials as a research tool in social affective neuroscience. Psychophysiology 2016; 53:1763-1775. [PMID: 27699794 DOI: 10.1111/psyp.12768] [Citation(s) in RCA: 58] [Impact Index Per Article: 7.3]
Abstract
Like many other primates, humans place a high premium on social information transmission and processing. One important aspect of this information concerns the emotional state of other individuals, conveyed by distinct visual cues such as facial expressions, overt actions, or by cues extracted from the situational context. A rich body of theoretical and empirical work has demonstrated that these socioemotional cues are processed by the human visual system in a prioritized fashion, in the service of optimizing social behavior. Furthermore, socioemotional perception is highly dependent on situational contexts and previous experience. Here, we review current issues in this area of research and discuss the utility of the steady-state visual evoked potential (ssVEP) technique for addressing key empirical questions. Methodological advantages and caveats are discussed with particular regard to quantifying time-varying competition among multiple perceptual objects, trial-by-trial analysis of visual cortical activation, functional connectivity, and the control of low-level stimulus features. Studies on facial expression and emotional scene processing are summarized, with an emphasis on viewing faces and other social cues in emotional contexts, or when competing with each other. Further, because the ssVEP technique can be readily accommodated to studying the viewing of complex scenes with multiple elements, it enables researchers to advance theoretical models of socioemotional perception, based on complex, quasinaturalistic viewing situations.
Affiliation(s)
- Matthias J Wieser
- Institute of Psychology, Erasmus University Rotterdam, Rotterdam, Netherlands
- Department of Psychology, University of Würzburg, Würzburg, Germany
- Vladimir Miskovic
- Department of Psychology, State University of New York at Binghamton, Binghamton, New York, USA
- Andreas Keil
- Department of Psychology, University of Florida, Gainesville, Florida, USA
27
Koenig-Robert R, VanRullen R, Tsuchiya N. Semantic Wavelet-Induced Frequency-Tagging (SWIFT) Periodically Activates Category Selective Areas While Steadily Activating Early Visual Areas. PLoS One 2015; 10:e0144858. [PMID: 26691722 PMCID: PMC4686956 DOI: 10.1371/journal.pone.0144858] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] Open
Abstract
Primate visual systems process natural images in a hierarchical manner: at the early stage, neurons are tuned to local image features, while neurons in high-level areas are tuned to abstract object categories. Standard models of visual processing assume that the transition of tuning from image features to object categories emerges gradually along the visual hierarchy. Direct tests of such models remain difficult due to confounding alterations in low-level image properties when contrasting distinct object categories. When such a contrast is performed in a classic functional localizer method, the desired activation in high-level visual areas is typically accompanied by activation in early visual areas. Here we used a novel image-modulation method called SWIFT (semantic wavelet-induced frequency-tagging), a variant of frequency-tagging techniques. Natural images modulated by SWIFT reveal object semantics periodically while keeping low-level properties constant. Using functional magnetic resonance imaging (fMRI), we indeed found that faces and scenes modulated with SWIFT periodically activated the prototypical category-selective areas while eliciting sustained and constant responses in early visual areas. SWIFT and the localizer were selective and specific to a similar extent in activating category-selective areas. Only SWIFT progressively activated the visual pathway from low- to high-level areas, consistent with predictions from standard hierarchical models. We confirmed these results with criterion-free methods, generalizing the validity of our approach, and showed that it is possible to dissociate neural activation in early and category-selective areas. Our results provide direct evidence for the hierarchical nature of the representation of visual objects along the visual stream and open up future applications of frequency-tagging methods in fMRI.
Affiliation(s)
- Roger Koenig-Robert
- School of Psychological Sciences, Faculty of Biomedical and Psychological Sciences, Monash University, Melbourne, Australia
- Rufin VanRullen
- CNRS, UMR5549, Centre de Recherche Cerveau et Cognition, Faculté de Médecine de Purpan, 31052 Toulouse, France
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Université Paul Sabatier, 31052 Toulouse, France
- Naotsugu Tsuchiya
- School of Psychological Sciences, Faculty of Biomedical and Psychological Sciences, Monash University, Melbourne, Australia
- Decoding and Controlling Brain Information, Japan Science and Technology Agency, Chiyoda-ku, Tokyo, Japan, 102–8266
28
Serino A, Sforza AL, Kanayama N, van Elk M, Kaliuzhna M, Herbelin B, Blanke O. Tuning of temporo-occipital activity by frontal oscillations during virtual mirror exposure causes erroneous self-recognition. Eur J Neurosci 2015. [DOI: 10.1111/ejn.13029] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.2]
Affiliation(s)
- Andrea Serino
- Center for Neuroprosthetics; École Polytechnique Fédérale de Lausanne; Lausanne Switzerland
- Laboratory of Cognitive Neuroscience; Brain Mind Institute; School of Life Sciences; École Polytechnique Fédérale de Lausanne; Station 19, SV 2805 Lausanne 1015 Switzerland
- Anna Laura Sforza
- Laboratory of Cognitive Neuroscience; Brain Mind Institute; School of Life Sciences; École Polytechnique Fédérale de Lausanne; Station 19, SV 2805 Lausanne 1015 Switzerland
- Noriaki Kanayama
- Center for Neuroprosthetics; École Polytechnique Fédérale de Lausanne; Lausanne Switzerland
- Laboratory of Cognitive Neuroscience; Brain Mind Institute; School of Life Sciences; École Polytechnique Fédérale de Lausanne; Station 19, SV 2805 Lausanne 1015 Switzerland
- Michiel van Elk
- Center for Neuroprosthetics; École Polytechnique Fédérale de Lausanne; Lausanne Switzerland
- Laboratory of Cognitive Neuroscience; Brain Mind Institute; School of Life Sciences; École Polytechnique Fédérale de Lausanne; Station 19, SV 2805 Lausanne 1015 Switzerland
- Mariia Kaliuzhna
- Center for Neuroprosthetics; École Polytechnique Fédérale de Lausanne; Lausanne Switzerland
- Laboratory of Cognitive Neuroscience; Brain Mind Institute; School of Life Sciences; École Polytechnique Fédérale de Lausanne; Station 19, SV 2805 Lausanne 1015 Switzerland
- Bruno Herbelin
- Center for Neuroprosthetics; École Polytechnique Fédérale de Lausanne; Lausanne Switzerland
- Laboratory of Cognitive Neuroscience; Brain Mind Institute; School of Life Sciences; École Polytechnique Fédérale de Lausanne; Station 19, SV 2805 Lausanne 1015 Switzerland
- Olaf Blanke
- Center for Neuroprosthetics; École Polytechnique Fédérale de Lausanne; Lausanne Switzerland
- Laboratory of Cognitive Neuroscience; Brain Mind Institute; School of Life Sciences; École Polytechnique Fédérale de Lausanne; Station 19, SV 2805 Lausanne 1015 Switzerland
- Department of Neurology; University Hospital; Geneva Switzerland
29
Norcia AM, Appelbaum LG, Ales JM, Cottereau BR, Rossion B. The steady-state visual evoked potential in vision research: A review. J Vis 2015; 15:4. [PMID: 26024451 PMCID: PMC4581566 DOI: 10.1167/15.6.4] [Citation(s) in RCA: 539] [Impact Index Per Article: 59.9] Open
Abstract
Periodic visual stimulation and analysis of the resulting steady-state visual evoked potentials were first introduced over 80 years ago as a means to study visual sensation and perception. From the first single-channel recording of responses to modulated light to the present use of sophisticated digital displays composed of complex visual stimuli and high-density recording arrays, steady-state methods have been applied in a broad range of scientific and applied settings. The purpose of this article is to describe the fundamental stimulation paradigms for steady-state visual evoked potentials and to illustrate these principles through research findings across a range of applications in vision science.
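The basic measurement surveyed in this review is simple to state: a stimulus flickering at a tag frequency produces a narrow spectral peak, commonly quantified as a signal-to-noise ratio against the mean amplitude of neighboring frequency bins. A minimal NumPy sketch of that quantification (function name, parameters, and the neighbor-bin noise estimate are illustrative choices, not a prescription from the review):

```python
import numpy as np

def ssvep_snr(eeg, fs, f_tag, n_neighbors=10):
    """SNR of a steady-state response: amplitude in the FFT bin at the
    tagging frequency divided by the mean amplitude of the surrounding
    bins, which serves as a local noise estimate."""
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg)) / n * 2   # single-sided amplitudes
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    k = int(np.argmin(np.abs(freqs - f_tag)))     # bin of the tag frequency
    lo, hi = max(k - n_neighbors, 1), k + n_neighbors + 1
    # Neighboring bins on both sides, excluding the tag bin itself
    neighbors = np.r_[spectrum[lo:k], spectrum[k + 1:hi]]
    return spectrum[k] / neighbors.mean()
```

A clean steady-state response yields an SNR well above 1 at the tag frequency (and often at its harmonics), while frequencies not driven by the stimulus hover near 1.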
30
Vakli P, Németh K, Zimmer M, Kovács G. The face evoked steady-state visual potentials are sensitive to the orientation, viewpoint, expression and configuration of the stimuli. Int J Psychophysiol 2014; 94:336-50. [DOI: 10.1016/j.ijpsycho.2014.10.008] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6]
|