1. Zhao S, Zhou Y, Ma F, Xie J, Feng C, Feng W. The dissociation of semantically congruent and incongruent cross-modal effects on the visual attentional blink. Front Neurosci 2023; 17:1295010. PMID: 38161792; PMCID: PMC10755906; DOI: 10.3389/fnins.2023.1295010
Abstract
Introduction: Recent studies have found that the sound-induced alleviation of visual attentional blink, a well-known phenomenon exemplifying the beneficial influence of multisensory integration on time-based attention, is larger when the sound is semantically congruent rather than incongruent with the second visual target (T2). Although this audiovisual congruency effect has been attributed mainly to the semantic conflict carried by the incongruent sound restraining that sound from facilitating T2 processing, it remains unclear whether the integrated semantic information carried by the congruent sound benefits T2 processing.
Methods: To dissociate the congruence-induced benefit and the incongruence-induced reduction in the alleviation of visual attentional blink at the behavioral and neural levels, the present study combined behavioral measures and event-related potential (ERP) recordings in a visual attentional blink task wherein the T2-accompanying sound, when delivered, could be semantically neutral in addition to congruent or incongruent with respect to T2.
Results: The behavioral data clearly showed that, compared to the neutral sound, the congruent sound improved T2 discrimination during the blink to a greater degree, whereas the incongruent sound improved it to a lesser degree. The T2-locked ERP data revealed that the early occipital cross-modal N195 component (192-228 ms after T2 onset) was larger in the congruent-sound condition than in the neutral-sound and incongruent-sound conditions, whereas the late parietal cross-modal N440 component (400-500 ms) was prominent only in the incongruent-sound condition.
Discussion: These findings provide strong evidence that the modulating effect of audiovisual semantic congruency on the sound-induced alleviation of visual attentional blink contains not only a late incongruence-induced cost but also an early congruence-induced benefit, thereby demonstrating for the first time an unequivocal congruent-sound-induced benefit in alleviating the limitation of time-based visual attention.
Affiliation(s)
- Song Zhao
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Yuxin Zhou
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Fangfang Ma
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Jimei Xie
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Chengzhi Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Wenfeng Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
2. Dou H, Lei Y, Pan Y, Li H, Astikainen P. Impact of observational and direct learning on fear conditioning generalization in humans. Prog Neuropsychopharmacol Biol Psychiatry 2023; 121:110650. PMID: 36181957; DOI: 10.1016/j.pnpbp.2022.110650
Abstract
Humans gain knowledge about threats not only from their own experiences but also from observing others' behavior. When a neutral stimulus is repeatedly paired with a threat stimulus, the neutral stimulus comes to evoke fear responses; this is known as fear conditioning. When encountering a new event that is similar to one previously associated with a threat, one may feel afraid and produce fear responses; this is called fear generalization. Previous studies have mostly focused on fear conditioning and generalization based on direct learning, but few have explored how observational fear learning affects fear conditioning and generalization. To the best of our knowledge, no previous study has examined the neural correlates of fear conditioning and generalization based on observational learning. In the present study, 58 participants performed a differential conditioning paradigm in which they learned the associations between neutral cues (i.e., geometric figures) and threat stimuli (i.e., electric shock). The learning occurred both through their own experience (i.e., direct learning) and by observing another participant's responses (i.e., observational learning) in a within-subjects design. After each learning condition, each participant independently completed a fear generalization paradigm while their behavioral responses (i.e., expectation of a shock) and electroencephalography (EEG) were recorded. The shock expectancy ratings showed that observational learning, compared to direct learning, reduced the differentiation between the conditioned threat and safety stimuli and increased shock expectancy to the generalization stimuli. The EEG indicated that, during fear learning, threatening conditioned stimuli increased early discrimination (P1) and late motivated attention (late positive potential [LPP]) relative to safety conditioned stimuli in both observational and direct learning.
During fear generalization, early discrimination, late motivated attention, and orienting attention (alpha-event-related desynchronization [alpha-ERD]) to generalization stimuli were reduced in the observational learning condition. These findings suggest that, compared to direct learning, observational learning reduces differential fear learning and increases the generalization of fear, which might be associated with reduced discrimination of, and attention to, the generalization stimuli.
Affiliation(s)
- Haoran Dou
- Institute for Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China; Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
- Yi Lei
- Institute for Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China
- Yafeng Pan
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China
- Hong Li
- Institute for Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China; School of Psychology, South China Normal University, Guangzhou, China
- Piia Astikainen
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
3. Li S, Ding R, Zhao D, Zhou X, Zhan B, Luo W. Processing of emotions expressed through eye regions attenuates attentional blink. Int J Psychophysiol 2022; 182:1-11. DOI: 10.1016/j.ijpsycho.2022.07.010
4. Zhao S, Wang C, Feng C, Wang Y, Feng W. The interplay between audiovisual temporal synchrony and semantic congruency in the cross-modal boost of the visual target discrimination during the attentional blink. Hum Brain Mapp 2022; 43:2478-2494. PMID: 35122347; PMCID: PMC9057096; DOI: 10.1002/hbm.25797
Abstract
The visual attentional blink can be substantially reduced by delivering a task-irrelevant sound synchronously with the second visual target (T2), and this effect is further modulated by the semantic congruency between the sound and T2. However, whether the cross-modal benefit originates from audiovisual interactions or sound-induced alertness remains controversial, and whether the semantic congruency effect is contingent on audiovisual temporal synchrony needs further investigation. The current study investigated these questions by recording event-related potentials (ERPs) in a visual attentional blink task wherein a sound could either synchronize with T2, precede T2 by 200 ms, follow T2 by 100 ms, or be absent, and could be either semantically congruent or incongruent with T2 when delivered. The behavioral data showed that both the cross-modal boost of T2 discrimination and the further semantic modulation were largest when the sound synchronized with T2. In parallel, the ERP data showed that both the early occipital cross-modal P195 component (192-228 ms after T2 onset) and the late parietal cross-modal N440 component (424-448 ms) were prominent only when the sound synchronized with T2, with the former elicited only when the sound was also semantically congruent and the latter only when it was incongruent. These findings demonstrate not only that the cross-modal boost of T2 discrimination during the attentional blink stems from early audiovisual interactions and that the semantic congruency effect depends on audiovisual temporal synchrony, but also that the semantic modulation can unfold at the early stage of visual discrimination processing.
Affiliation(s)
- Song Zhao
- Department of Psychology, School of Education, Soochow University, Suzhou, China; Department of English, School of Foreign Languages, Soochow University, Suzhou, China
- Chongzhi Wang
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Chengzhi Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Yijun Wang
- Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China
- Wenfeng Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
5. Luo C, Chen W, VanRullen R, Zhang Y, Gaspar CM. Nudging the N170 forward with prior stimulation: bridging the gap between N170 and recognition potential. Hum Brain Mapp 2021; 43:1214-1230. PMID: 34786780; PMCID: PMC8837586; DOI: 10.1002/hbm.25716
Abstract
Evoked response potentials are often divided up into numerous components, each with its own body of literature. But is there less variety than we might suppose? In this study, we nudge one component into looking like another. Both the N170 and the recognition potential (RP) are N1 components in response to familiar objects. However, the RP is often measured with a forward mask that ends at stimulus onset, whereas the N170 is often measured with no masking at all. This study investigates how the inter-stimulus interval (ISI) may delay and distort the N170 into an RP by manipulating the temporal gap between forward mask and target. The results revealed inverse relationships between the ISI on the one hand, and the N170 latency, single-trial N1 jitter (an approximation of N1 width), and reaction time on the other hand. Importantly, we find that scalp topographies have a unique signature at the N1 peak across all conditions, from the longest gap (N170) to the shortest (RP). These findings show that the mask-delayed N1 is still the same N170, even under conditions that are normally associated with a different component like the RP. In general, our results suggest greater synthesis in the study of event-related potential components.
Affiliation(s)
- Canhuang Luo
- Center for Cognition and Brain Disorders, The Affiliated Hospital of Hangzhou Normal University, Hangzhou, China; Institute of Psychological Sciences, Hangzhou Normal University, Hangzhou, China; Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, China; Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France; CerCo, CNRS UMR 5549, Toulouse, France
- Wei Chen
- Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates
- Rufin VanRullen
- Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France; CerCo, CNRS UMR 5549, Toulouse, France
- Ye Zhang
- Center for Cognition and Brain Disorders, The Affiliated Hospital of Hangzhou Normal University, Hangzhou, China; Institute of Psychological Sciences, Hangzhou Normal University, Hangzhou, China; Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, China
- Carl Michael Gaspar
- Center for Cognition and Brain Disorders, The Affiliated Hospital of Hangzhou Normal University, Hangzhou, China; Institute of Psychological Sciences, Hangzhou Normal University, Hangzhou, China; Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, China; Zayed University, Abu Dhabi, United Arab Emirates
6. Zhao S, Feng C, Liao Y, Huang X, Feng W. Attentional blink suppresses both stimulus-driven and representation-driven cross-modal spread of attention. Psychophysiology 2021; 58:e13761. PMID: 33400294; DOI: 10.1111/psyp.13761
Abstract
Previous studies have shown that the visual attention effect can spread automatically to the task-irrelevant auditory modality through either a stimulus-driven binding process or a representation-driven priming process. Using an attentional blink paradigm, the present study investigated whether the long-latency stimulus-driven and representation-driven cross-modal spread of attention would be inhibited or facilitated when the attentional resources operating at the post-perceptual stage of processing are inadequate, while ensuring that all visual stimuli were spatially attended and that the representations of the visual target object categories were activated, conditions previously thought to be the only endogenous prerequisites for triggering cross-modal spread of attention. The results demonstrated that both types of attentional spreading were completely suppressed during the attentional blink interval but were highly prominent outside it, with the stimulus-driven process independent of, and the representation-driven process dependent on, audiovisual semantic congruency. These findings provide the first evidence that the occurrence of both stimulus-driven and representation-driven spread of attention is contingent on the amount of post-perceptual attentional resources responsible for the late consolidation of visual stimuli, and that the early detection of visual stimuli and the top-down activation of visual representations are not the sole endogenous prerequisites for triggering either type of cross-modal attentional spreading.
Affiliation(s)
- Song Zhao
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Chengzhi Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Yu Liao
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Xinyin Huang
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Wenfeng Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, China
7. Zhao S, Feng C, Huang X, Wang Y, Feng W. Neural Basis of Semantically Dependent and Independent Cross-Modal Boosts on the Attentional Blink. Cereb Cortex 2020; 31:2291-2304. DOI: 10.1093/cercor/bhaa362
Abstract
The present study recorded event-related potentials (ERPs) in a visual object-recognition task under the attentional blink paradigm to explore the temporal dynamics of the cross-modal boost on attentional blink and whether this auditory benefit would be modulated by semantic congruency between T2 and the simultaneous sound. Behaviorally, the present study showed that not only a semantically congruent but also a semantically incongruent sound improved T2 discrimination during the attentional blink interval, although the enhancement was larger for the congruent sound. The ERP results revealed that the behavioral improvements induced by both the semantically congruent and incongruent sounds were closely associated with an early cross-modal interaction on the occipital N195 (192-228 ms). In contrast, the lower T2 accuracy for the incongruent than the congruent condition was accompanied by a larger late-occurring centro-parietal N440 (424-448 ms). These findings suggest that the cross-modal boost on attentional blink is hierarchical: the task-irrelevant but simultaneous sound, irrespective of its semantic relevance, first enables T2 to escape the attentional blink by cross-modally strengthening the early stage of visual object-recognition processing, whereas the semantic conflict of the sound begins to interfere with visual awareness only at a later stage, when the representation of the visual object is extracted.
Affiliation(s)
- Song Zhao
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Chengzhi Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Xinyin Huang
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Yijun Wang
- Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China
- Wenfeng Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
8. Zhao W, Chen L, Zhou C, Luo W. Neural Correlates of Emotion Processing in Word Detection Task. Front Psychol 2018; 9:832. PMID: 29887824; PMCID: PMC5982209; DOI: 10.3389/fpsyg.2018.00832
Abstract
In our previous study, we proposed a three-stage model of emotion processing; in the current study, we investigated whether the ERP components differ when the emotional content of stimuli is task-irrelevant. A dual-target rapid serial visual presentation (RSVP) task was used to investigate how the emotional content of words modulates the time course of neural dynamics. Eighteen undergraduates performed the task, in which affectively positive, negative, and neutral adjectives were rapidly presented while event-related potentials (ERPs) were recorded. The N170 component was enhanced for negative words relative to positive and neutral words, indicating that automatic processing of negative information occurred at an early perceptual processing stage. In addition, later brain potentials such as the late positive potential (LPP) were enhanced only for positive words in the 480-580-ms post-stimulus window, whereas a relatively large amplitude signal was elicited by both positive and negative words between 580 and 680 ms. These results indicate that different types of emotional content are processed distinctly in different time windows of the LPP, in contrast with the results of studies on task-relevant emotional processing. More generally, these findings suggest that a negativity bias to negative words is still observed in emotion-irrelevant tasks, and that the LPP component reflects the dynamic separation of emotional valence.
Affiliation(s)
- Wenshuang Zhao
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Liang Chen
- School of Psychology, Southwest University, Chongqing, China
- Chunxia Zhou
- Chongqing College of Electronic Engineering, Chongqing, China
- Wenbo Luo
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Laboratory of Emotion and Mental Health, Chongqing University of Arts and Sciences, Chongqing, China
9. Munk AJL, Hermann A, El Shazly J, Grant P, Hennig J. The Idea Is Good, but…: Failure to Replicate Associations of Oxytocinergic Polymorphisms with Face-Inversion in the N170. PLoS One 2016; 11:e0151991. PMID: 27015428; PMCID: PMC4807783; DOI: 10.1371/journal.pone.0151991
Abstract
BACKGROUND: In event-related potentials, the N170 manifests itself especially in reaction to faces. In the healthy population, face inversion leads to stronger negative amplitudes and prolonged latencies of the N170, effects that are not present in patients with autism spectrum disorder (ASD). ASD has frequently been associated with differences in oxytocinergic neurotransmission. This ERP study aimed to investigate the face-inversion effect in association with oxytocinergic candidate genes. It was expected that risk-allele carriers of the oxytocin-receptor-gene polymorphism (rs53576) and of CD38 (rs379863) would respond similarly to upright and inverted faces, as persons with ASD do. Additionally, reactions to different facial emotional expressions were studied. As there have been difficulties with replications of such molecular genetic association studies, we aimed to replicate our findings in a second study.
METHOD: Seventy-two male subjects in the first study and seventy-eight young male subjects in the replication study completed a face-inversion paradigm while EEG was recorded. DNA was extracted from buccal cells.
RESULTS: Results revealed stronger N170 amplitudes and longer latencies in reaction to inverted faces in comparison to upright ones. Furthermore, effects of emotion on the N170 were evident. These effects were present in both the first and the second study. Whereas we found molecular genetic associations of oxytocinergic polymorphisms with the N170 in the first study, we failed to do so in the replication sample.
CONCLUSION: The results indicate that a deeper theoretical understanding of this research field is needed in order to generate possible explanations for these findings. The results furthermore support the hypothesis that the success of reproducibility is correlated with lower original p-values and larger effect sizes in the original study.
Affiliation(s)
- Aisha J. L. Munk
- Justus Liebig University Giessen, Department of Psychology, Personality and Biological Psychology, Otto-Behaghel-Strasse 10F, 35394 Giessen, Germany
- Andrea Hermann
- Justus Liebig University Giessen, Department of Psychology, Psychotherapy and Systems Neuroscience, Otto-Behaghel-Strasse 10F, 35394 Giessen, Germany
- Jasmin El Shazly
- Justus Liebig University Giessen, Faculty of Medicine, Center for Psychiatry and Psychotherapy, Am Steg 28, 35392 Giessen, Germany
- Phillip Grant
- Justus Liebig University Giessen, Department of Psychology, Personality and Biological Psychology, Otto-Behaghel-Strasse 10F, 35394 Giessen, Germany
- Jürgen Hennig
- Justus Liebig University Giessen, Department of Psychology, Personality and Biological Psychology, Otto-Behaghel-Strasse 10F, 35394 Giessen, Germany
10. Yi S, He W, Zhan L, Qi Z, Zhu C, Luo W, Li H. Emotional noun processing: an ERP study with rapid serial visual presentation. PLoS One 2015; 10:e0118924. PMID: 25738633; PMCID: PMC4349822; DOI: 10.1371/journal.pone.0118924
Abstract
Reading is an important part of our daily life, and rapid responses to emotional words have received a great deal of research interest. Our study employed rapid serial visual presentation to examine the time course of emotional noun processing using event-related potentials. We performed a dual-task experiment in which subjects were required to judge whether a given number was odd or even, and into which category each emotional noun fit. Regarding the P1, we found no negativity bias for emotional nouns. However, emotional nouns elicited larger N170 amplitudes in the left hemisphere than did neutral nouns, indicating that emotional words can be discriminated from neutral words at a later processing stage. Furthermore, positive, negative, and neutral words differed from each other in the late positive complex, indicating that in the third stage even different emotions can be discerned. Thus, our results indicate that in a three-stage model the latter two stages are more stable and universal.
Affiliation(s)
- Shengnan Yi
- School of Psychology, Liaoning Normal University, Dalian, China
- Weiqi He
- School of Psychology, Liaoning Normal University, Dalian, China
- Lei Zhan
- School of Psychology, Liaoning Normal University, Dalian, China
- Zhengyang Qi
- School of Psychology, Liaoning Normal University, Dalian, China
- Chuanlin Zhu
- School of Psychology, Liaoning Normal University, Dalian, China
- Wenbo Luo
- School of Psychology, Liaoning Normal University, Dalian, China
- Laboratory of Cognition and Mental Health, Chongqing University of Arts and Sciences, Chongqing, China
- Hong Li
- Research Centre of Brain Function and Psychological Science, Shenzhen University, Shenzhen, China
11. Wang H, Sun P, Ip C, Zhao X, Fu S. Configural and featural face processing are differently modulated by attentional resources at early stages: an event-related potential study with rapid serial visual presentation. Brain Res 2015; 1602:75-84. PMID: 25601005; DOI: 10.1016/j.brainres.2015.01.017
Abstract
It is widely reported that face recognition relies on two dissociable mechanisms: featural and configural processing. However, it is unclear whether these two processing types involve different neural mechanisms and are differently modulated by attentional resources. Using the attentional blink (AB) paradigm, we aimed to investigate the effect of attentional resources on configural and featural face processing by recording event-related potentials (ERPs). The amount of attentional resources was manipulated as deficient or sufficient by presenting the second target (T2) in or out of the AB period, respectively. We found that in addition to a traditional P3 attention effect, the amplitude of the N170/VPP to the T2 stimuli was also sensitive to attentional resources, suggesting that attention affects face processing at an earlier perceptual processing stage. More importantly, configural face processing elicited a larger posterior P1 compared to featural face processing, but only when the attentional resources were sufficient. In contrast, the anterior N1 was larger for configural relative to featural face processing only when the attentional resources were deficient. These results suggest that early stages of configural and featural face processing are differently modulated by attentional resources, possibly with different underlying mechanisms.
Affiliation(s)
- Hailing Wang
- Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China
- Pei Sun
- Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China
- Chengteng Ip
- Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China
- Xin Zhao
- Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China
- Shimin Fu
- Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China
12. Wang W, Miao D, Zhao L. Visual MMN elicited by orientation changes of faces. J Integr Neurosci 2014; 13:485-95. DOI: 10.1142/s0219635214500137
13. Wang W, Miao D, Zhao L. Automatic detection of orientation changes of faces versus non-face objects: A visual MMN study. Biol Psychol 2014; 100:71-8. DOI: 10.1016/j.biopsycho.2014.05.004
14. Zhang D, He W, Wang T, Luo W, Zhu X, Gu R, Li H, Luo YJ. Three stages of emotional word processing: an ERP study with rapid serial visual presentation. Soc Cogn Affect Neurosci 2014; 9:1897-903. PMID: 24526185; DOI: 10.1093/scan/nst188
Abstract
Rapid responses to emotional words play a crucial role in social communication. This study employed event-related potentials to examine the time course of neural dynamics involved in emotional word processing. Participants performed a dual-target task in which positive, negative, and neutral adjectives were rapidly presented. The early occipital P1 was larger when elicited by negative words, indicating that the first stage of emotional word processing mainly differentiates between non-threatening and potentially threatening information. The N170 and the early posterior negativity were larger for positive and negative words, reflecting the emotional/non-emotional discrimination stage of word processing. The late positive component not only distinguished emotional words from neutral words, but also differentiated between positive and negative words. This represents the third stage of emotional word processing, the emotion separation. The present results indicated that, similar to the three-stage model of facial expression processing, the neural processing of emotional words can also be divided into three stages. These findings prompt us to believe that the nature of emotion can be analyzed by the brain independent of stimulus type, and that the three-stage scheme may be a common model for emotional information processing in the context of limited attentional resources.
Affiliation(s)
- Dandan Zhang, Weiqi He, Ting Wang, Wenbo Luo, Xiangru Zhu, Ruolei Gu, Hong Li, Yue-Jia Luo
- Institute of Affective and Social Neuroscience, Shenzhen University, Shenzhen 518060; School of Psychology, Liaoning Normal University, Dalian 116029; Laboratory of Cognition and Mental Health, Chongqing University of Arts and Sciences, Chongqing 402168; Department of Psychology, Henan University, Kaifeng 475004; and Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, People's Republic of China
|
15
|
Abstract
This study used event-related potentials to investigate the sensitivity of the P1 and N170 components to human-like and animal-like makeup stimuli, which were derived from pictures of Peking opera characters. As predicted, human-like makeup stimuli elicited larger P1 and N170 amplitudes than did animal-like makeup stimuli. Interestingly, a right hemisphere advantage was observed for human-like but not for animal-like makeup stimuli. Dipole source analyses of the 130-200 ms window showed that the bilateral fusiform face area may contribute to the differential sensitivity of the N170 component in response to human-like and animal-like makeup stimuli. The present study suggests that the amplitudes of both the P1 and the N170 are sensitive to the mouth component of face-like stimuli.
|