1.
Keha E, Naftalovich H, Shahaf A, Kalanthroff E. Control your emotions: evidence for a shared mechanism of cognitive and emotional control. Cogn Emot 2024;38:1330-1342. PMID: 38465905. DOI: 10.1080/02699931.2024.2326902.
Abstract
The current investigation examined the bidirectional effects of cognitive control and emotional control and the overlap between these two systems in regulating emotions. Based on recent neural and cognitive findings, we hypothesised that the two control systems largely overlap, as control recruited for one system (either emotional or cognitive) can be used by the other. In two experiments, participants completed novel versions of either the Stroop task (Experiment 1) or the Flanker task (Experiment 2) in which the emotional and cognitive control systems were actively manipulated: a high or a low emotional-load condition (achieved by varying the proportion of negative-valence emotional cues) was crossed with a high or a low cognitive-control condition (achieved by varying the proportion of conflict-laden trials). In both experiments, participants' performance was impaired when both emotional and cognitive control were low, but improved significantly, and to a similar degree, when either of the two control mechanisms was activated. In Experiment 2, performance improved further when both systems were activated. Our results provide further support for a more integrative notion of control in which the two systems (emotional and cognitive control) not only influence each other but also extensively overlap.
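To make the two load manipulations concrete, here is a minimal sketch of a trial-list generator in which the proportion of conflict (incongruent) trials sets the cognitive-control load and the proportion of negative-valence cues sets the emotional load. The proportions, trial counts, and field names are illustrative assumptions, not values reported for the experiments.

```python
# Hypothetical trial-list generator for a proportion-manipulation design.
# p_conflict and p_negative are set independently per block/condition.
import random

def make_block(n_trials, p_conflict, p_negative, seed=0):
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        trials.append({
            "congruency": "incongruent" if rng.random() < p_conflict else "congruent",
            "cue_valence": "negative" if rng.random() < p_negative else "neutral",
        })
    rng.shuffle(trials)  # randomize presentation order within the block
    return trials

# Two of the four crossed conditions (values are illustrative):
high_cog_high_emo = make_block(120, p_conflict=0.75, p_negative=0.75)
low_cog_low_emo = make_block(120, p_conflict=0.25, p_negative=0.25)
```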
Affiliation(s)
- Eldad Keha
- Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Psychology, Achva Academic College, Arugot, Israel
- Hadar Naftalovich
- Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel
- Ariel Shahaf
- Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel
- Eyal Kalanthroff
- Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Psychiatry, Columbia University Medical Center, New York, NY, USA
2.
Kokash J, Rumschlag JA, Razak KA. Cortical region-specific recovery of auditory temporal processing following noise-induced hearing loss. Neuroscience 2024;560:143-157. PMID: 39284433. DOI: 10.1016/j.neuroscience.2024.09.011.
Abstract
Noise-induced hearing loss (NIHL) studies have focused on the lemniscal auditory pathway, but little is known about how NIHL impacts different cortical regions. Here we compared response recovery trajectories in the auditory and frontal cortices (AC, FC) of mice following NIHL. We recorded EEG responses from awake mice (male n = 15, female n = 14) before and after NIHL (longitudinal design) to quantify event-related potentials and gap-in-noise temporal processing. Hearing loss was verified by measuring the auditory brainstem response (ABR) before and at 1, 10, 23, and 45 days after noise exposure. Resting EEG, event-related potentials (ERPs), and auditory steady-state responses (ASSRs) were recorded at the same time points after NIHL. The inter-trial phase coherence (ITPC) of the ASSR was measured to quantify the ability of AC and FC to synchronize responses to short gaps embedded in noise. Despite the absence of click-evoked ABRs up to 90 dB SPL and up to 45 days post-exposure, ERPs recovered fully to pre-NIHL levels in both AC and FC in roughly 50% of the mice. The ASSR ITPC was reduced in AC and FC in all mice on day 1 after NIHL. The AC showed full recovery of ITPC over 45 days, whereas the FC, despite ERP amplitude recovery, did not. These results indicate post-NIHL plasticity with similar response-amplitude recovery across AC and FC, but cortical region-specific trajectories in temporal-processing recovery.
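For readers unfamiliar with the measure, below is a minimal sketch of how inter-trial phase coherence can be computed from trial-wise EEG: extract the instantaneous phase in the band of interest and take the magnitude of the mean unit phasor across trials. The sampling rate, filter band, and 40 Hz analysis frequency are illustrative assumptions, not parameters taken from the study.

```python
# Minimal ITPC sketch (assumed parameters, not the study's recording setup).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def itpc(trials: np.ndarray, fs: float, f_lo: float, f_hi: float) -> np.ndarray:
    """ITPC over time for an (n_trials, n_samples) EEG array."""
    b, a = butter(4, [f_lo, f_hi], btype="band", fs=fs)
    filtered = filtfilt(b, a, trials, axis=1)           # isolate the ASSR band
    phase = np.angle(hilbert(filtered, axis=1))         # instantaneous phase per trial
    return np.abs(np.mean(np.exp(1j * phase), axis=0))  # 0 = random phase, 1 = perfect locking

# Example: 100 simulated trials with a weak phase-locked 40 Hz component in noise
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
trials = 0.2 * np.sin(2 * np.pi * 40 * t) + rng.standard_normal((100, t.size))
print(itpc(trials, fs, 35.0, 45.0).mean())  # well above the ~0 noise floor
```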
Affiliation(s)
- J Kokash
- Graduate Neuroscience Program, University of California, Riverside, United States
- J A Rumschlag
- Graduate Neuroscience Program, University of California, Riverside, United States
- K A Razak
- Graduate Neuroscience Program, University of California, Riverside, United States; Department of Psychology, University of California, Riverside, United States
3.
Miron S, Kalanthroff E. Negative emotional cues improve free recall of positive and neutral words in unmedicated patients with major depressive disorder. Cogn Behav Ther 2024;53:409-422. PMID: 38477620. DOI: 10.1080/16506073.2024.2328288.
Abstract
Individuals with major depressive disorder (MDD) exhibit attentional biases toward negative, mood-congruent stimuli while filtering out positive and neutral stimuli, resulting in memory biases toward negative content. While attentional and memory biases in MDD have been extensively studied, the mechanisms underlying these biases remain unclear. The current study investigates a novel model proposing that exposure to negative emotional cues triggers a transient "attentional window" in individuals with MDD, leading to heightened and deeper cognitive processing of any subsequent information, irrespective of its content. Forty-two unmedicated patients with MDD and no comorbid disorder and 41 healthy controls completed six blocks of the emotional memory task, in which they were asked to watch a short video (negative, neutral, or positive valence) followed by a memory test on a list of neutral- or positive-valence words. Results indicated that participants with MDD, but not healthy controls, had better recall performance after a negative video than after neutral or positive videos, and that this effect occurred for both neutral and positive word lists. These findings provide evidence that participants with MDD engage in deeper information processing following exposure to negative emotional stimuli. Potential clinical implications are discussed.
Affiliation(s)
- Sapir Miron
- Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel
- Eyal Kalanthroff
- Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel
4.
K A, Prasad S, Chakrabarty M. Trait anxiety modulates the detection sensitivity of negative affect in speech: an online pilot study. Front Behav Neurosci 2023;17:1240043. PMID: 37744950. PMCID: PMC10512416. DOI: 10.3389/fnbeh.2023.1240043. Open access.
Abstract
Acoustic perception of emotions in speech is relevant for humans to navigate the social environment optimally. While sensory perception is known to be influenced by ambient noise and by internal bodily states (e.g., emotional arousal and anxiety), their relationship to human auditory perception is relatively less understood. In a supervised, online pilot experiment conducted outside the artificially controlled laboratory environment, we asked whether the detection sensitivity of emotions conveyed by human speech-in-noise (acoustic signals) varies between individuals with relatively lower and higher levels of subclinical trait anxiety. In the task, participants (n = 28) discriminated the target emotion conveyed by temporally unpredictable acoustic signals (signal-to-noise ratio = 10 dB), which were manipulated at four levels (Happy, Neutral, Fear, and Disgust). We calculated the empirical area under the curve (a measure of acoustic signal-detection sensitivity) based on signal detection theory. Individuals with high trait anxiety, relative to those with low trait anxiety, showed significantly lower detection sensitivity to acoustic signals of the negative emotions (Disgust and Fear), and significantly lower detection sensitivity to acoustic signals averaged across all emotions. The results from this pilot study with a small but statistically relevant sample size suggest that trait-anxiety levels influence the overall acoustic detection of speech-in-noise, especially signals conveying threatening/negative affect. The findings are relevant for future research on acoustic-perception anomalies underlying affective traits and disorders.
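As a rough illustration of the sensitivity measure, here is a minimal sketch of the empirical (nonparametric) area under the ROC curve: the probability that a randomly chosen signal trial receives a higher response than a randomly chosen noise trial, with ties counted half. The per-trial confidence ratings are hypothetical, since the abstract does not specify the response format.

```python
# Empirical AUC via the rank (Mann-Whitney) construction; hypothetical data.
import numpy as np

def empirical_auc(signal_ratings, noise_ratings):
    """P(rating_signal > rating_noise) + 0.5 * P(tie)."""
    s = np.asarray(signal_ratings, dtype=float)
    n = np.asarray(noise_ratings, dtype=float)
    gt = (s[:, None] > n[None, :]).mean()   # all signal/noise trial pairs
    eq = (s[:, None] == n[None, :]).mean()  # ties split evenly
    return gt + 0.5 * eq

# Hypothetical 1-6 confidence ratings for emotion-present vs. noise-only trials
signal = [5, 6, 4, 5, 3, 6, 4]
noise = [2, 3, 1, 4, 2, 3, 2]
print(empirical_auc(signal, noise))  # 0.5 = chance, 1.0 = perfect detection
```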
Affiliation(s)
- Achyuthanand K
- Department of Computational Biology, Indraprastha Institute of Information Technology Delhi, New Delhi, India
- Saurabh Prasad
- Department of Computer Science and Engineering, Indraprastha Institute of Information Technology Delhi, New Delhi, India
- Mrinmoy Chakrabarty
- Department of Social Sciences and Humanities, Indraprastha Institute of Information Technology Delhi, New Delhi, India
- Centre for Design and New Media, Indraprastha Institute of Information Technology Delhi, New Delhi, India
5.
Zhang H, Xie J, Xiao Y, Cui G, Xu G, Tao Q, Gebrekidan YY, Yang Y, Ren Z, Li M. Steady-state auditory motion based potentials evoked by intermittent periodic virtual sound source and the effect of auditory noise on EEG enhancement. Hear Res 2023;428:108670. PMID: 36563411. DOI: 10.1016/j.heares.2022.108670.
Abstract
Hearing is one of the most important forms of human perception, and humans can track the movement of sound sources in complex environments. Building on this ability, this study explored whether an intermittent, periodically moving sound source can elicit a steady-state brain response. A novel stimulation paradigm was designed in which virtual sound-source positions, rendered with head-related transfer functions (HRTFs), changed in discrete steps along a continuous, orderly trajectory. Auditory-motion stimulation paradigms with different noise levels were then created by varying the signal-to-noise ratio (SNR). The characteristics of the brain response, and the effects of different noise levels on it, were studied by analyzing the electroencephalogram (EEG) signals evoked by the proposed stimulation. Experimental results showed that the proposed paradigm elicited a novel steady-state auditory evoked potential (AEP), the steady-state motion auditory evoked potential (SSMAEP), and that moderate noise enhanced SSMAEP amplitude and the corresponding brain connectivity. This study enriches the known types of AEPs and provides insight into how the brain processes moving sound sources and how noise affects that processing.
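As a sketch of the SNR manipulation described above, the snippet below scales a noise track against a rendered stimulus to hit a target SNR. A placeholder tone stands in for the HRTF-rendered moving source (real binaural rendering would convolve the source with measured HRIRs per azimuth), and all parameter values are illustrative assumptions.

```python
# Mixing a stimulus with noise at a target SNR (assumed, illustrative values).
import numpy as np

def mix_at_snr(signal: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale noise so that 10*log10(P_signal / P_noise) equals snr_db."""
    p_sig = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_sig / (p_noise * 10 ** (snr_db / 10)))
    return signal + gain * noise

fs = 44100
t = np.arange(0, 1.0, 1 / fs)
carrier = np.sin(2 * np.pi * 500 * t)          # placeholder for the HRTF-rendered source
noise = np.random.default_rng(1).standard_normal(t.size)
stim = mix_at_snr(carrier, noise, snr_db=6.0)  # one of several hypothetical SNR conditions
```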
Affiliation(s)
- Huanqing Zhang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Jun Xie
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China; School of Mechanical Engineering, Xinjiang University, Urumqi, China
- Yi Xiao
- National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing, China
- Guiling Cui
- National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing, China
- Guanghua Xu
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China
- Qing Tao
- School of Mechanical Engineering, Xinjiang University, Urumqi, China
- Yuzhe Yang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Zhiyuan Ren
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Min Li
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China