1
Mota-Rojas D, Whittaker AL, Strappini AC, Orihuela A, Domínguez-Oliva A, Mora-Medina P, Álvarez-Macías A, Hernández-Avalos I, Olmos-Hernández A, Reyes-Sotelo B, Grandin T. Human animal relationships in Bos indicus cattle breeds addressed from a Five Domains welfare framework. Front Vet Sci 2024; 11:1456120. PMID: 39290508; PMCID: PMC11405345; DOI: 10.3389/fvets.2024.1456120.
Abstract
This review has two objectives. The first is to investigate differences in temperament between Bos indicus and Bos taurus breeds and to determine how positive treatment affects production compared with negative human-animal relationships (HAR), using the Five Domains Model as a framework. The second is to discuss potential strategies for achieving better HAR when working with Bos indicus cattle. Bos indicus cattle are more reactive and temperamental than Bos taurus cattle, and when HAR are evaluated they may react with greater intensity. They may be more likely to develop a negative emotional state, especially extensively raised Bos indicus cattle that are handled only a few times each year. Conversely, Bos indicus cattle can have positive emotional states when they have frequent positive interactions with people. Interactions with people, both positive and negative, fall under the fourth Domain of the Five Domains Model. Cattle that are more reactive during handling may also have lower weight gain, even when feed is abundant; this falls under the first Domain, Nutrition. When cattle are handled in races and corrals, injuries may be more likely to occur; injuries and bruises fall under the third Domain, Health. Injuries can be caused either by poor handling practices or by poor handling facilities. Yelling and electric prod use are examples of poor HAR. Issues in the second Domain, Environment, include broken facilities and slick, slippery floors associated with falls.
Affiliation(s)
- Daniel Mota-Rojas: Neurophysiology, Behavior and Animal Welfare Assessment, DPAA, Universidad Autónoma Metropolitana (UAM), Mexico City, Mexico
- Alexandra L Whittaker: School of Animal and Veterinary Sciences, Roseworthy Campus, University of Adelaide, Roseworthy, SA, Australia
- Ana C Strappini: Animal Health & Welfare, Wageningen Livestock Research, Wageningen University & Research, Wageningen, Netherlands
- Agustín Orihuela: Facultad de Ciencias Agropecuarias, Universidad Autónoma del Estado de Morelos, Cuernavaca, Mexico
- Adriana Domínguez-Oliva: Neurophysiology, Behavior and Animal Welfare Assessment, DPAA, Universidad Autónoma Metropolitana (UAM), Mexico City, Mexico
- Patricia Mora-Medina: Facultad de Estudios Superiores Cuautitlán, Universidad Nacional Autónoma de México (UNAM), Cuautitlán, Mexico
- Adolfo Álvarez-Macías: Neurophysiology, Behavior and Animal Welfare Assessment, DPAA, Universidad Autónoma Metropolitana (UAM), Mexico City, Mexico
- Ismael Hernández-Avalos: Facultad de Estudios Superiores Cuautitlán, Universidad Nacional Autónoma de México (UNAM), Cuautitlán, Mexico
- Adriana Olmos-Hernández: Division of Biotechnology-Bioterio and Experimental Surgery, Instituto Nacional de Rehabilitación Luis Guillermo Ibarra Ibarra (INR-LGII), Mexico City, Mexico
- Brenda Reyes-Sotelo: Neurophysiology, Behavior and Animal Welfare Assessment, DPAA, Universidad Autónoma Metropolitana (UAM), Mexico City, Mexico
- Temple Grandin: Department of Animal Science, Colorado State University, Fort Collins, CO, United States
2
Lv X, Wang Y, Zhang Y, Ma S, Liu J, Ye K, Wu Y, Voon V, Sun B. Auditory entrainment coordinates cortical-BNST-NAc triple time locking to alleviate the depressive disorder. Cell Rep 2024; 43:114474. PMID: 39127041; DOI: 10.1016/j.celrep.2024.114474.
Abstract
Listening to music is a promising and accessible intervention for alleviating symptoms of major depressive disorder, but the neural mechanisms underlying its antidepressant effects remain unclear. In this study of patients with depression, we used auditory entrainment to evaluate intracranial recordings in the bed nucleus of the stria terminalis (BNST) and nucleus accumbens (NAc), along with temporal scalp electroencephalography (EEG). We highlight music-induced synchronization across this circuit: synchronization initiates with temporal theta oscillations, which subsequently induce local gamma oscillations in the BNST-NAc circuit. Critically, the external entrainment induced a modulatory effect from the auditory cortex on the BNST-NAc circuit, activating the antidepressant response and highlighting the causal role of physiological entrainment in enhancing it. Our study explores the pivotal role of the auditory cortex and proposes a neural oscillation triple time-locking model, emphasizing the capacity of the auditory cortex to access the BNST-NAc circuit.
Affiliation(s)
- Xin Lv: Center of Functional Neurosurgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Department of Neurosurgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yuhan Wang: Center of Functional Neurosurgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Department of Neurosurgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yingying Zhang: Center of Functional Neurosurgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Neural and Intelligence Engineering Centre, Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Shuo Ma: Center of Functional Neurosurgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jie Liu: Center of Functional Neurosurgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Department of Neurosurgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Kuanghao Ye: Center of Functional Neurosurgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Department of Neurosurgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yunhao Wu: Center of Functional Neurosurgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Department of Neurosurgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Valerie Voon: Center of Functional Neurosurgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Neural and Intelligence Engineering Centre, Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China; Department of Psychiatry, Addenbrookes Hospital, University of Cambridge, CB2 0QQ Cambridge, UK
- Bomin Sun: Center of Functional Neurosurgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Department of Neurosurgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
3
Trevor C, Frühholz S. Music as an Evolved Tool for Socio-Affective Fiction. Emot Rev 2024; 16:180-194. PMID: 39101012; PMCID: PMC11294008; DOI: 10.1177/17540739241259562.
Abstract
The question of why music evolved has been contemplated and debated for centuries across multiple disciplines. While many theories have been posited, they still do not fully answer the question of why humans began making music. Adding to the effort to solve this mystery, we propose the socio-affective fiction (SAF) hypothesis. Humans have a unique biological need to strengthen emotion regulation, and simulated emotional situations, like dreams, can help address that need. Immersion is key for such simulations to successfully exercise people's emotions. We therefore propose that music evolved as a signal for SAF to increase the immersive potential of storytelling and thereby better exercise people's emotions. In this review, we outline the SAF hypothesis and present cross-disciplinary evidence.
Affiliation(s)
- Caitlyn Trevor: Cognitive and Affective Neuroscience Unit, University of Zurich, Zürich, Switzerland; Music Department, University of Birmingham, Birmingham, UK
- Sascha Frühholz: Cognitive and Affective Neuroscience Unit, University of Zurich, Zürich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zürich, Switzerland; Department of Psychology, University of Oslo, Oslo, Norway
4
Vidal M, Onderdijk KE, Aguilera AM, Six J, Maes PJ, Fritz TH, Leman M. Cholinergic-related pupil activity reflects level of emotionality during motor performance. Eur J Neurosci 2024; 59:2193-2207. PMID: 37118877; DOI: 10.1111/ejn.15998.
Abstract
Pupil size covaries with the diffusion rate of cholinergic and noradrenergic neurons throughout the brain, which are essential to arousal. Recent findings suggest that slow pupil fluctuations during locomotion index sustained activity in cholinergic axons, whereas phasic dilations relate to activity in noradrenergic axons. Here, we investigated movement-induced arousal (i.e., by singing and swaying to music), hypothesising that actively engaging in musical behaviour provokes stronger emotional engagement in participants and leads to qualitatively different patterns of tonic and phasic pupil activity. A challenge in the analysis of pupil data is the turbulent behaviour of pupil diameter due to exogenous ocular activity commonly encountered during motor tasks, together with the high variability typically found between individuals. To address this, we developed an algorithm that adaptively estimates and removes pupil responses to ocular events, as well as a functional data methodology, derived from Pfaff's generalised arousal, that provides a new statistical dimension for interpreting pupil data in terms of putative neuromodulatory signalling. We found that actively engaging in singing enhanced slow cholinergic-related pupil dilations, and that having the opportunity to move the body while performing amplified the effect of singing on pupil activity. Phasic pupil oscillations during motor execution attenuated over time, which is often interpreted as a measure of sense of agency over movement.
Affiliation(s)
- Marc Vidal: IPEM, Ghent University, Ghent, Belgium; Department of Statistics and Operations Research, Institute of Mathematics, University of Granada, Granada, Spain; Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Ana M Aguilera: Department of Statistics and Operations Research, Institute of Mathematics, University of Granada, Granada, Spain
- Joren Six: IPEM, Ghent University, Ghent, Belgium
- Thomas Hans Fritz: IPEM, Ghent University, Ghent, Belgium; Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
5
Hohneck A, Reyser C, Usselmann R, Heinemann L, Weingaertner S, Reckling H, Schumacher G, Burkholder I, Merx K, Hofmann WK, Hofheinz RD. Hemodynamic and Stress Response After Sound Intervention with Different Headphone Systems: A Double-Blind Randomized Study in Healthy Volunteers Working in the Health Care Sector. J Integr Complement Med 2024; 30:360-370. PMID: 37819750; DOI: 10.1089/jicm.2022.0757.
Abstract
Objectives: Two headphone systems using different sound technologies were compared to investigate the effects of a sound intervention on cardiovascular parameters, indicators of stress, and subjective feelings. Methods: One hundred volunteers working in the health care sector who reported elevated workplace-related stress were enrolled and randomized to a 12-min sound intervention (classical music) delivered either through a conventional headphone ("MEZE 99 Classic") or through the same, but internally modified, headphone ("Lautsaenger"). Cardiovascular parameters were measured with the VascAssist2.0 before and after the sound intervention. In addition, participants completed questionnaires on burnout risk and emotions/stress. Results: The study population consisted mainly of female participants (n = 83), the majority being students (42%). Median age was 32.5 years (range 21-71). A significant reduction in aortic pulse wave velocity, a measure of arterial stiffness, and in heart rate was observed within both treatment arms. Systolic blood pressure and arterial flow resistance were also reduced by the sound intervention, although these effects were documented only with Lautsaenger. Treatment groups were comparable in terms of subjective feedback: a significant increase in emotional wellbeing was achieved with both headphone systems. Conclusions: A single short-term sound intervention appears able to achieve objective cardiovascular improvements in healthy volunteers reporting subjective symptoms of workplace-related stress, using either of two different headphone systems. Moreover, significant emotional improvement was reported within both arms. Trial Registration: ISRCTN registry 70947363, registered August 13, 2021.
Affiliation(s)
- Anna Hohneck: Department of Cardiology, Angiology, Hemostaseology and Medical Intensive Care, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany; European Center for AngioScience (ECAS) and German Center for Cardiovascular Research (DZHK) Partner Site Heidelberg/Mannheim, Mannheim, Germany
- Christina Reyser: Department of Hematology and Oncology, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
- Rimma Usselmann: Department of Hematology and Oncology, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
- Lara Heinemann: Department of Hematology and Oncology, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
- Simone Weingaertner: Department of Hematology and Oncology, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
- Hardy Reckling: Corporate Health Management, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Iris Burkholder: Department of Nursing and Health, University of Applied Sciences of the Saarland, Saarbruecken, Germany
- Kirsten Merx: Department of Hematology and Oncology, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
- Wolf-Karsten Hofmann: Department of Hematology and Oncology, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
- Ralf-Dieter Hofheinz: Department of Hematology and Oncology, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
6
Ren Y, Brown TI. Beyond the ears: A review exploring the interconnected brain behind the hierarchical memory of music. Psychon Bull Rev 2024; 31:507-530. PMID: 37723336; DOI: 10.3758/s13423-023-02376-1.
Abstract
Music is a ubiquitous element of daily life, and understanding how music memory is represented and expressed in the brain is key to understanding how music can influence everyday cognitive tasks. The current music-memory literature is built on data from very heterogeneous memory tasks, and the neural correlates appear to differ depending on the form of memory function targeted. Such heterogeneity leaves many exceptions and conflicts in the data underexplained (e.g., hippocampal involvement in music memory is debated). This review provides an overview of existing neuroimaging results from music-memory studies and concludes that, although music is a special class of event in our lives, the memory systems behind it do in fact share neural mechanisms with memories from other modalities. We suggest that dividing music memory into levels of a hierarchy (a structural level and a semantic level) helps explain the overlap and divergence in the neural networks involved. This is grounded in the fact that memorizing a piece of music recruits brain clusters that separately support functions including, but not limited to, syntax storage and retrieval, temporal processing, prediction-versus-reality comparison, stimulus feature integration, personal memory associations, and emotion perception. The cross-talk between frontal-parietal music structural processing centers and subcortical emotion and context encoding areas explains why music is not only easily memorable but can also serve as strong contextual information for encoding and retrieving nonmusical information in our lives.
Affiliation(s)
- Yiren Ren: Georgia Institute of Technology, College of Science, School of Psychology, Atlanta, GA, USA
- Thackery I Brown: Georgia Institute of Technology, College of Science, School of Psychology, Atlanta, GA, USA
7
Trost W, Trevor C, Fernandez N, Steiner F, Frühholz S. Live music stimulates the affective brain and emotionally entrains listeners in real time. Proc Natl Acad Sci U S A 2024; 121:e2316306121. PMID: 38408255; DOI: 10.1073/pnas.2316306121.
Abstract
Music is powerful in conveying emotions and triggering affective brain mechanisms, yet affective brain responses in previous studies have been rather inconsistent, potentially because of the non-adaptive nature of the recorded music used so far. Live music, by contrast, can be dynamic and adaptive, and is often modulated in response to audience feedback to maximize emotional responses in listeners. Here, we introduce a setup for studying emotional responses to live music in a closed-loop neurofeedback design. This setup linked live performances by musicians to neural processing in listeners, with listeners' amygdala activity displayed to musicians in real time. Brain activity was measured using functional MRI, and amygdala activity in particular was quantified in real time for the neurofeedback signal. Live pleasant and unpleasant piano music performed in response to amygdala neurofeedback from listeners was acoustically very different from comparable recorded music and elicited significantly higher and more consistent amygdala activity. Higher activity was also found in a broader neural network for emotion processing during live compared with recorded music, including a predominance of aversive coding in the ventral striatum while listening to unpleasant music, and involvement of the thalamic pulvinar nucleus, presumably for regulating attentional and cortical flow mechanisms. Live music also stimulated a dense functional neural network with the amygdala as a central node influencing other brain systems. Finally, only live music showed a strong, positive coupling between features of the musical performance and brain activity in listeners, pointing to real-time, dynamic entrainment processes.
Affiliation(s)
- Wiebke Trost: Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Caitlyn Trevor: Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Natalia Fernandez: Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Florence Steiner: Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Sascha Frühholz: Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich 8057, Switzerland; Department of Psychology, University of Oslo, Oslo 0373, Norway
8
Whitehead JC, Spiousas I, Armony JL. Individual differences in the evaluation of ambiguous visual and auditory threat-related expressions. Eur J Neurosci 2024; 59:370-393. PMID: 38185821; DOI: 10.1111/ejn.16220.
Abstract
This study investigated the neural correlates of judging ambiguous auditory and visual threat-related information, and the influence of state anxiety on this process. Healthy subjects were scanned using a fast, high-resolution functional magnetic resonance imaging (fMRI) multiband sequence while they performed a two-alternative forced-choice emotion judgement task on faces and vocal utterances conveying explicit anger or fear, as well as ambiguous ones. Critically, the ambiguous stimuli were specific to each subject, obtained through a morphing procedure and selected prior to scanning in a perceptual decision-making task. Behavioural results confirmed greater task difficulty for the subject-specific ambiguous stimuli and also revealed a judgement bias toward fear in the visual modality and, to a lesser extent, toward anger in the auditory modality. Imaging results showed increased activity in regions of the salience and frontoparietal control networks and deactivation in areas of the default mode network for ambiguous, relative to explicit, expressions. In contrast, the right amygdala responded more strongly to explicit stimuli; interestingly, its response to the same ambiguous stimulus depended on the subjective judgement of the expression. Finally, we found that behavioural and neural differences between ambiguous and explicit expressions decreased as a function of state anxiety scores. Taken together, our results show that behavioural and brain responses to emotional expressions are determined not only by emotional clarity but also by modality and by the subject's perception of the emotion expressed, and that some of these responses are modulated by state anxiety levels.
Affiliation(s)
- Jocelyne C Whitehead: Human Neuroscience, Douglas Mental Health University Institute, Verdun, Quebec, Canada; BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada; Integrated Program in Neuroscience, McGill University, Montreal, Quebec, Canada
- Ignacio Spiousas: BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada; Laboratorio Interdisciplinario del Tiempo y la Experiencia (LITERA), CONICET, Universidad de San Andrés, Victoria, Argentina
- Jorge L Armony: Human Neuroscience, Douglas Mental Health University Institute, Verdun, Quebec, Canada; BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada; Laboratorio Interdisciplinario del Tiempo y la Experiencia (LITERA), CONICET, Universidad de San Andrés, Victoria, Argentina; Department of Psychiatry, McGill University, Montreal, Quebec, Canada
9
Ma G, Ma X. Music Intervention for older adults: Evidence Map of Systematic Reviews. Medicine (Baltimore) 2023; 102:e36016. PMID: 38050267; PMCID: PMC10695625; DOI: 10.1097/md.0000000000036016.
Abstract
BACKGROUND With the aging of the population, the health problems of older adults have received increasing attention. As a non-pharmacological intervention, music has been widely used in clinical practice to improve the physical and mental health of older adults. This article provides a comprehensive review of existing systematic reviews on the health effects of music interventions for older adults in clinical practice. METHODS The study used the evidence-map methodology, identifying all relevant systematic reviews and meta-analyses in 7 electronic databases from their inception to November 2022. Methodological quality was appraised using AMSTAR 2. RESULTS Sixty-seven studies were identified, the majority published in the past 5 years. The effects of music interventions were categorized into 4 groups of health outcomes: positive (58 results), potentially positive (4 results), inconclusive (2 results), and no effect (3 results). The health outcomes were further classified into 5 groups: psychological well-being, cognitive functioning, physiological responses, quality of life, and overall well-being. CONCLUSIONS Music interventions for older adults can have positive or potentially positive effects on health outcomes across psychological well-being, cognitive functioning, physiological responses, quality of life, and overall well-being, although some studies yielded inconclusive results or no effect. The study offers healthcare professionals a visual, evidence-based resource on the use of music interventions in promoting health and addressing various conditions in older adults.
Affiliation(s)
- Guiyue Ma: School of Nursing, Anhui University of Chinese Medicine, Hefei, China; School of Nursing, Zhejiang Chinese Medical University, Hangzhou, China
- Xiaoqin Ma: School of Nursing, Zhejiang Chinese Medical University, Hangzhou, China
10
Lin TH, Liao YC, Tam KW, Chan L, Hsu TH. Effects of music therapy on cognition, quality of life, and neuropsychiatric symptoms of patients with dementia: A systematic review and meta-analysis of randomized controlled trials. Psychiatry Res 2023; 329:115498. PMID: 37783097; DOI: 10.1016/j.psychres.2023.115498.
Abstract
Dementia is a major cause of disability and dependency. Pharmacological interventions are commonly provided to patients with dementia to delay the deterioration of cognitive function but cannot alter the course of the disease, so nonpharmacological interventions are attracting increasing scholarly interest. In accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement, we assessed the effectiveness of music-based therapies on the cognition, quality of life (QoL), and neuropsychiatric symptoms of patients with dementia through a systematic review and meta-analysis of randomized controlled trials (RCTs). The PubMed, Embase, and Cochrane databases were searched for reports of RCTs examining the effectiveness of music-based therapies for dementia published as of April 2023. A total of 674 articles were screened, and 22 trials from 21 studies (1780 patients) met the eligibility criteria. Compared with non-music therapies, music-based therapies significantly improved cognition in 15 trials, QoL in 11 trials, and neuropsychiatric symptoms in six trials. In conclusion, music-based therapy is recognized as a safe and effective alternative approach for patients with dementia.
Affiliation(s)
- Ting-Han Lin: School of Medicine, College of Medicine, Taipei Medical University, Taipei City, Taiwan
- Yin-Chun Liao: Center for General Education, Chung Shan Medical University, Taichung City, Taiwan
- Ka-Wai Tam: Division of General Surgery, Department of Surgery, Taipei Medical University-Shuang-Ho Hospital, New Taipei City, Taiwan; Department of Surgery, School of Medicine, College of Medicine, Taipei Medical University, Taipei City, Taiwan; Cochrane Taiwan, Taipei Medical University, Taipei City, Taiwan; Center for Evidence-Based Medicine, College of Medicine, Taipei Medical University, Taipei City, Taiwan
- Lung Chan: Department of Neurology, Taipei Medical University-Shuang-Ho Hospital, New Taipei City, Taiwan; Department of Neurology, School of Medicine, College of Medicine, Taipei Medical University, Taipei City, Taiwan
- Tzu-Herng Hsu: Department of Physical Medicine and Rehabilitation, Taipei Medical University-Shuang-Ho Hospital, New Taipei City, Taiwan; Department of Physical Medicine and Rehabilitation, School of Medicine, College of Medicine, Taipei Medical University, Taipei City, Taiwan
11
Belden A, Quinci MA, Geddes M, Donovan NJ, Hanser SB, Loui P. Functional Organization of Auditory and Reward Systems in Aging. J Cogn Neurosci 2023; 35:1570-1592. PMID: 37432735; PMCID: PMC10513766; DOI: 10.1162/jocn_a_02028.
Abstract
The intrinsic organization of functional brain networks is known to change with age, and is affected by perceptual input and task conditions. Here, we compare functional activity and connectivity during music listening and rest between younger (n = 24) and older (n = 24) adults, using whole-brain regression, seed-based connectivity, and ROI-ROI connectivity analyses. As expected, activity and connectivity of auditory and reward networks scaled with liking during music listening in both groups. Younger adults show higher within-network connectivity of auditory and reward regions as compared with older adults, both at rest and during music listening, but this age-related difference at rest was reduced during music listening, especially in individuals who self-report high musical reward. Furthermore, younger adults showed higher functional connectivity between auditory network and medial prefrontal cortex that was specific to music listening, whereas older adults showed a more globally diffuse pattern of connectivity, including higher connectivity between auditory regions and bilateral lingual and inferior frontal gyri. Finally, connectivity between auditory and reward regions was higher when listening to music selected by the participant. These results highlight the roles of aging and reward sensitivity on auditory and reward networks. Results may inform the design of music-based interventions for older adults and improve our understanding of functional network dynamics of the brain at rest and during a cognitively engaging task.
Affiliation(s)
- Nancy J Donovan
- Brigham and Women's Hospital and Harvard Medical School, Boston, MA
12. Rafferty G, Brar G, Petrut M, Meagher D, O'Connell H, St John-Smith P. Banging the drum: evolutionary and cultural origins of music and its implications for psychiatry. BJPsych Bull 2023; 47:251-254. [PMID: 37313980 PMCID: PMC10764840 DOI: 10.1192/bjb.2023.44]
Abstract
There is growing interest in music-based therapies for mental and behavioural disorders. We begin by reviewing the evolutionary and cultural origins of music, then discuss the principles of evolutionary psychiatry, itself a growing field, and how it may apply to music. Finally, we offer some implications for the role of music and music-based therapies in clinical practice.
13. Qiu X, Wang S, Wang R, Zhang Y, Huang L. A multi-head residual connection GCN for EEG emotion recognition. Comput Biol Med 2023; 163:107126. [PMID: 37327757 DOI: 10.1016/j.compbiomed.2023.107126]
Abstract
Electroencephalography (EEG) emotion recognition is a crucial aspect of human-computer interaction. However, conventional neural networks have limitations in extracting profound EEG emotional features. This paper introduces a novel multi-head residual graph convolutional neural network (MRGCN) model that incorporates complex brain networks and graph convolution networks. The decomposition of multi-band differential entropy (DE) features exposes the temporal intricacy of emotion-linked brain activity, and the combination of short and long-distance brain networks can explore complex topological characteristics. Moreover, the residual-based architecture not only enhances performance but also augments classification stability across subjects. The visualization of brain network connectivity offers a practical technique for investigating emotional regulation mechanisms. The MRGCN model exhibits average classification accuracies of 95.8% and 98.9% for the DEAP and SEED datasets, respectively, highlighting its excellent performance and robustness.
Affiliation(s)
- Xiangkai Qiu
- College of Electronic and Optical Engineering & College of Flexible Electronics, Nanjing University of Posts and Telecommunications, Nanjing, China
- Shenglin Wang
- College of Electronic and Optical Engineering & College of Flexible Electronics, Nanjing University of Posts and Telecommunications, Nanjing, China
- Ruqing Wang
- College of Electronic and Optical Engineering & College of Flexible Electronics, Nanjing University of Posts and Telecommunications, Nanjing, China
- Yiling Zhang
- College of Electronic and Optical Engineering & College of Flexible Electronics, Nanjing University of Posts and Telecommunications, Nanjing, China
- Liya Huang
- College of Electronic and Optical Engineering & College of Flexible Electronics, Nanjing University of Posts and Telecommunications, Nanjing, China; National and Local Joint Engineering Laboratory of RF Integration and Micro-Assembly Technology, Nanjing, China
14. Voytenko S, Shanbhag S, Wenstrup J, Galazyuk A. Intracellular recordings reveal integrative function of the basolateral amygdala in acoustic communication. J Neurophysiol 2023; 129:1334-1343. [PMID: 37098994 PMCID: PMC10202475 DOI: 10.1152/jn.00103.2023]
Abstract
The amygdala, a brain center of emotional expression, contributes to appropriate behavior responses during acoustic communication. In support of that role, the basolateral amygdala (BLA) analyzes the meaning of vocalizations through the integration of multiple acoustic inputs with information from other senses and an animal's internal state. The mechanisms underlying this integration are poorly understood. This study focuses on the integration of vocalization-related inputs to the BLA from auditory centers during this processing. We used intracellular recordings of BLA neurons in unanesthetized big brown bats, which rely heavily on a complex vocal repertoire during social interactions. Postsynaptic and spiking responses of BLA neurons were recorded to three vocal sequences that are closely related to distinct behaviors (appeasement, low-level aggression, and high-level aggression) and have different emotional valence. Our novel findings are that most BLA neurons showed postsynaptic responses to one or more vocalizations (31 of 46) but that many fewer neurons showed spiking responses (8 of 46). The spiking responses were more selective than postsynaptic potential (PSP) responses. Furthermore, vocal stimuli associated with either positive or negative valence were similarly effective in eliciting excitatory postsynaptic potentials (EPSPs), inhibitory postsynaptic potentials (IPSPs), and spiking responses. This indicates that BLA neurons process both positive- and negative-valence vocal stimuli. The greater selectivity of spiking responses than PSP responses suggests an integrative role for processing within the BLA to enhance response specificity in acoustic communication.
NEW & NOTEWORTHY: The amygdala plays an important role in social communication by sound, but little is known about how it integrates diverse auditory inputs to form selective responses to social vocalizations. We show that BLA neurons receive inputs that are responsive to both negative- and positive-affect vocalizations, but their spiking outputs are fewer and highly selective for vocalization type. Our work demonstrates that BLA neurons perform an integrative function in shaping appropriate behavioral responses to social vocalizations.
Affiliation(s)
- Sergiy Voytenko
- Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio, United States
- Sharad Shanbhag
- Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio, United States
- Brain Health Research Institute, Kent State University, Kent, Ohio, United States
- Jeffrey Wenstrup
- Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio, United States
- Brain Health Research Institute, Kent State University, Kent, Ohio, United States
- Alexander Galazyuk
- Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio, United States
- Brain Health Research Institute, Kent State University, Kent, Ohio, United States
15. Belden A, Quinci MA, Geddes M, Donovan NJ, Hanser SB, Loui P. Functional Organization of Auditory and Reward Systems in Aging. bioRxiv [Preprint] 2023:2023.01.01.522417. [PMID: 36711696 PMCID: PMC9881869 DOI: 10.1101/2023.01.01.522417]
Abstract
The intrinsic organization of functional brain networks is known to change with age, and is affected by perceptual input and task conditions. Here, we compare functional activity and connectivity during music listening and rest between younger (N=24) and older (N=24) adults, using whole-brain regression, seed-based connectivity, and ROI-ROI connectivity analyses. As expected, activity and connectivity of auditory and reward networks scaled with liking during music listening in both groups. Younger adults show higher within-network connectivity of auditory and reward regions as compared to older adults, both at rest and during music listening, but this age-related difference at rest was reduced during music listening, especially in individuals who self-report high musical reward. Furthermore, younger adults showed higher functional connectivity between auditory network and medial prefrontal cortex (mPFC) that was specific to music listening, whereas older adults showed a more globally diffuse pattern of connectivity, including higher connectivity between auditory regions and bilateral lingual and inferior frontal gyri. Finally, connectivity between auditory and reward regions was higher when listening to music selected by the participant. These results highlight the roles of aging and reward sensitivity on auditory and reward networks. Results may inform the design of music-based interventions for older adults, and improve our understanding of functional network dynamics of the brain at rest and during a cognitively engaging task.
16. Witten E, Ryynanen J, Wisdom S, Tipp C, Chan SWY. Effects of soothing images and soothing sounds on mood and well-being. Br J Clin Psychol 2023; 62:158-179. [PMID: 36342851 DOI: 10.1111/bjc.12400]
Abstract
OBJECTIVES Mental health problems are increasing at an alarming rate, calling for more cost-effective and easily accessible interventions. Visual images and sounds depicting nature have been found to have positive effects on individuals' mood and well-being; however, the combined effects of images and sounds have scarcely been investigated. This study therefore aimed to compare the mood effects of viewing nature-related soothing images versus listening to soothing sounds versus a combination of both. METHODS In this study, 149 participants aged 18-83 years (M = 35.88, SD = 15.63; 72.5% female, 26.8% male, 0.7% transgender) were randomised into three intervention conditions: images only, sounds only, or combined (images and sounds). Baseline depressive and anxiety symptoms were indexed, and four outcome variables (positive affect, negative affect, serenity affect and depressive mood states) were measured pre- and post-intervention. RESULTS All participants, regardless of group, reported a decrease in negative affect, positive affect and depressive mood, as well as an increase in serenity affect (including feelings of being soothed). However, there were no group differences. Exploratory analyses found that individuals with higher baseline levels of depressive and anxiety symptoms experienced a greater reduction in negative affect and depressive mood state, as well as a larger increase in serenity affect. CONCLUSIONS These findings provide preliminary evidence that, with further research and development, images and sounds depicting nature could be developed into an effective tool to improve mood and well-being.
17. Osawa SI, Suzuki K, Asano E, Ukishiro K, Agari D, Kakinuma K, Kochi R, Jin K, Nakasato N, Tominaga T. Causal involvement of medial inferior frontal gyrus of non-dominant hemisphere in higher order auditory perception: a single case study. Cortex 2023; 163:57-65. [DOI: 10.1016/j.cortex.2023.02.007]
Abstract
The medial side of the operculum is invisible from the lateral surface of the cerebral cortex, and direct evidence of its functions remains scarce. Non-invasive and invasive studies have established the roles of the peri-sylvian area, including the inferior frontal gyrus (IFG) and superior temporal gyrus of the language-dominant hemisphere, in semantic processing during verbal communication. Within the non-dominant hemisphere, however, there was little evidence of function beyond pitch or prosody processing. Here we add direct evidence for a function of the non-dominant hemisphere: causal involvement of the medial IFG in subjective auditory perception that is modulated by the context of the condition, a contribution to higher order auditory perception. This phenomenon was clearly distinguished from absolute, invariant pitch perception, which is regarded as lower order auditory perception. Electrical stimulation of the medial surface of the pars triangularis of the IFG in the non-dominant hemisphere, via a depth electrode in an epilepsy patient, rapidly and reproducibly elicited perception of pitch changes in auditory input. Pitches were perceived as either higher or lower than those heard without stimulation, and there was no selectivity for sound type. The patient perceived sounds as higher when she had greater control over the situation, with her eyes open and self-cues, and as lower when her eyes were closed and cues came from the investigator. Time-frequency analysis of electrocorticography signals during auditory naming demonstrated medial IFG activation, characterized by low-gamma band augmentation during her own vocal response. The overall evidence provides a neural substrate for altered perception of others' vocal tones according to the context of the condition.
18. Zhang M, Siegle GJ. Linking Affective and Hearing Sciences: Affective Audiology. Trends Hear 2023; 27:23312165231208377. [PMID: 37904515 PMCID: PMC10619363 DOI: 10.1177/23312165231208377]
Abstract
A growing number of health-related sciences, including audiology, have increasingly recognized the importance of affective phenomena. However, in audiology, affective phenomena are mostly studied as a consequence of hearing status. This review first addresses anatomical and functional bidirectional connections between auditory and affective systems that support a reciprocal affect-hearing relationship. We then postulate, by focusing on four practical examples (hearing public campaigns, hearing intervention uptake, thorough hearing evaluation, and tinnitus), that some important challenges in audiology are likely affect-related and that potential solutions could be developed by inspiration from affective science advances. We continue by introducing useful resources from affective science that could help audiology professionals learn about the wide range of affective constructs and integrate them into hearing research and clinical practice in structured and applicable ways. Six important considerations for good quality affective audiology research are summarized. We conclude that it is worthwhile and feasible to explore the explanatory power of emotions, feelings, motivations, attitudes, moods, and other affective processes in depth when trying to understand and predict how people with hearing difficulties perceive, react, and adapt to their environment.
Affiliation(s)
- Min Zhang
- Shanghai Key Laboratory of Clinical Geriatric Medicine, Huadong Hospital, Fudan University, Shanghai, China
- Greg J. Siegle
- Department of Psychiatry, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA, USA
19. Cheng L, Chiu Y, Lin Y, Li W, Hong T, Yang C, Shih C, Yeh T, Tseng WI, Yu H, Hsieh J, Chen L. Long-term musical training induces white matter plasticity in emotion and language networks. Hum Brain Mapp 2022; 44:5-17. [PMID: 36005832 PMCID: PMC9783470 DOI: 10.1002/hbm.26054]
Abstract
Numerous studies have reported that long-term musical training can affect brain functionality and induce structural alterations in the brain. Singing is a form of vocal musical expression with an unparalleled capacity for communicating emotion; however, there has been relatively little research on neuroplasticity at the network level in vocalists (i.e., noninstrumental musicians). Our objective in this study was to elucidate changes in the neural network architecture following long-term training in the musical arts. We employed a framework based on graph theory to depict the connectivity and efficiency of structural networks in the brain, based on diffusion-weighted images obtained from 35 vocalists, 27 pianists, and 33 nonmusicians. Our results revealed that musical training (both voice and piano) could enhance connectivity among emotion-related regions of the brain, such as the amygdala. We also discovered that voice training reshaped the architecture of experience-dependent networks, such as those involved in vocal motor control, sensory feedback, and language processing. It appears that vocal-related changes in areas such as the insula, paracentral lobule, supramarginal gyrus, and putamen are associated with functional segregation, multisensory integration, and enhanced network interconnectivity. These results suggest that long-term musical training can strengthen or prune white matter connectivity networks in an experience-dependent manner.
Affiliation(s)
- Li-Kai Cheng
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan
- Yu-Hsien Chiu
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan
- Ying-Chia Lin
- Center for Advanced Imaging Innovation and Research (CAIR), NYU Grossman School of Medicine, New York, New York, USA; Center for Biomedical Imaging, Department of Radiology, NYU Grossman School of Medicine, New York, New York, USA
- Wei-Chi Li
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan
- Tzu-Yi Hong
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan
- Ching-Ju Yang
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan
- Chung-Heng Shih
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan
- Tzu-Chen Yeh
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan
- Wen-Yih Isaac Tseng
- Institute of Medical Device and Imaging, National Taiwan University College of Medicine, Taipei, Taiwan
- Hsin-Yen Yu
- Graduate Institute of Arts and Humanities Education, Taipei National University of the Arts, Taipei, Taiwan
- Jen-Chuen Hsieh
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan; Brain Research Center, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Biological Science and Technology, College of Biological Science and Technology, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Li-Fen Chen
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan; Brain Research Center, National Yang Ming Chiao Tung University, Taipei, Taiwan
20. Steiner F, Fernandez N, Dietziker J, Stämpfli SP, Seifritz E, Rey A, Frühholz FS. Affective speech modulates a cortico-limbic network in real time. Prog Neurobiol 2022; 214:102278. [DOI: 10.1016/j.pneurobio.2022.102278]
21. Bálint A, Eleőd H, Magyari L, Kis A, Gácsi M. Differences in dogs' event-related potentials in response to human and dog vocal stimuli: a non-invasive study. R Soc Open Sci 2022; 9:211769. [PMID: 35401994 PMCID: PMC8984299 DOI: 10.1098/rsos.211769]
Abstract
Recent advances in the field of canine neuro-cognition allow for the non-invasive study of brain mechanisms in family dogs. Considering the striking similarities between dog and human (infant) socio-cognition at the behavioural level, both similarities and differences in the neural background can be of particular relevance. The current study investigates the brain responses of n = 17 family dogs to human and conspecific emotional vocalizations using a fully non-invasive event-related potential (ERP) paradigm. We found that, similarly to humans, dogs show a differential ERP response depending on the species of the caller, demonstrated by a more positive ERP response to human vocalizations compared with dog vocalizations in a time window between 250 and 650 ms after stimulus onset. A later time window, between 800 and 900 ms, also revealed a valence-sensitive ERP response in interaction with the species of the caller. Our results are, to our knowledge, the first ERP evidence of the species sensitivity of vocal neural processing in dogs, along with indications of valence-sensitive processes in later post-stimulus time periods.
Affiliation(s)
- Anna Bálint
- MTA-ELTE Comparative Ethology Research Group, Budapest, Hungary
- Department of Ethology, ELTE Eötvös Loránd University, Budapest, Hungary
- Huba Eleőd
- Department of Ethology, ELTE Eötvös Loránd University, Budapest, Hungary
- Doctoral School of Biology, Institute of Biology, ELTE Eötvös Loránd University, Budapest, Hungary
- Lilla Magyari
- MTA-ELTE ‘Lendület’ Neuroethology of Communication Research Group, Hungarian Academy of Sciences, ELTE Eötvös Loránd University, Budapest, Hungary
- Department of Social Studies, University of Stavanger, Stavanger, Norway
- Anna Kis
- Department of Ethology, ELTE Eötvös Loránd University, Budapest, Hungary
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
- Márta Gácsi
- MTA-ELTE Comparative Ethology Research Group, Budapest, Hungary
- Department of Ethology, ELTE Eötvös Loránd University, Budapest, Hungary
22. Schmidt NM, Hennig J, Munk AJL. Event-Related Potentials in Women on the Pill: Neural Correlates of Positive and Erotic Stimulus Processing in Oral Contraceptive Users. Front Neurosci 2022; 15:798823. [PMID: 35058744 PMCID: PMC8764149 DOI: 10.3389/fnins.2021.798823]
Abstract
Background/Aims: Exposure to positive emotional cues with and without reproductive significance plays a crucial role in daily life, well-being, and mental health. While possible adverse effects of oral contraceptive (OC) use on female mental and sexual health are widely discussed, neural processing of positive emotional stimuli has not been systematically investigated in association with OC use. Considering reported effects on mood, well-being and sexual function, and proposed associations with depression, it was hypothesized that OC users would show reduced neural reactivity toward positive and erotic emotional stimuli during early as well as later stages of emotional processing, and would also rate these stimuli as less pleasant and less arousing compared with naturally cycling (NC) women. Method: Sixty-two female subjects (29 NC and 33 OC) were assessed at three time points across the natural menstrual cycle and corresponding time points of the OC regimen. Early (early posterior negativity, EPN) and late (late positive potential, LPP) event-related potentials in reaction to positive, erotic and neutral stimuli were collected during an Emotional Picture Stroop Paradigm (EPSP). At each appointment, subjects provided saliva samples for analysis of gonadal steroid concentration. Valence and arousal ratings were collected at the last appointment. Results: Oral contraceptive users had significantly lower endogenous estradiol and progesterone concentrations than NC women. No significant group differences in either subjective stimulus evaluations or neural reactivity toward positive and erotic emotional stimuli were observed. For the OC group, LPP amplitudes in reaction to erotic vs. neutral pictures differed significantly between measurement times across the OC regimen. Discussion: In this study, no evidence of altered neural reactivity toward positive and erotic stimuli in OC users compared with NC women was found. Possible confounding factors and lines for future research are elaborated and discussed.
Affiliation(s)
- Norina M. Schmidt
- Department of Differential and Biological Psychology, University of Giessen, Giessen, Germany
23. Modulation of Auditory Perception Laterality under Anxiety and Depression Conditions. Symmetry (Basel) 2021. [DOI: 10.3390/sym14010024]
Abstract
The objective of this work is to confirm the asymmetry in non-linguistic auditory perception, as well as the influence of anxiety-depressive disorders on it. Eighty-six people were recruited into the emotional well-being group, fifty-six into the anxiety group, fourteen into the depression group, and seventy-seven into the mixed group. In each group, audiograms were obtained from both ears and the differences were statistically analyzed. Differences in hearing sensitivity were found between the two ears in the general population, and these differences increased in people with anxiety-depressive disorders. In these disorders, the right ear showed greater hearing loss than the left, with peaks of hyper-hearing at 4000 Hz in the anxiety subgroup and hearing loss in the depression subgroup. In relation to anxiety, a 4:8 pattern appeared in the right ear when the person had suffered acute stress in the 2 days prior to audiometry, and in both ears if the stress had occurred 3-30 days beforehand. In conclusion, the left-ear advantage in auditory perception was increased with these disorders, with a hyper-hearing peak in anxiety and hearing loss in depression.
24. The evolutionary benefit of less-credible affective musical signals for emotion induction during storytelling. Behav Brain Sci 2021; 44:e118. [PMID: 34588032 DOI: 10.1017/s0140525x20001004]
Abstract
The credible signaling theory underexplains the evolutionary added value of less-credible affective musical signals compared to vocal signals. The theory might be extended to account for the motivation for, and consequences of, culturally decontextualizing a biologically contextualized signal. Musical signals are twofold, communicating "emotional fiction" alongside biological meaning, and could have filled an adaptive need for affect induction during storytelling.
25. Holz N, Larrouy-Maestri P, Poeppel D. The paradoxical role of emotional intensity in the perception of vocal affect. Sci Rep 2021; 11:9663. [PMID: 33958630 PMCID: PMC8102532 DOI: 10.1038/s41598-021-88431-0]
Abstract
Vocalizations including laughter, cries, moans, or screams constitute a potent source of information about the affective states of others. It is typically conjectured that the higher the intensity of the expressed emotion, the better the classification of affective information. However, attempts to map the relation between affective intensity and inferred meaning are controversial. Based on a newly developed stimulus database of carefully validated non-speech expressions ranging across the entire intensity spectrum from low to peak, we show that the intuition is false. Based on three experiments (N = 90), we demonstrate that intensity in fact has a paradoxical role. Participants were asked to rate and classify the authenticity, intensity and emotion, as well as valence and arousal of the wide range of vocalizations. Listeners are clearly able to infer expressed intensity and arousal; in contrast, and surprisingly, emotion category and valence have a perceptual sweet spot: moderate and strong emotions are clearly categorized, but peak emotions are maximally ambiguous. This finding, which converges with related observations from visual experiments, raises interesting theoretical challenges for the emotion communication literature.
Affiliation(s)
- N Holz
- Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- P Larrouy-Maestri
- Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Max Planck NYU Center for Language, Music, and Emotion, Frankfurt/M, Germany
- D Poeppel
- Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Max Planck NYU Center for Language, Music, and Emotion, Frankfurt/M, Germany
- Department of Psychology, New York University, New York, NY, USA
26. Facial expressions can be categorized along the upper-lower facial axis, from a perceptual perspective. Atten Percept Psychophys 2021; 83:2159-2173. [PMID: 33759116 DOI: 10.3758/s13414-021-02281-6]
Abstract
A critical question, fundamental for building models of emotion, is how to categorize emotions. Previous studies have typically taken one of two approaches: (a) they focused on the pre-perceptual visual cues, how salient facial features or configurations were displayed; or (b) they focused on the post-perceptual affective experiences, how emotions affected behavior. In this study, we attempted to group emotions at a peri-perceptual processing level: it is well known that humans perceive different facial expressions differently, therefore, can we classify facial expressions into distinct categories in terms of their perceptual similarities? Here, using a novel non-lexical paradigm, we assessed the perceptual dissimilarities between 20 facial expressions using reaction times. Multidimensional-scaling analysis revealed that facial expressions were organized predominantly along the upper-lower face axis. Cluster analysis of behavioral data delineated three superordinate categories, and eye-tracking measurements validated these clustering results. Interestingly, these superordinate categories can be conceptualized according to how facial displays interact with acoustic communications: One group comprises expressions that have salient mouth features. They likely link to species-specific vocalization, for example, crying, laughing. The second group comprises visual displays with diagnosing features in both the mouth and the eye regions. They are not directly articulable but can be expressed prosodically, for example, sad, angry. Expressions in the third group are also whole-face expressions but are completely independent of vocalization, and likely being blends of two or more elementary expressions. We propose a theoretical framework to interpret the tripartite division in which distinct expression subsets are interpreted as successive phases in an evolutionary chain.
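The pipeline this abstract describes (pairwise dissimilarities from reaction times, multidimensional scaling, then cluster analysis) can be sketched in a few lines. The sketch below is illustrative only: random toy dissimilarities stand in for the reaction-time data, and the six expressions and the three-cluster solution are assumptions, not the study's materials.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import AgglomerativeClustering

# Toy symmetric dissimilarity matrix for 6 hypothetical expressions
# (larger value = longer discrimination RT = more perceptually distinct).
rng = np.random.default_rng(0)
n = 6
d = rng.uniform(0.2, 1.0, size=(n, n))
d = (d + d.T) / 2.0        # symmetrize, as RT-based dissimilarities would be
np.fill_diagonal(d, 0.0)   # zero self-dissimilarity

# Embed the expressions in a 2-D perceptual space from the dissimilarities.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(d)

# Group the embedded expressions into superordinate categories.
labels = AgglomerativeClustering(n_clusters=3).fit_predict(coords)

print(coords.shape)                          # (6, 2)
print(sorted(int(l) for l in set(labels)))   # [0, 1, 2]
```

In the study itself, eye-tracking measurements were then used to validate the clusters recovered from the behavioral dissimilarities.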
27
Swanborough H, Staib M, Frühholz S. Neurocognitive dynamics of near-threshold voice signal detection and affective voice evaluation. SCIENCE ADVANCES 2020; 6:6/50/eabb3884. [PMID: 33310844 PMCID: PMC7732184 DOI: 10.1126/sciadv.abb3884] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/20/2020] [Accepted: 10/29/2020] [Indexed: 05/10/2023]
Abstract
Communication and voice signal detection in noisy environments are universal tasks for many species. The fundamental problem of detecting voice signals in noise (VIN) is underinvestigated especially in its temporal dynamic properties. We investigated VIN as a dynamic signal-to-noise ratio (SNR) problem to determine the neurocognitive dynamics of subthreshold evidence accrual and near-threshold voice signal detection. Experiment 1 showed that dynamic VIN, including a varying SNR and subthreshold sensory evidence accrual, is superior to similar conditions with nondynamic SNRs or with acoustically matched sounds. Furthermore, voice signals with affective meaning have a detection advantage during VIN. Experiment 2 demonstrated that VIN is driven by an effective neural integration in an auditory cortical-limbic network at and beyond the near-threshold detection point, which is preceded by activity in subcortical auditory nuclei. This demonstrates the superior recognition advantage of communication signals in dynamic noise contexts, especially when carrying socio-affective meaning.
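The signal-to-noise ratio (SNR) manipulation central to this voice-in-noise paradigm is easy to make concrete: scale the noise so that the mixture has a requested SNR. The sketch below is a minimal illustration; the pure tone standing in for a voice signal and the -6 dB target are assumptions, not the study's stimuli.

```python
import numpy as np

def mix_at_snr(signal, noise, snr_db):
    """Scale `noise` so that signal + noise has the requested SNR in dB."""
    p_sig = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    # SNR_dB = 10 * log10(P_signal / P_noise)  =>  required noise power:
    target_p_noise = p_sig / (10.0 ** (snr_db / 10.0))
    return signal + noise * np.sqrt(target_p_noise / p_noise)

rng = np.random.default_rng(1)
tone = np.sin(2 * np.pi * 220 * np.linspace(0, 1, 16000))  # stand-in "voice"
noise = rng.standard_normal(16000)

mix = mix_at_snr(tone, noise, snr_db=-6.0)  # voice 6 dB below the noise
residual = mix - tone                       # the embedded noise component
snr = 10 * np.log10(np.mean(tone ** 2) / np.mean(residual ** 2))
print(round(snr, 1))  # -6.0
```

A dynamic-SNR condition, as in Experiment 1, would vary `snr_db` over the course of the stimulus rather than holding it fixed.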
Affiliation(s)
- Huw Swanborough
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland.
- Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Matthias Staib
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Sascha Frühholz
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Department of Psychology, University of Oslo, Oslo, Norway

28
Nonverbal auditory communication - Evidence for integrated neural systems for voice signal production and perception. Prog Neurobiol 2020; 199:101948. [PMID: 33189782 DOI: 10.1016/j.pneurobio.2020.101948] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2020] [Revised: 10/12/2020] [Accepted: 11/04/2020] [Indexed: 12/24/2022]
Abstract
While humans have developed a sophisticated and unique system of verbal auditory communication, they also share a more common and evolutionarily important nonverbal channel of voice signaling with many other mammalian and vertebrate species. This nonverbal communication is mediated and modulated by the acoustic properties of a voice signal, and is a powerful - yet often neglected - means of sending and perceiving socially relevant information. From the viewpoint of dyadic (involving a sender and a signal receiver) voice signal communication, we discuss the integrated neural dynamics in primate nonverbal voice signal production and perception. Most previous neurobiological models of voice communication modelled these neural dynamics from the limited perspective of either voice production or perception, largely disregarding the neural and cognitive commonalities of both functions. Taking a dyadic perspective on nonverbal communication, however, it turns out that the neural systems for voice production and perception are surprisingly similar. Based on the interdependence of both production and perception functions in communication, we first propose a re-grouping of the neural mechanisms of communication into auditory, limbic, and paramotor systems, with special consideration for a subsidiary basal-ganglia-centered system. Second, we propose that the similarity in the neural systems involved in voice signal production and perception is the result of the co-evolution of nonverbal voice production and perception systems promoted by their strong interdependence in dyadic interactions.
29
Belekhova MG, Kenigfest NB, Chmykhova NM. Evolutionary Formation and Functional Significance of the Core–Belt Pattern of Neural Organization of Rostral Auditory Centers in Vertebrates. J EVOL BIOCHEM PHYS+ 2020. [DOI: 10.1134/s0022093020040018] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]

30
Yu N, Cai J, Xu X, Yang Y, Sun J. Masking effects on subjective annoyance to aircraft flyover noise: An fMRI study. Hum Brain Mapp 2020; 41:3284-3294. [PMID: 32379391 PMCID: PMC7375093 DOI: 10.1002/hbm.25016] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2019] [Revised: 03/06/2020] [Accepted: 04/08/2020] [Indexed: 12/16/2022] Open
Abstract
Sound masking, a new noise control technology, has been applied in recent years to improve the subjective perception of noise. However, the neural mechanisms underlying this technology are still unclear. In this study, 18 healthy subjects were recruited to undergo subjective annoyance assessments and fMRI scanning with aircraft noise and masked aircraft noise. The results showed that noise annoyance was associated with deficient functional connectivity between the anterior cingulate cortex (ACC) and prefrontal cortex and with heightened activation in the ACC, which might be explained as compensation. Sound masking led to significantly stronger activation in the left medial frontal cortex and right medial orbital frontal cortex, which was associated with the happy emotion induced by sound masking. This study offers new insights into the underlying neural mechanisms of sound masking effects.
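Functional connectivity of the kind reported here, for example between ACC and prefrontal cortex, is commonly quantified as the Pearson correlation between ROI time series. A minimal sketch on synthetic data; the ROI names, coupling strength, and series length are hypothetical:

```python
import numpy as np

def functional_connectivity(ts_a, ts_b):
    """ROI-to-ROI functional connectivity as a Pearson correlation."""
    return np.corrcoef(ts_a, ts_b)[0, 1]

# Synthetic ROI time series: two regions sharing a common driver
# (hypothetical "ACC" and "PFC"), plus an uncoupled control region.
rng = np.random.default_rng(4)
driver = rng.standard_normal(300)
acc = driver + 0.8 * rng.standard_normal(300)
pfc = driver + 0.8 * rng.standard_normal(300)
control = rng.standard_normal(300)

coupled = functional_connectivity(acc, pfc)
uncoupled = functional_connectivity(acc, control)
print(bool(coupled > uncoupled))  # True: coupled ROIs correlate more strongly
```

"Deficient" connectivity in a patient or condition contrast would then correspond to a reliably lower correlation in one group or condition than another.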
Affiliation(s)
- Nishuai Yu
- School of Environmental Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jun Cai
- School of Environmental Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xuanyue Xu
- School of Environmental Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yining Yang
- School of Environmental Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Junfeng Sun
- Shanghai Med-X Engineering Research Center, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Brain Science and Technology Research Center, Shanghai Jiao Tong University, Shanghai, China

31
Pralus A, Belfi A, Hirel C, Lévêque Y, Fornoni L, Bigand E, Jung J, Tranel D, Nighoghossian N, Tillmann B, Caclin A. Recognition of musical emotions and their perceived intensity after unilateral brain damage. Cortex 2020; 130:78-93. [PMID: 32645502 DOI: 10.1016/j.cortex.2020.05.015] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2019] [Revised: 05/27/2020] [Accepted: 05/29/2020] [Indexed: 10/24/2022]
Abstract
For the hemispheric laterality of emotion processing in the brain, two competing hypotheses are currently still debated. The first hypothesis suggests a greater involvement of the right hemisphere in emotion perception, whereas the second hypothesis suggests different involvements of each hemisphere as a function of the valence of the emotion. These hypotheses are based on findings for facial and prosodic emotion perception. Investigating emotion perception for other stimuli, such as music, should provide further insight and potentially help to disentangle these two hypotheses. The present study investigated musical emotion perception in patients with unilateral right brain damage (RBD, n = 16) or left brain damage (LBD, n = 16), as well as in matched healthy comparison participants (n = 28). The experimental task required explicit recognition of musical emotions as well as ratings on the perceived intensity of the emotion. Compared to matched comparison participants, musical emotion recognition was impaired only in LBD participants, suggesting a potential specificity of the left hemisphere for explicit emotion recognition in musical material. In contrast, intensity ratings of musical emotions revealed that RBD patients underestimated the intensity of negative emotions compared to positive emotions, while LBD patients and comparison participants did not show this pattern. To control for a potential generalized emotion deficit for other types of stimuli, we also tested facial emotion recognition in the same patients and their matched healthy comparison participants. This revealed that emotion recognition after brain damage might depend on the stimulus category or modality used. These results are in line with the hypothesis of a deficit of emotion perception depending on lesion laterality and valence in brain-damaged participants.
The present findings provide critical information to disentangle the currently debated competing hypotheses and thus allow for a better characterization of the involvement of each hemisphere for explicit emotion recognition and their perceived intensity.
Affiliation(s)
- Agathe Pralus
- Lyon Neuroscience Research Center; CNRS, UMR5292; INSERM, U1028; Lyon, France; University Lyon 1, Lyon, France.
- Amy Belfi
- Department of Psychological Science, Missouri University of Science and Technology, Rolla, MO, USA
- Catherine Hirel
- Lyon Neuroscience Research Center; CNRS, UMR5292; INSERM, U1028; Lyon, France; University Lyon 1, Lyon, France; Hôpital Neurologique Pierre Wertheimer, Hospices Civils de Lyon, Bron, France
- Yohana Lévêque
- Lyon Neuroscience Research Center; CNRS, UMR5292; INSERM, U1028; Lyon, France; University Lyon 1, Lyon, France
- Lesly Fornoni
- Lyon Neuroscience Research Center; CNRS, UMR5292; INSERM, U1028; Lyon, France; University Lyon 1, Lyon, France
- Emmanuel Bigand
- LEAD, CNRS, UMR 5022, University of Bourgogne, Dijon, France
- Julien Jung
- Lyon Neuroscience Research Center; CNRS, UMR5292; INSERM, U1028; Lyon, France; University Lyon 1, Lyon, France; Hôpital Neurologique Pierre Wertheimer, Hospices Civils de Lyon, Bron, France
- Daniel Tranel
- Department of Neurology, University of Iowa, Iowa City, IA, USA
- Norbert Nighoghossian
- University Lyon 1, Lyon, France; Hôpital Neurologique Pierre Wertheimer, Hospices Civils de Lyon, Bron, France; CREATIS, CNRS, UMR5220, INSERM, U1044, University Lyon 1, France
- Barbara Tillmann
- Lyon Neuroscience Research Center; CNRS, UMR5292; INSERM, U1028; Lyon, France; University Lyon 1, Lyon, France
- Anne Caclin
- Lyon Neuroscience Research Center; CNRS, UMR5292; INSERM, U1028; Lyon, France; University Lyon 1, Lyon, France

32
Gruber T, Debracque C, Ceravolo L, Igloi K, Marin Bosch B, Frühholz S, Grandjean D. Human Discrimination and Categorization of Emotions in Voices: A Functional Near-Infrared Spectroscopy (fNIRS) Study. Front Neurosci 2020; 14:570. [PMID: 32581695 PMCID: PMC7290129 DOI: 10.3389/fnins.2020.00570] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2020] [Accepted: 05/08/2020] [Indexed: 11/24/2022] Open
Abstract
Functional near-infrared spectroscopy (fNIRS) is a neuroimaging tool that has recently been used in a variety of cognitive paradigms. Yet, it remains unclear whether fNIRS is suitable for studying complex cognitive processes such as categorization or discrimination. Previously, functional imaging has suggested a role of both inferior frontal cortices in attentive decoding and cognitive evaluation of emotional cues in human vocalizations. Here, we extended paradigms used in functional magnetic resonance imaging (fMRI) to investigate the suitability of fNIRS to study frontal lateralization of human emotion vocalization processing during explicit and implicit categorization and discrimination, using mini-blocks and event-related stimuli. Participants heard speech-like but semantically meaningless pseudowords spoken in various tones and evaluated them based on their emotional or linguistic content. Behaviorally, participants were faster to discriminate than to categorize, and processed the linguistic content of stimuli faster than the emotional content. Interactions between condition (emotion/word), task (discrimination/categorization), and emotion content (anger, fear, neutral) influenced accuracy and reaction time. At the brain level, we found a modulation of the oxy-Hb changes in the inferior frontal gyrus (IFG) depending on condition, task, emotion, and hemisphere (right or left), highlighting the involvement of the right hemisphere in processing fear stimuli, and of both hemispheres in processing anger stimuli. Our results show that fNIRS is suitable for studying vocal emotion evaluation, fostering its application to complex cognitive paradigms.
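Frontal lateralization effects such as the hemisphere-dependent IFG responses reported here are often summarized with a laterality index, LI = (L - R) / (L + R). A minimal sketch with hypothetical oxy-Hb response estimates; the numbers are made up for illustration, not the study's data:

```python
def laterality_index(left, right):
    """Conventional laterality index: +1 = fully left-lateralized,
    -1 = fully right-lateralized, 0 = perfectly bilateral."""
    return (left - right) / (left + right)

# Hypothetical mean oxy-Hb response estimates (arbitrary units)
# for left vs. right IFG under two conditions.
print(round(laterality_index(0.8, 0.2), 2))  # 0.6  -> left-dominant
print(round(laterality_index(0.3, 0.9), 2))  # -0.5 -> right-dominant
```

An index near zero, as for the anger stimuli described above, would indicate that both hemispheres contribute comparably.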
Affiliation(s)
- Thibaud Gruber
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology and Educational Sciences and Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
- Cognitive Science Center, University of Neuchâtel, Neuchâtel, Switzerland
- Coralie Debracque
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology and Educational Sciences and Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
- Leonardo Ceravolo
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology and Educational Sciences and Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
- Kinga Igloi
- Department of Neuroscience, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Geneva Neuroscience Center, University of Geneva, Geneva, Switzerland
- Blanca Marin Bosch
- Department of Neuroscience, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Geneva Neuroscience Center, University of Geneva, Geneva, Switzerland
- Sascha Frühholz
- Department of Psychology, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich and ETH Zürich, Zurich, Switzerland
- Center for Integrative Human Physiology, University of Zurich, Zurich, Switzerland
- Didier Grandjean
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology and Educational Sciences and Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland

33
Behavioral and neuroanatomical effects on exposure to White noise in rats. Neurosci Lett 2020; 728:134898. [DOI: 10.1016/j.neulet.2020.134898] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2019] [Revised: 03/04/2020] [Accepted: 03/06/2020] [Indexed: 12/13/2022]
34
Young AW, Frühholz S, Schweinberger SR. Face and Voice Perception: Understanding Commonalities and Differences. Trends Cogn Sci 2020; 24:398-410. [DOI: 10.1016/j.tics.2020.02.001] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2019] [Revised: 01/16/2020] [Accepted: 02/03/2020] [Indexed: 01/01/2023]
35
Belkhiria C, Vergara RC, San Martin S, Leiva A, Martinez M, Marcenaro B, Andrade M, Delano PH, Delgado C. Insula and Amygdala Atrophy Are Associated With Functional Impairment in Subjects With Presbycusis. Front Aging Neurosci 2020; 12:102. [PMID: 32410980 PMCID: PMC7198897 DOI: 10.3389/fnagi.2020.00102] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2019] [Accepted: 03/26/2020] [Indexed: 01/07/2023] Open
Abstract
Hearing loss is an important risk factor for dementia. However, the mechanisms that relate these disorders are still unknown. As a proxy of this relationship, we studied the structural brain changes associated with functional impairment in activities of daily living in subjects with age-related hearing loss, or presbycusis. One hundred eleven independent, non-demented subjects older than 65 years recruited in the ANDES cohort were evaluated using a combined approach including (i) audiological tests: hearing thresholds and cochlear function measured by pure tone averages and distortion product otoacoustic emissions, respectively; (ii) behavioral variables: cognitive, neuropsychiatric, and functional impairment in activities of daily living measured by validated questionnaires; and (iii) structural brain imaging, assessed by magnetic resonance imaging at 3 Tesla. The mean age of the recruited subjects (69 females) was 73.95 ± 5.47 years (mean ± SD), with an average educational level of 9.44 ± 4.2 years of schooling. According to the audiometric hearing thresholds and presence of otoacoustic emissions, we studied three groups: controls with normal hearing (n = 36), presbycusis with preserved cochlear function (n = 33), and presbycusis with cochlear dysfunction (n = 38). We found a significant association (R² = 0.17) between the number of detected otoacoustic emissions and apathy symptoms. The presbycusis with cochlear dysfunction group had worse performance than controls in global cognition, language, and executive functions, and more severe apathy symptoms than the other groups. The neuropsychiatric symptoms and language deficits were the main determinants of functional impairment in both groups of subjects with presbycusis. Atrophy of the insula, amygdala, and other temporal areas was related to functional impairment, apathy, and language deficits in the presbycusis with cochlear dysfunction group.
We conclude that (i) the neuropsychiatric symptoms had a major effect on functional loss in subjects with presbycusis, (ii) cochlear dysfunction is relevant for the association between hearing loss and behavioral impairment, and (iii) atrophy of the insula and amygdala among other temporal areas are related with hearing loss and behavioral impairment.
Affiliation(s)
- Chama Belkhiria
- Neuroscience Department, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Rodrigo C Vergara
- Neuroscience Department, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Kinesiology Department, Facultad de Artes y Educación Física, Universidad Metropolitana de Ciencias de la Educación, Santiago, Chile
- Simón San Martin
- Neuroscience Department, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Alexis Leiva
- Neuroscience Department, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Melissa Martinez
- Neurology and Neurosurgery Department, Hospital Clínico de la Universidad de Chile, Santiago, Chile
- Bruno Marcenaro
- Neuroscience Department, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Maricarmen Andrade
- Internal Medicine Department, Clínica Universidad de los Andes, Santiago, Chile
- Paul H Delano
- Neuroscience Department, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Otolaryngology Department, Hospital Clínico de la Universidad de Chile, Santiago, Chile
- Centro Avanzado de Ingeniería Eléctrica y Electrónica, AC3E, Universidad Técnica Federico Santa María, Valparaíso, Chile
- Biomedical Neuroscience Institute, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Carolina Delgado
- Neuroscience Department, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Neurology and Neurosurgery Department, Hospital Clínico de la Universidad de Chile, Santiago, Chile

36
Dricu M, Frühholz S. A neurocognitive model of perceptual decision-making on emotional signals. Hum Brain Mapp 2020; 41:1532-1556. [PMID: 31868310 PMCID: PMC7267943 DOI: 10.1002/hbm.24893] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2019] [Revised: 11/18/2019] [Accepted: 11/29/2019] [Indexed: 01/09/2023] Open
Abstract
Humans make various kinds of decisions about which emotions they perceive from others. Although it might seem like a split-second phenomenon, deliberating over which emotions we perceive unfolds across several stages of decisional processing. Neurocognitive models of general perception postulate that our brain first extracts sensory information about the world, then integrates these data into a percept, and lastly interprets it. The aim of the present study was to build an evidence-based neurocognitive model of perceptual decision-making on others' emotions. We conducted a series of meta-analyses of neuroimaging data spanning 30 years on the explicit evaluations of others' emotional expressions. We find that emotion perception is rather an umbrella term for various perception paradigms, each with distinct neural structures that underline task-related cognitive demands. Furthermore, the left amygdala was responsive across all classes of decisional paradigms, regardless of task-related demands. Based on these observations, we propose a neurocognitive model that outlines the information flow in the brain needed for a successful evaluation of and decisions on other individuals' emotions. HIGHLIGHTS:
- Emotion classification involves heterogeneous perception and decision-making tasks
- Decision-making processes on emotions are rarely covered by existing emotion theories
- We propose an evidence-based neurocognitive model of decision-making on emotions
- Bilateral brain processes support nonverbal decisions; left-hemisphere processes support verbal decisions
- The left amygdala is involved in any kind of decision on emotions
Affiliation(s)
- Mihai Dricu
- Department of Psychology, University of Bern, Bern, Switzerland
- Sascha Frühholz
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich (ZNZ), University of Zurich and ETH Zurich, Zurich, Switzerland
- Center for Integrative Human Physiology (ZIHP), University of Zurich, Zurich, Switzerland

37
Daniju Y, Bossong MG, Brandt K, Allen P. Do the effects of cannabis on the hippocampus and striatum increase risk for psychosis? Neurosci Biobehav Rev 2020; 112:324-335. [PMID: 32057817 DOI: 10.1016/j.neubiorev.2020.02.010] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2019] [Revised: 01/17/2020] [Accepted: 02/10/2020] [Indexed: 11/19/2022]
Abstract
Cannabis use is associated with increased risk of psychotic symptoms and in a small number of cases it can lead to psychoses. This review examines the neurobiological mechanisms that mediate the link between cannabis use and psychosis risk. We use an established preclinical model of psychosis, the methylazoxymethanol acetate (MAM) rodent model, as a framework to examine if psychosis risk in some cannabis users is mediated by the effects of cannabis on the hippocampus, and this region's role in the regulation of mesolimbic dopamine. We also examine how cannabis affects excitatory neurotransmission known to regulate hippocampal neural activity and output. Whilst there is clear evidence that cannabis/cannabinoids can affect hippocampal and medial temporal lobe function and structure, the evidence that cannabis/cannabinoids increase striatal dopamine function is less robust. There is limited evidence that cannabis use affects cortical and striatal glutamate levels, but there are currently too few studies to draw firm conclusions. Future work is needed to test the MAM model in relation to cannabis using multimodal neuroimaging approaches.
Affiliation(s)
- Y Daniju
- Department of Psychology, University of Roehampton, London, UK
- M G Bossong
- Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, the Netherlands
- K Brandt
- Department of Psychology, University of Roehampton, London, UK
- P Allen
- Department of Psychology, University of Roehampton, London, UK
- Department of Psychosis Studies, Institute of Psychiatry, Psychology and Neuroscience, King's College London, UK
- Icahn School of Medicine at Mount Sinai Hospital, New York, USA

38
Lin H, Müller-Bardorff M, Gathmann B, Brieke J, Mothes-Lasch M, Bruchmann M, Miltner WHR, Straube T. Stimulus arousal drives amygdalar responses to emotional expressions across sensory modalities. Sci Rep 2020; 10:1898. [PMID: 32024891 PMCID: PMC7002496 DOI: 10.1038/s41598-020-58839-1] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2019] [Accepted: 12/23/2019] [Indexed: 11/08/2022] Open
Abstract
The factors that drive amygdalar responses to emotionally significant stimuli are still a matter of debate - particularly the proneness of the amygdala to respond to negatively-valenced stimuli has been discussed controversially. Furthermore, it is uncertain whether the amygdala responds in a modality-general fashion or whether modality-specific idiosyncrasies exist. Therefore, the present functional magnetic resonance imaging (fMRI) study systematically investigated amygdalar responding to stimulus valence and arousal of emotional expressions across visual and auditory modalities. During scanning, participants performed a gender judgment task while prosodic and facial emotional expressions were presented. The stimuli varied in stimulus valence and arousal by including neutral, happy and angry expressions of high and low emotional intensity. Results demonstrate amygdalar activation as a function of stimulus arousal and accordingly associated emotional intensity regardless of stimulus valence. Furthermore, arousal-driven amygdalar responding did not depend on the visual and auditory modalities of emotional expressions. Thus, the current results are consistent with the notion that the amygdala codes general stimulus relevance across visual and auditory modalities irrespective of valence. In addition, whole brain analyses revealed that effects in visual and auditory areas were driven mainly by high intense emotional facial and vocal stimuli, respectively, suggesting modality-specific representations of emotional expressions in auditory and visual cortices.
Affiliation(s)
- Huiyan Lin
- Institute of Applied Psychology, School of Public Administration, Guangdong University of Finance, 510521, Guangzhou, China
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149, Muenster, Germany
- Miriam Müller-Bardorff
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149, Muenster, Germany
- Bettina Gathmann
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149, Muenster, Germany
- Jaqueline Brieke
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149, Muenster, Germany
- Martin Mothes-Lasch
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149, Muenster, Germany
- Maximilian Bruchmann
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149, Muenster, Germany
- Wolfgang H R Miltner
- Department of Clinical Psychology, Friedrich Schiller University of Jena, 07743, Jena, Germany
- Thomas Straube
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149, Muenster, Germany

39
Koch SBJ, Galli A, Volman I, Kaldewaij R, Toni I, Roelofs K. Neural Control of Emotional Actions in Response to Affective Vocalizations. J Cogn Neurosci 2020; 32:977-988. [PMID: 31933433 DOI: 10.1162/jocn_a_01523] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
Abstract
Social-emotional cues, such as affective vocalizations and emotional faces, automatically elicit emotional action tendencies. Adaptive social-emotional behavior depends on the ability to control these automatic action tendencies. It remains unknown whether neural control over automatic action tendencies is supramodal or relies on parallel modality-specific neural circuits. Here, we address this largely unexplored issue in humans. We consider neural circuits supporting emotional action control in response to affective vocalizations, using an approach-avoidance task known to reliably index control over emotional action tendencies elicited by emotional faces. We isolate supramodal neural contributions to emotional action control through a conjunction analysis of control-related neural activity evoked by auditory and visual affective stimuli, the latter from a previously published data set obtained in an independent sample. We show that the anterior pFC (aPFC) supports control of automatic action tendencies in a supramodal manner, that is, triggered by either emotional faces or affective vocalizations. When affective vocalizations are heard and emotional control is required, the aPFC supports control through negative functional connectivity with the posterior insula. When emotional faces are seen and emotional control is required, control relies on the same aPFC territory downregulating the amygdala. The findings provide evidence for a novel mechanism of emotional action control with a hybrid hierarchical architecture, relying on a supramodal node (aPFC) implementing an abstract goal by modulating modality-specific nodes (posterior insula, amygdala) involved in signaling motivational significance of either affective vocalizations or faces.
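The supramodal contributions here were isolated with a conjunction analysis of control-related activity across auditory and visual stimuli. One common implementation is the minimum-statistic conjunction: a voxel counts as supramodal only if it exceeds threshold in both maps. A minimal sketch on synthetic z-maps; the cluster layout, map size, and z = 3.1 threshold are assumptions for illustration:

```python
import numpy as np

def conjunction(map_a, map_b, threshold):
    """Minimum-statistic conjunction: a voxel survives only if it
    exceeds the threshold in BOTH statistical maps (logical AND)."""
    return np.minimum(map_a, map_b) > threshold

rng = np.random.default_rng(3)
z_auditory = rng.normal(size=1000)   # z-map for affective vocalizations
z_visual = rng.normal(size=1000)     # z-map for emotional faces
z_auditory[:50] += 5.0               # shared "supramodal" cluster,
z_visual[:50] += 5.0                 # active in both modalities

mask = conjunction(z_auditory, z_visual, threshold=3.1)
print(bool(mask.sum() >= 30))  # True: the shared cluster survives
```

Voxels responding in only one modality fail the conjunction, which is what separates a supramodal node like the aPFC from modality-specific nodes such as the posterior insula or amygdala.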
Affiliation(s)
- Saskia B J Koch
- Donders Institute for Brain, Cognition and Behavior, Radboud University; Behavioral Science Institute, Radboud University
- Alessandra Galli
- Donders Institute for Brain, Cognition and Behavior, Radboud University
- Inge Volman
- Wellcome Centre for Integrative Neuroimaging, Oxford, UK
- Reinoud Kaldewaij
- Donders Institute for Brain, Cognition and Behavior, Radboud University; Behavioral Science Institute, Radboud University
- Ivan Toni
- Donders Institute for Brain, Cognition and Behavior, Radboud University
- Karin Roelofs
- Donders Institute for Brain, Cognition and Behavior, Radboud University; Behavioral Science Institute, Radboud University
|
40
|
Kuo PC, Tseng YL, Zilles K, Suen S, Eickhoff SB, Lee JD, Cheng PE, Liou M. Brain dynamics and connectivity networks under natural auditory stimulation. Neuroimage 2019; 202:116042. [PMID: 31344485 DOI: 10.1016/j.neuroimage.2019.116042] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2019] [Revised: 07/17/2019] [Accepted: 07/20/2019] [Indexed: 02/03/2023] Open
Abstract
The analysis of functional magnetic resonance imaging (fMRI) data is challenging when subjects are under exposure to natural sensory stimulation. In this study, a two-stage approach was developed to enable the identification of connectivity networks involved in the processing of information in the brain under natural sensory stimulation. In the first stage, the degree of concordance between the results of inter-subject and intra-subject correlation analyses is assessed statistically. The microstructurally (i.e., cytoarchitectonically) defined brain areas are designated either as concordant, in which the results of both correlation analyses are in agreement, or as discordant, in which one analysis method shows a higher proportion of supra-threshold voxels than does the other. In the second stage, connectivity networks are identified using the time courses of supra-threshold voxels in brain areas contingent upon the classifications derived in the first stage. In an empirical study, fMRI data were collected from 40 young adults (19 males, average age 22.76 ± 3.25), who underwent auditory stimulation involving sound clips of human voices and animal vocalizations under two operational conditions (i.e., eyes-closed and eyes-open). The operational conditions were designed to assess confounding effects due to auditory instructions or visual perception. The proposed two-stage analysis demonstrated that stress modulation (affective) and language networks in the limbic and cortical structures were respectively engaged during sound stimulation, and presented considerable variability among subjects. The network involved in regulating visuomotor control was sensitive to the eyes-open instruction, and presented only small variations among subjects. A high degree of concordance was observed between the two analyses in the primary auditory cortex, which was highly sensitive to the pitch of sound clips. Our results indicate that brain areas can be identified as concordant or discordant based on the two correlation analyses, which may further facilitate the search for connectivity networks involved in the processing of information under natural sensory stimulation.
Affiliation(s)
- Po-Chih Kuo
- Institute of Statistical Science, Academia Sinica, Taipei, Taiwan
- Yi-Li Tseng
- Department of Electrical Engineering, Fu Jen Catholic University, New Taipei City, Taiwan
- Karl Zilles
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Summit Suen
- Institute of Statistical Science, Academia Sinica, Taipei, Taiwan
- Simon B Eickhoff
- Institute of Systems Neuroscience, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany; Institute of Neuroscience and Medicine (INM-7), Research Centre Jülich, Jülich, Germany
- Juin-Der Lee
- Graduate Institute of Business Administration, National Chengchi University, Taipei, Taiwan
- Philip E Cheng
- Institute of Statistical Science, Academia Sinica, Taipei, Taiwan
- Michelle Liou
- Institute of Statistical Science, Academia Sinica, Taipei, Taiwan
|
41
|
Grisendi T, Reynaud O, Clarke S, Da Costa S. Processing pathways for emotional vocalizations. Brain Struct Funct 2019; 224:2487-2504. [DOI: 10.1007/s00429-019-01912-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2019] [Accepted: 06/12/2019] [Indexed: 01/06/2023]
|
42
|
Emotional prosody Stroop effect in Hindi: An event related potential study. Prog Brain Res 2019. [PMID: 31196434 DOI: 10.1016/bs.pbr.2019.04.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register]
Abstract
Prosody processing is an important aspect of language comprehension. Previous research on emotional word-prosody conflict has shown that participants perform worse when emotional prosody and word meaning are incongruent. Event-related potential studies have shown a congruency effect in the N400 component. There has been no study of emotional processing in the Hindi language in the context of conflict between emotional word meaning and prosody. We used happy and angry words spoken with happy and angry prosody. Participants had to identify whether the word had a happy or angry meaning. The results showed a congruency effect, with worse performance in incongruent trials, indicating an emotional Stroop effect in Hindi. The ERP results showed that prosody information is detected very early, as seen in the N1 component. In addition, there was a congruency effect in the N400. The results show that prosody is processed very early and that an emotional meaning-prosody congruency effect is obtained in Hindi. Further studies are needed to investigate similarities and differences in cognitive control associated with language processing.
|
43
|
Tanaka S, Kirino E. Increased Functional Connectivity of the Angular Gyrus During Imagined Music Performance. Front Hum Neurosci 2019; 13:92. [PMID: 30936827 PMCID: PMC6431621 DOI: 10.3389/fnhum.2019.00092] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2018] [Accepted: 02/27/2019] [Indexed: 11/26/2022] Open
Abstract
The angular gyrus (AG) is a hub of several networks that are involved in various functions, including attention, self-processing, semantic information processing, emotion regulation, and mentalizing. Since these functions are required in music performance, it is likely that the AG plays a role in music performance. Considering that these functions emerge as network properties, this study analyzed the functional connectivity of the AG during the imagined music performance task and the resting condition. Our hypothesis was that the functional connectivity of the AG is modulated by imagined music performance. In the resting condition, the AG had connections with the medial prefrontal cortex (mPFC), posterior cingulate cortex (PCC), and precuneus as well as the superior and inferior frontal gyri and with the temporal cortex. Compared with the resting condition, imagined music performance increased the functional connectivity of the AG with the superior frontal gyrus (SFG), mPFC, precuneus, PCC, hippocampal/parahippocampal gyrus (H/PHG), and amygdala. The anterior cingulate cortex (ACC) and superior temporal gyrus (STG) were newly engaged or added to the AG network during the task. In contrast, the supplementary motor area (SMA), sensorimotor areas, and occipital regions, which were anti-correlated with the AG in the resting condition, were disengaged during the task. These results lead to the conclusion that the functional connectivity of the AG is modulated by imagined music performance, which suggests that the AG plays a role in imagined music performance.
Affiliation(s)
- Shoji Tanaka
- Department of Information and Communication Sciences, Sophia University, Tokyo, Japan
- Eiji Kirino
- Department of Psychiatry, School of Medicine, Juntendo University, Tokyo, Japan; Juntendo Shizuoka Hospital, Shizuoka, Japan
|
44
|
Hellbernd N, Sammler D. Neural bases of social communicative intentions in speech. Soc Cogn Affect Neurosci 2019; 13:604-615. [PMID: 29771359 PMCID: PMC6022564 DOI: 10.1093/scan/nsy034] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2017] [Accepted: 05/13/2018] [Indexed: 11/15/2022] Open
Abstract
Our ability to understand others’ communicative intentions in speech is key to successful social interaction. Indeed, misunderstanding an ‘excuse me’ as an apology when it was meant as criticism may have important consequences. Recent behavioural studies have provided evidence that prosody, that is, vocal tone, is an important indicator of speakers’ intentions. Using a novel audio-morphing paradigm, the present functional magnetic resonance imaging study examined the neurocognitive mechanisms that allow listeners to ‘read’ speakers’ intents from vocal prosodic patterns. Participants categorized prosodic expressions that gradually varied in their acoustics between criticism, doubt, and suggestion. Categorizing typical exemplars of the three intentions induced activations along the ventral auditory stream, complemented by the amygdala and the mentalizing system. These findings likely depict the stepwise conversion of external perceptual information into abstract prosodic categories and internal social semantic concepts, including the speaker’s mental state. Ambiguous tokens, in turn, involved cingulo-opercular areas known to assist decision-making in the case of conflicting cues. Auditory and decision-making processes were flexibly coupled with the amygdala, depending on prosodic typicality, indicating enhanced categorization efficiency of overtly relevant, meaningful prosodic signals. Altogether, the results point to a model in which auditory prosodic categorization and socio-inferential conceptualization cooperate to translate perceived vocal tone into a coherent representation of the speaker’s intent.
Affiliation(s)
- Nele Hellbernd
- Otto Hahn Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, D-04103 Leipzig, Germany
- Daniela Sammler
- Otto Hahn Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, D-04103 Leipzig, Germany
|
45
|
Abstract
Understanding how the brain translates a structured sequence of sounds, such as music, into a pleasant and rewarding experience is a fascinating question which may be crucial to a better understanding of the processing of abstract rewards in humans. Previous neuroimaging findings point to a challenging role of the dopaminergic system in music-evoked pleasure. However, there is a lack of direct evidence showing that dopamine function is causally related to the pleasure we experience from music. We addressed this problem through a double-blind, within-subject pharmacological design in which we directly manipulated dopaminergic synaptic availability while healthy participants (n = 27) were engaged in music listening. We orally administered to each participant a dopamine precursor (levodopa), a dopamine antagonist (risperidone), and a placebo (lactose) in three different sessions. We demonstrate that levodopa and risperidone led to opposite effects in measures of musical pleasure and motivation: while the dopamine precursor levodopa, compared with placebo, increased the hedonic experience and music-related motivational responses, risperidone led to a reduction of both. This study shows a causal role of dopamine in musical pleasure and indicates that dopaminergic transmission might play different or additive roles than the ones postulated in affective processing so far, particularly in abstract cognitive activities.
|
46
|
Lateralized Brainstem and Cervical Spinal Cord Responses to Aversive Sounds: A Spinal fMRI Study. Brain Sci 2018; 8:brainsci8090165. [PMID: 30200289 PMCID: PMC6162493 DOI: 10.3390/brainsci8090165] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2018] [Revised: 08/25/2018] [Accepted: 08/29/2018] [Indexed: 12/22/2022] Open
Abstract
Previous research has delineated the networks of brain structures involved in the perception of emotional auditory stimuli. These include the amygdala, insula, and auditory cortices, as well as frontal-lobe, basal ganglia, and cerebellar structures involved in the planning and execution of motoric behaviors. The aim of the current research was to examine whether emotional sounds also influence activity in the brainstem and cervical spinal cord. Seventeen undergraduate participants completed a spinal functional magnetic resonance imaging (fMRI) study consisting of two fMRI runs. One run consisted of three one-minute blocks of aversive sounds taken from the International Affective Digitized Sounds (IADS) stimulus set; these blocks were separated by 40-s rest periods. The other run consisted of emotionally neutral stimuli also drawn from the IADS. The results indicated a stark pattern of lateralization. Aversive sounds elicited greater activity than neutral sounds in the right midbrain and brainstem, and in right dorsal and ventral regions of the cervical spinal cord. Neutral stimuli, on the other hand, elicited less neural activity than aversive sounds overall; these responses were left-lateralized and were found in the medial midbrain and the dorsal sensory regions of the cervical spinal cord. Together, these results demonstrate that aversive auditory stimuli elicit increased sensorimotor responses in brainstem and cervical spinal cord structures.
|
47
|
Tan G, Xiao F, Chen S, Wang H, Chen D, Zhu L, Xu D, Zhou D, Liu L. Frequency-specific alterations in the amplitude and synchronization of resting-state spontaneous low-frequency oscillations in benign childhood epilepsy with centrotemporal spikes. Epilepsy Res 2018; 145:178-184. [PMID: 30048931 DOI: 10.1016/j.eplepsyres.2018.07.007] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2018] [Revised: 07/15/2018] [Accepted: 07/18/2018] [Indexed: 02/05/2023]
Abstract
OBJECTIVES Spontaneous low-frequency oscillations in different frequency bands have diverse physiological meanings. The amplitude of low-frequency fluctuation (ALFF) and functional connectivity (FC) in different frequency bands in Benign Childhood Epilepsy with Centrotemporal Spikes (BECTS) are unknown and worth exploring. METHOD Resting-state functional magnetic resonance imaging data were collected in 51 drug-naïve BECTS patients and 76 healthy controls. The ALFF was calculated for the typical (0.01-0.08 Hz), slow-5 (0.01-0.027 Hz), slow-4 (0.027-0.073 Hz), and slow-3 (0.073-0.198 Hz) frequency bands. The bilateral precuneus/posterior cingulate cortex (PCU/PCC) showed a common alteration of ALFF across frequency bands, and was selected as the seed for calculating FC per voxel. RESULTS In the typical band, BECTS patients showed increased ALFF in the left rolandic operculum and the right pre/postcentral gyrus, and decreased ALFF in the bilateral PCU/PCC, some of which were shared by the slow-5, slow-4, and slow-3 bands. Decreased ALFF in the left angular gyrus was also found in the slow-3 band. Only the bilateral PCU/PCC showed a frequency-dependent correlation with the total seizure frequency and full-scale intelligence quotient. Regions showing decreased FC with the bilateral PCU/PCC in BECTS patients were mainly in the left prefrontal cortex and bilateral anterior cingulate cortex for the typical and slow-5 bands, and in the bilateral temporal limbic system and striatum for the slow-4 and slow-3 bands. CONCLUSION Alterations of ALFF and FC differed across frequency bands. Therefore, employing different frequency bands may provide more meaningful findings in BECTS patients.
Affiliation(s)
- Ge Tan
- Department of Neurology, West China Hospital, Sichuan University, No. 37, Guo Xue Xiang, Chengdu, 610041, Sichuan Province, China
- Fenglai Xiao
- Department of Neurology, West China Hospital, Sichuan University, No. 37, Guo Xue Xiang, Chengdu, 610041, Sichuan Province, China
- Sihan Chen
- Department of Neurology, West China Hospital, Sichuan University, No. 37, Guo Xue Xiang, Chengdu, 610041, Sichuan Province, China
- Haijiao Wang
- Department of Neurology, West China Hospital, Sichuan University, No. 37, Guo Xue Xiang, Chengdu, 610041, Sichuan Province, China
- Deng Chen
- Department of Neurology, West China Hospital, Sichuan University, No. 37, Guo Xue Xiang, Chengdu, 610041, Sichuan Province, China
- Lina Zhu
- Department of Neurology, West China Hospital, Sichuan University, No. 37, Guo Xue Xiang, Chengdu, 610041, Sichuan Province, China
- Da Xu
- Department of Neurology, West China Hospital, Sichuan University, No. 37, Guo Xue Xiang, Chengdu, 610041, Sichuan Province, China
- Dong Zhou
- Department of Neurology, West China Hospital, Sichuan University, No. 37, Guo Xue Xiang, Chengdu, 610041, Sichuan Province, China
- Ling Liu
- Department of Neurology, West China Hospital, Sichuan University, No. 37, Guo Xue Xiang, Chengdu, 610041, Sichuan Province, China
|
48
|
Sachs ME, Habibi A, Damasio A, Kaplan JT. Decoding the neural signatures of emotions expressed through sound. Neuroimage 2018; 174:1-10. [DOI: 10.1016/j.neuroimage.2018.02.058] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2017] [Revised: 02/23/2018] [Accepted: 02/27/2018] [Indexed: 12/15/2022] Open
|
49
|
Aryani A, Hsu CT, Jacobs AM. The Sound of Words Evokes Affective Brain Responses. Brain Sci 2018; 8:brainsci8060094. [PMID: 29789504 PMCID: PMC6025608 DOI: 10.3390/brainsci8060094] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2018] [Revised: 05/17/2018] [Accepted: 05/21/2018] [Indexed: 12/19/2022] Open
Abstract
The long history of poetry and the arts, as well as recent empirical results, suggests that the way a word sounds (e.g., soft vs. harsh) can convey affective information related to emotional responses (e.g., pleasantness vs. harshness). However, the neural correlates of the affective potential of the sound of words remain unknown. In an fMRI study involving passive listening, we focused on the affective dimension of arousal and presented words organized in two discrete groups of sublexical (i.e., sound) arousal (high vs. low), while controlling for lexical (i.e., semantic) arousal. Words with a high-arousing sound, compared to their low-arousing counterparts, resulted in an enhanced BOLD signal in the bilateral posterior insula, the right auditory and premotor cortex, and the right supramarginal gyrus. This finding provides the first evidence of the neural correlates of affectivity in the sound of words. Given the similarity of this neural network to that of nonverbal emotional expressions and affective prosody, our results support a unifying view that suggests a core neural network underlying any type of affective sound processing.
Affiliation(s)
- Arash Aryani
- Department of Experimental and Neurocognitive Psychology, Freie Universität Berlin, Habelschwerdter Allee 45, D-14195 Berlin, Germany
- Chun-Ting Hsu
- Department of Psychology, Pennsylvania State University, PA 16802, USA
- Arthur M Jacobs
- Department of Experimental and Neurocognitive Psychology, Freie Universität Berlin, Habelschwerdter Allee 45, D-14195 Berlin, Germany
- Centre for Cognitive Neuroscience Berlin (CCNB), Freie Universität Berlin, Habelschwerdter Allee 45, D-14195 Berlin, Germany
|
50
|
Sachs M, Habibi A, Damasio H. Reflections on music, affect, and sociality. Prog Brain Res 2018; 237:153-172. [PMID: 29779733 DOI: 10.1016/bs.pbr.2018.03.009] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/29/2023]
Abstract
Music is an important facet of and practice in human cultures, a status significantly related to its capacity to induce a range of intense and complex emotions. Studying the psychological and neurophysiological responses to music allows us to examine and uncover the neural mechanisms underlying the emotional impact of music. We provide an overview of different aspects of current research on how music listening produces emotions and the corresponding feelings, and consider the underlying neurophysiological mechanisms. We conclude with evidence suggesting that musical training may influence the ability to recognize the emotions of others.
Affiliation(s)
- Matthew Sachs
- Brain and Creativity Institute, University of Southern California, Los Angeles, CA, United States
- Assal Habibi
- Brain and Creativity Institute, University of Southern California, Los Angeles, CA, United States
- Hanna Damasio
- Brain and Creativity Institute, University of Southern California, Los Angeles, CA, United States
|