1. Kreiner H, Eviatar Z. The sound of thought: form matters - the prosody of inner speech. Phys Life Rev 2024;51:231-242. PMID: 39442498. DOI: 10.1016/j.plrev.2024.10.006.
Abstract
This paper offers a new perspective on inner speech based on the theoretical framework of embodiment, focusing on the embodiment of structure rather than content. We argue that inner speech is used to simulate the acoustic aspects of overt speech, including prosody. Prosody refers to the rhythm, intonation, and stress of spoken language, which is closely related to structural aspects of phrases, sentences, and larger language contexts such as discourse and narrative. We propose that inner speech gives form and structure to thought, and that this form is a necessary component of mental life. Thus, our paper opens with a review of the varieties of inner speech, followed by evidence concerning the form of inner speech, and finally, we discuss the functionality of inner speech. We consider cognitive and socio-emotional functions in which inner speech is involved and posit that inner speech serves as a simulation that maintains form, and that this form serves different aspects of thought: attention, memory, emotion and self-regulation, social conceptualization, and narrative of self. We conclude by addressing future research questions: how inner speech contributes to making mental processes accessible to conscious thought, and whether accessibility to consciousness is related to form and structure.
Affiliation(s)
- Zohar Eviatar
- Institute of Information Processing and Decision Making, University of Haifa, Israel; Psychology Department, University of Haifa, Israel
2. Panico F, Luciano SM, Salzillo A, Sagliano L, Trojano L. Investigating cerebello-frontal circuits associated with emotional prosody: a double-blind tDCS and fNIRS study. Cerebellum 2024. PMID: 39276299. DOI: 10.1007/s12311-024-01741-7.
Abstract
The emotional and cognitive cerebellum has been explored by several studies in recent years. Recent evidence suggested a possible contribution of the cerebellum to processing emotional prosody, namely the ability to comprehend the emotional content of a given vocal utterance, likely mediated by anatomical and functional cerebello-prefrontal connections. In the present study, the involvement of a functional cerebello-prefrontal network in recognising emotional prosody was assessed by combining non-invasive anodal transcranial direct current stimulation (tDCS) over the right or the left cerebellum with functional near-infrared spectroscopy of the prefrontal cortex, in a double-blind within-subject experimental design on healthy participants. The results showed that right and, to a lesser extent, left cerebellar tDCS (as compared to sham stimulation) reduced neural activation in the prefrontal cortex, while accuracy and reaction times on the vocal recognition task remained unchanged. These findings highlight functional properties of cerebello-frontal connections and the psychophysiological effects of cerebellar brain stimulation, with possible clinical applications in psychiatric and neurological conditions.
Affiliation(s)
- Francesco Panico
- University of Campania "Luigi Vanvitelli", Viale Ellittico 31, 81100, Caserta, Italy
- Sharon Mara Luciano
- University of Campania "Luigi Vanvitelli", Viale Ellittico 31, 81100, Caserta, Italy
- Alessia Salzillo
- University of Campania "Luigi Vanvitelli", Viale Ellittico 31, 81100, Caserta, Italy
- Laura Sagliano
- University of Campania "Luigi Vanvitelli", Viale Ellittico 31, 81100, Caserta, Italy
- Luigi Trojano
- University of Campania "Luigi Vanvitelli", Viale Ellittico 31, 81100, Caserta, Italy
3. Burunat I, Levitin DJ, Toiviainen P. Breaking (musical) boundaries by investigating brain dynamics of event segmentation during real-life music-listening. Proc Natl Acad Sci U S A 2024;121:e2319459121. PMID: 39186645. PMCID: PMC11388323. DOI: 10.1073/pnas.2319459121.
Abstract
The perception of musical phrase boundaries is a critical aspect of human musical experience: it allows us to organize, understand, derive pleasure from, and remember music. Identifying boundaries is a prerequisite for segmenting music into meaningful chunks, facilitating efficient processing and storage while providing an enjoyable, fulfilling listening experience through the anticipation of upcoming musical events. Expanding on Sridharan et al.'s [Neuron 55, 521-532 (2007)] work on coarse musical boundaries between symphonic movements, we examined finer-grained boundaries. We measured the fMRI responses of 18 musicians and 18 nonmusicians during music listening. Using general linear modeling, independent component analysis, and Granger causality, we observed heightened auditory integration in anticipation of musical boundaries, and an extensive decrease within the fronto-temporal-parietal network during and immediately following boundaries. Notably, responses were modulated by musicianship. These findings uncover the intricate interplay between musical structure, expertise, and cognitive processing, advancing our knowledge of how the brain makes sense of music.
Affiliation(s)
- Iballa Burunat
- Centre of Excellence in Music, Mind, Body and Brain, Department of Music, Arts and Culture Studies, University of Jyväskylä, Jyväskylä 40014, Finland
- Daniel J Levitin
- School of Social Sciences, Minerva University, San Francisco, CA 94103; Department of Psychology, McGill University, Montreal, QC H3A 1G1, Canada
- Petri Toiviainen
- Centre of Excellence in Music, Mind, Body and Brain, Department of Music, Arts and Culture Studies, University of Jyväskylä, Jyväskylä 40014, Finland
4. Sinvani RT, Fogel-Grinvald H, Sapir S. Self-rated confidence in vocal emotion recognition ability: the role of gender. J Speech Lang Hear Res 2024;67:1413-1423. PMID: 38625128. DOI: 10.1044/2024_jslhr-23-00373.
Abstract
PURPOSE We studied the role of gender in metacognition of voice emotion recognition ability (ERA), reflected by self-rated confidence (SRC). To this end, we took two approaches: first, examining the role of gender in voice ERA and SRC independently, and second, looking for gender effects on the association between ERA and SRC. METHOD We asked 100 participants (50 men, 50 women) to interpret a set of vocal expressions portrayed by 30 actors (16 men, 14 women) as defined by their emotional meaning. Targets were 180 repetitive lexical sentences articulated in congruent emotional voices (anger, sadness, surprise, happiness, fear) and neutral expressions. Trial by trial, participants provided retrospective SRC ratings of their emotion recognition performance. RESULTS A binomial generalized linear mixed model (GLMM) estimating ERA accuracy revealed a significant gender effect, with women encoders (speakers) yielding higher accuracy levels than men. There was no significant effect of the decoder's (listener's) gender. A second GLMM estimating SRC found a significant effect of encoder and decoder genders, with women outperforming men. Gamma correlations between SRC and accuracy were significantly greater than zero for both women and men decoders. CONCLUSIONS Although gender played a different role in each independent measure (ERA and SRC), our results suggest that both men and women decoders were accurate in their metacognition regarding voice emotion recognition. Further research is needed to study how individuals of both genders use metacognitive knowledge in their emotional recognition and whether and how such knowledge contributes to effective social communication.
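The gamma correlations reported here are conventionally Goodman-Kruskal gamma coefficients relating per-trial confidence to recognition accuracy. A minimal sketch with hypothetical trial data (the data and function name are illustrative, not taken from the study):

```python
from itertools import combinations

def goodman_kruskal_gamma(x, y):
    """Goodman-Kruskal gamma: (C - D) / (C + D) over concordant (C)
    and discordant (D) pairs; pairs tied on either variable are ignored."""
    concordant = discordant = 0
    for (x1, y1), (x2, y2) in combinations(zip(x, y), 2):
        prod = (x1 - x2) * (y1 - y2)
        if prod > 0:
            concordant += 1
        elif prod < 0:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical data: per-trial confidence ratings (1-5) and accuracy (0/1)
confidence = [5, 4, 2, 5, 1, 3, 4, 2]
accuracy = [1, 1, 0, 1, 0, 1, 1, 0]
print(goodman_kruskal_gamma(confidence, accuracy))
```

Gamma ranges from -1 to 1; values significantly above zero indicate that higher confidence tends to accompany correct responses, i.e., accurate metacognition.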
Affiliation(s)
- Shimon Sapir
- Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Sciences, University of Haifa, Israel
5. Rizzo G, Martino D, Avanzino L, Avenanti A, Vicario CM. Social cognition in hyperkinetic movement disorders: a systematic review. Soc Neurosci 2023;18:331-354. PMID: 37580305. DOI: 10.1080/17470919.2023.2248687.
Abstract
Numerous lines of research indicate that our social brain involves a network of cortical and subcortical brain regions that are responsible for sensing and controlling body movements. However, it remains unclear whether movement disorders have a systematic impact on social cognition. To address this question, we conducted a systematic review examining the influence of hyperkinetic movement disorders (including Huntington's disease, Tourette syndrome, dystonia, and essential tremor) on social cognition. Following the PRISMA guidelines and registering the protocol in the PROSPERO database (CRD42022327459), we analyzed 50 published studies focusing on theory of mind (ToM), social perception, and empathy. The results from these studies provide evidence of impairments in ToM and social perception in all hyperkinetic movement disorders, particularly during the recognition of negative emotions. Additionally, individuals with Huntington's disease and Tourette syndrome exhibit impairments in empathy. These findings support the functional role of subcortical structures (such as the basal ganglia and cerebellum), which are primarily responsible for movement disorders, in deficits related to social cognition.
Affiliation(s)
- Gaetano Rizzo
- Dipartimento di Scienze Cognitive, Psicologiche, Pedagogiche e degli studi culturali, Università di Messina, Messina, Italy
- Davide Martino
- Department of Clinical Neurosciences, Hotchkiss Brain Institute, Alberta Children's Hospital Research Institute, University of Calgary, Calgary, Alberta, Canada
- Laura Avanzino
- Department of Experimental Medicine, Section of Human Physiology, University of Genoa, Genoa, Italy
- Alessio Avenanti
- Centro studi e ricerche in Neuroscienze Cognitive, Dipartimento di Psicologia "Renzo Canestrari", Campus di Cesena, Alma Mater Studiorum Università di Bologna, Cesena, Italy; Centro de Investigación en Neuropsicología y Neurociencias Cognitivas, Universidad Católica del Maule, Talca, Chile
- Carmelo Mario Vicario
- Dipartimento di Scienze Cognitive, Psicologiche, Pedagogiche e degli studi culturali, Università di Messina, Messina, Italy
6. Hazelton JL, Devenney E, Ahmed R, Burrell J, Hwang Y, Piguet O, Kumfor F. Hemispheric contributions toward interoception and emotion recognition in left- vs right-semantic dementia. Neuropsychologia 2023;188:108628. PMID: 37348648. DOI: 10.1016/j.neuropsychologia.2023.108628.
Abstract
BACKGROUND The hemispheric contributions toward interoception, the perception of internal bodily cues, and emotion recognition remain unclear. Semantic dementia cases with either left-dominant (i.e., left-SD) or right-dominant (i.e., right-SD) anterior temporal lobe atrophy experience emotion recognition difficulties; however, little is known about interoception in these syndromes. Here, we hypothesised that right-SD would show worse interoception and emotion recognition due to right-dominant atrophy. METHODS Thirty-five participants (8 left-SD; 6 right-SD; 21 controls) completed a monitoring task. Participants pressed a button when they: (1) felt their heartbeat, without pulse measurement (Interoception); or (2) heard a recorded heartbeat (Exteroception-control). Simultaneous ECG was recorded. Accuracy was calculated by comparing the event frequency (i.e., heartbeat or sound) to the response frequency. Emotion recognition was assessed via the Facial Affect Selection Task. Voxel-based morphometry analyses identified neural correlates of interoception and emotion recognition. RESULTS Right-SD showed worse interoception than controls and left-SD (both p's < 0.001). Both patient groups showed worse emotion recognition than controls (right-SD: p < .001; left-SD: p = .018), and right-SD showed worse emotion recognition than left-SD (p = .003). Regression analyses revealed that worse emotion recognition was predicted by right-SD (p = .002), left-SD (p = .005), and impaired interoception (p = .004). Interoception and emotion were associated with the integrity of right-lateralised structures including the insula, temporal pole, thalamus, superior temporal gyrus, and hippocampus. CONCLUSION Our study provides the first evidence of impaired interoception in right-SD, suggesting that impaired emotion recognition in this syndrome is driven by inaccurate internal monitoring. Further, we identified a common neurobiological basis for interoception and emotion in the right hemisphere.
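The abstract does not give the exact scoring formula; a common way to quantify agreement between event counts and response counts in heartbeat-monitoring tasks is a Schandry-style accuracy score, sketched here under that assumption (the formula and function name are illustrative, not the authors' own):

```python
def monitoring_accuracy(n_events: int, n_responses: int) -> float:
    """Score agreement between event count (heartbeats or tones) and
    response count: 1.0 means perfect agreement, lower values mean
    over- or under-reporting. Assumed formula, not taken from the study."""
    return 1.0 - abs(n_events - n_responses) / n_events

# Hypothetical trial: 60 heartbeats occurred, participant pressed 45 times
print(monitoring_accuracy(60, 45))  # → 0.75
```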
Affiliation(s)
- Jessica L Hazelton
- The University of Sydney, School of Psychology, Sydney, NSW, Australia; The University of Sydney, Brain and Mind Centre, Sydney, NSW, Australia
- Emma Devenney
- The University of Sydney, Brain and Mind Centre, Sydney, NSW, Australia; The University of Sydney, Faculty of Medicine and Health Translational Research Collective, Sydney, NSW, Australia
- Rebekah Ahmed
- The University of Sydney, Brain and Mind Centre, Sydney, NSW, Australia; Memory and Cognition Clinic, Department of Clinical Neurosciences, Royal Prince Alfred Hospital, Sydney, NSW, Australia
- James Burrell
- The University of Sydney, Brain and Mind Centre, Sydney, NSW, Australia; The University of Sydney, Concord Clinical School, Sydney, NSW, Australia
- Yun Hwang
- The University of Sydney, Brain and Mind Centre, Sydney, NSW, Australia; Gosford General Hospital, Gosford, NSW, Australia
- Olivier Piguet
- The University of Sydney, School of Psychology, Sydney, NSW, Australia; The University of Sydney, Brain and Mind Centre, Sydney, NSW, Australia
- Fiona Kumfor
- The University of Sydney, School of Psychology, Sydney, NSW, Australia; The University of Sydney, Brain and Mind Centre, Sydney, NSW, Australia
7. Landsiedel J, Koldewyn K. Auditory dyadic interactions through the "eye" of the social brain: how visual is the posterior STS interaction region? Imaging Neurosci (Camb) 2023;1:1-20. PMID: 37719835. PMCID: PMC10503480. DOI: 10.1162/imag_a_00003.
Abstract
Human interactions contain potent social cues that meet not only the eye but also the ear. Although research has identified a region in the posterior superior temporal sulcus as being particularly sensitive to visually presented social interactions (SI-pSTS), its response to auditory interactions has not been tested. Here, we used fMRI to explore brain response to auditory interactions, with a focus on temporal regions known to be important in auditory processing and social interaction perception. In Experiment 1, monolingual participants listened to two-speaker conversations (intact or sentence-scrambled) and one-speaker narrations in both a known and an unknown language. Speaker number and conversational coherence were explored in separately localised regions-of-interest (ROI). In Experiment 2, bilingual participants were scanned to explore the role of language comprehension. Combining univariate and multivariate analyses, we found initial evidence for a heteromodal response to social interactions in SI-pSTS. Specifically, right SI-pSTS preferred auditory interactions over control stimuli and represented information about both speaker number and interactive coherence. Bilateral temporal voice areas (TVA) showed a similar, but less specific, profile. Exploratory analyses identified another auditory-interaction sensitive area in anterior STS. Indeed, direct comparison suggests modality specific tuning, with SI-pSTS preferring visual information while aSTS prefers auditory information. Altogether, these results suggest that right SI-pSTS is a heteromodal region that represents information about social interactions in both visual and auditory domains. Future work is needed to clarify the roles of TVA and aSTS in auditory interaction perception and further probe right SI-pSTS interaction-selectivity using non-semantic prosodic cues.
Affiliation(s)
- Julia Landsiedel
- Department of Psychology, School of Human and Behavioural Sciences, Bangor University, Bangor, United Kingdom
- Kami Koldewyn
- Department of Psychology, School of Human and Behavioural Sciences, Bangor University, Bangor, United Kingdom
8. Viacheslav I, Vartanov A, Bueva A, Bronov O. The emotional component of inner speech: a pilot exploratory fMRI study. Brain Cogn 2023;165:105939. PMID: 36549191. DOI: 10.1016/j.bandc.2022.105939.
Abstract
Inner speech is one of the most important human cognitive processes. Nevertheless, many aspects of inner speech, particularly its emotional characteristics, remain poorly understood. The main objectives of our study are to identify the neural substrate of the emotional (prosodic) dimension of inner speech and the brain structures that control the suppression of expression in inner speech. To achieve these goals, a pilot exploratory fMRI study was carried out with 33 participants. The subjects listened to pre-recorded phrases or individual words pronounced with different emotional connotations, and then repeated them in inner speech with the same emotion or with suppressed expression (neutral). The results show that there is an emotional component in inner speech, encoded by structures similar to those involved in overt speech. The results also revealed a unique role of the caudate nuclei in suppressing expression in inner speech.
Affiliation(s)
- Oleg Bronov
- Federal State Budgetary Institution "National Medical and Surgical Center named after N.I. Pirogov", Russia
9. Leipold S, Abrams DA, Karraker S, Menon V. Neural decoding of emotional prosody in voice-sensitive auditory cortex predicts social communication abilities in children. Cereb Cortex 2023;33:709-728. PMID: 35296892. PMCID: PMC9890475. DOI: 10.1093/cercor/bhac095.
Abstract
During social interactions, speakers signal information about their emotional state through their voice, which is known as emotional prosody. Little is known regarding the precise brain systems underlying emotional prosody decoding in children and whether accurate neural decoding of these vocal cues is linked to social skills. Here, we address critical gaps in the developmental literature by investigating neural representations of prosody and their links to behavior in children. Multivariate pattern analysis revealed that representations in the bilateral middle and posterior superior temporal sulcus (STS) divisions of voice-sensitive auditory cortex decode emotional prosody information in children. Crucially, emotional prosody decoding in middle STS was correlated with standardized measures of social communication abilities; more accurate decoding of prosody stimuli in the STS was predictive of greater social communication abilities in children. Moreover, social communication abilities were specifically related to decoding sadness, highlighting the importance of tuning in to negative emotional vocal cues for strengthening social responsiveness and functioning. Findings bridge an important theoretical gap by showing that the ability of the voice-sensitive cortex to detect emotional cues in speech is predictive of a child's social skills, including the ability to relate and interact with others.
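Multivariate pattern analysis of this kind amounts to cross-validated classification of response patterns; a simplified sketch with simulated data (the simulation parameters and classifier choice are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(42)

# Simulated voxel patterns: 60 trials x 50 voxels, two prosody categories
n_trials, n_voxels = 60, 50
labels = np.repeat([0, 1], n_trials // 2)
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :10] += 1.0  # category signal in a subset of voxels

# Cross-validated decoding accuracy: above-chance accuracy indicates
# that the region's activity patterns carry category information
scores = cross_val_score(LinearSVC(), patterns, labels, cv=5)
print(scores.mean())
```

In the study's logic, such per-participant decoding accuracies would then be correlated with behavioural scores (e.g., standardized social communication measures).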
Affiliation(s)
- Simon Leipold
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Daniel A Abrams
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Shelby Karraker
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Vinod Menon
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA; Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA, USA; Stanford Neurosciences Institute, Stanford University, Stanford, CA, USA
10. Billig AJ, Lad M, Sedley W, Griffiths TD. The hearing hippocampus. Prog Neurobiol 2022;218:102326. PMID: 35870677. PMCID: PMC10510040. DOI: 10.1016/j.pneurobio.2022.102326.
Abstract
The hippocampus has a well-established role in spatial and episodic memory but a broader function has been proposed including aspects of perception and relational processing. Neural bases of sound analysis have been described in the pathway to auditory cortex, but wider networks supporting auditory cognition are still being established. We review what is known about the role of the hippocampus in processing auditory information, and how the hippocampus itself is shaped by sound. In examining imaging, recording, and lesion studies in species from rodents to humans, we uncover a hierarchy of hippocampal responses to sound including during passive exposure, active listening, and the learning of associations between sounds and other stimuli. We describe how the hippocampus' connectivity and computational architecture allow it to track and manipulate auditory information - whether in the form of speech, music, or environmental, emotional, or phantom sounds. Functional and structural correlates of auditory experience are also identified. The extent of auditory-hippocampal interactions is consistent with the view that the hippocampus makes broad contributions to perception and cognition, beyond spatial and episodic memory. More deeply understanding these interactions may unlock applications including entraining hippocampal rhythms to support cognition, and intervening in links between hearing loss and dementia.
Affiliation(s)
- Meher Lad
- Translational and Clinical Research Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- William Sedley
- Translational and Clinical Research Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- Timothy D Griffiths
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK; Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, UK; Human Brain Research Laboratory, Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, USA
11. Lenschow C, Mendes ARP, Lima SQ. Hearing, touching, and multisensory integration during mate choice. Front Neural Circuits 2022;16:943888. PMID: 36247731. PMCID: PMC9559228. DOI: 10.3389/fncir.2022.943888.
Abstract
Mate choice is a potent generator of diversity and a fundamental pillar for sexual selection and evolution. Mate choice is a multistage affair, where complex sensory information and elaborate actions are used to identify, scrutinize, and evaluate potential mating partners. While widely accepted that communication during mate assessment relies on multimodal cues, most studies investigating the mechanisms controlling this fundamental behavior have restricted their focus to the dominant sensory modality used by the species under examination, such as vision in humans and smell in rodents. However, despite their undeniable importance for the initial recognition, attraction, and approach towards a potential mate, other modalities gain relevance as the interaction progresses, amongst which are touch and audition. In this review, we will: (1) focus on recent findings of how touch and audition can contribute to the evaluation and choice of mating partners, and (2) outline our current knowledge regarding the neuronal circuits processing touch and audition (amongst others) in the context of mate choice and ask (3) how these neural circuits are connected to areas that have been studied in the light of multisensory integration.
Affiliation(s)
- Constanze Lenschow
- Champalimaud Foundation, Champalimaud Research, Neuroscience Program, Lisbon, Portugal
- Ana Rita P Mendes
- Champalimaud Foundation, Champalimaud Research, Neuroscience Program, Lisbon, Portugal
- Susana Q Lima
- Champalimaud Foundation, Champalimaud Research, Neuroscience Program, Lisbon, Portugal
12. Kim E, Seo HG, Seong MY, Kang MG, Kim H, Lee MY, Yoo RE, Hwang I, Choi SH, Oh BM. An exploratory study on functional connectivity after mild traumatic brain injury: preserved global but altered local organization. Brain Behav 2022;12:e2735. PMID: 35993893. PMCID: PMC9480924. DOI: 10.1002/brb3.2735.
Abstract
INTRODUCTION This study aimed to investigate alterations in whole-brain functional connectivity after a concussion using graph-theory analysis from global and local perspectives, and to explore the association between changes in functional network properties and cognitive performance. METHODS Individuals with mild traumatic brain injury (mTBI, n = 29) within a month after injury, and age- and sex-matched healthy controls (n = 29) were included. Graph-theory measures were computed on functional connectivity estimated from resting-state functional magnetic resonance imaging data for each participant. These included betweenness centrality, strength, clustering coefficient, local efficiency, and global efficiency. Multi-domain cognitive functions were correlated with the graph-theory measures. RESULTS In comparison to the controls, the mTBI group showed preserved network characteristics at the global level. However, in the local network, we observed decreased betweenness centrality, clustering coefficient, and local efficiency in several brain areas, including the fronto-parietal attention network. Network strength at the local level showed mixed results in different areas. The betweenness centrality of the right parahippocampus showed a significant positive correlation with cognitive scores on the verbal learning test only in the mTBI group. CONCLUSION Intrinsic functional connectivity after mTBI is preserved globally but is suboptimally organized locally in several areas, possibly reflecting the neurophysiological sequelae of a concussion. The present results may imply that network properties could be used as potential indicators of clinical outcomes after mTBI.
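The graph-theory measures listed above can be computed with networkx once a connectivity matrix has been thresholded into a graph; a minimal sketch on simulated data (the matrix, threshold, and variable names are illustrative assumptions, not the authors' pipeline):

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

# Simulated functional connectivity: symmetric matrix of absolute correlations
corr = np.abs(rng.uniform(-1, 1, size=(10, 10)))
corr = (corr + corr.T) / 2
np.fill_diagonal(corr, 0.0)

# Keep only supra-threshold connections, as is common in graph-theoretic fMRI analyses
adj = np.where(corr > 0.5, corr, 0.0)
G = nx.from_numpy_array(adj)

# Local (per-node) measures
betweenness = nx.betweenness_centrality(G)  # topological betweenness
clustering = nx.clustering(G)               # clustering coefficient
strength = dict(G.degree(weight="weight"))  # "strength" = weighted degree

# Global measures (the networkx efficiency functions treat the graph as unweighted)
global_eff = nx.global_efficiency(G)
local_eff = nx.local_efficiency(G)
```

Group differences are then tested on such node-wise and whole-graph values; note that weighted variants of betweenness would first require converting connection strengths into distances.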
Affiliation(s)
- Eunkyung Kim
- Department of Rehabilitation Medicine, Seoul National University Hospital, Seoul, Korea; Biomedical Research Institute, Seoul National University Hospital, Seoul, Korea
- Han Gil Seo
- Department of Rehabilitation Medicine, Seoul National University Hospital, Seoul, Korea; Department of Rehabilitation Medicine, Seoul National University College of Medicine, Seoul, Korea
- Min Yong Seong
- Department of Rehabilitation Medicine, Seoul National University Hospital, Seoul, Korea
- Min-Gu Kang
- Department of Rehabilitation Medicine, Seoul National University Hospital, Seoul, Korea
- Heejae Kim
- Department of Rehabilitation Medicine, Seoul National University Hospital, Seoul, Korea
- Min Yong Lee
- Department of Rehabilitation Medicine, Seoul National University Hospital, Seoul, Korea
- Roh-Eul Yoo
- Department of Radiology, Seoul National University College of Medicine and Seoul National University Hospital, Seoul, Korea
- Inpyeong Hwang
- Department of Radiology, Seoul National University College of Medicine and Seoul National University Hospital, Seoul, Korea
- Seung Hong Choi
- Department of Radiology, Seoul National University College of Medicine and Seoul National University Hospital, Seoul, Korea
- Byung-Mo Oh
- Department of Rehabilitation Medicine, Seoul National University Hospital, Seoul, Korea; Department of Rehabilitation Medicine, Seoul National University College of Medicine, Seoul, Korea; National Traffic Injury Rehabilitation Hospital, Yangpyeong, Korea
13. Zivan M, Gashri C, Habuba N, Horowitz-Kraus T. Reduced mother-child brain-to-brain synchrony during joint storytelling interaction interrupted by a media usage. Child Neuropsychol 2022;28:918-937. PMID: 35129078. DOI: 10.1080/09297049.2022.2034774.
Abstract
Parent-child synchrony is related to the quality of parent and child interactions and to child development. One emotionally and cognitively beneficial interaction in early childhood is Dialogic Reading (DR). Screen exposure has previously been related to decreased parent-child interaction. Using a hyperscanning electroencephalography (EEG) method, the current study examined the neurobiological correlates of mother-child DR vs. mobile phone-interrupted DR in twenty-four white toddlers (24-42 months old, 8 girls) and their mothers. The DR-interrupted condition was related to decreased mother-child neural synchrony between the mother's language-related brain regions (left hemisphere) and the child's comprehension-related regions (right hemisphere) compared to the uninterrupted DR. This is the first neural evidence of the negative effect of parental smartphone use on parent-child interaction quality.
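EEG hyperscanning studies often quantify inter-brain synchrony as spectral coherence between channel pairs; a simplified sketch on simulated signals (the sampling rate, frequency band, noise level, and variable names are illustrative assumptions, not this study's pipeline):

```python
import numpy as np
from scipy.signal import coherence

fs = 256  # sampling rate in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)

# Two simulated EEG channels sharing a 6 Hz (theta-band) component plus noise
shared = np.sin(2 * np.pi * 6 * t)
mother = shared + 0.5 * rng.normal(size=t.size)
child = shared + 0.5 * rng.normal(size=t.size)

# Magnitude-squared coherence via Welch's method
f, Cxy = coherence(mother, child, fs=fs, nperseg=512)
theta_coh = Cxy[np.argmin(np.abs(f - 6))]  # coherence at the bin nearest 6 Hz
print(theta_coh)
```

Comparing such band-limited coherence between conditions (uninterrupted vs. interrupted reading) is one common way to operationalize changes in dyadic neural synchrony.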
Affiliation(s)
- Michal Zivan
- Educational Neuroimaging Group, Faculty of Education in Science and Technology and the Faculty of Biomedical Engineering, Technion.,Faculty of Education in Science and Technology, Technion - Israel Institute of Technology, Haifa, Israel
| | - Carmel Gashri
- Educational Neuroimaging Group, Faculty of Education in Science and Technology and the Faculty of Biomedical Engineering, Technion
| | - Nir Habuba
- Educational Neuroimaging Group, Faculty of Education in Science and Technology and the Faculty of Biomedical Engineering, Technion
| | - Tzipi Horowitz-Kraus
- Educational Neuroimaging Group, Faculty of Education in Science and Technology and the Faculty of Biomedical Engineering, Technion.,Faculty of Education in Science and Technology, Technion - Israel Institute of Technology, Haifa, Israel
14
Nishimura M, Song WJ. Region-dependent Millisecond Time-scale Sensitivity in Spectrotemporal Integrations in Guinea Pig Primary Auditory Cortex. Neuroscience 2022; 480:229-245. [PMID: 34762984 DOI: 10.1016/j.neuroscience.2021.10.030] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2021] [Revised: 10/28/2021] [Accepted: 10/29/2021] [Indexed: 11/18/2022]
Abstract
Spectrotemporal integration is a key function of our auditory system for discriminating spectrotemporally complex sounds, such as words. Response latency in the auditory cortex is known to change on a millisecond time scale depending on acoustic parameters such as sound frequency and intensity. The functional significance of this millisecond-range latency difference for integration remains unclear; indeed, whether the auditory cortex is even sensitive to millisecond-range differences has not been systematically examined. Here, we examined this sensitivity in the primary auditory cortex (A1) of guinea pigs using voltage-sensitive dye imaging. Bandpass noise bursts in two different bands (band-noises), centered at 1 and 16 kHz respectively, were used. Onset times of the individual band-noises (spectral onset-times) were varied to virtually cancel or magnify the latency difference observed with the band-noises. Conventionally defined nonlinear effects in integration were analyzed in A1 while varying the sound intensities (and hence response latencies) and/or the spectral onset-times of the two band-noises. The nonlinear effect measured in the high-frequency region of A1 changed linearly with the millisecond difference in response onset-times, which were estimated from the spatially local response latencies and the spectral onset-times. In contrast, the low-frequency region of A1 showed no significant sensitivity to the millisecond difference. Millisecond-range latency differences may therefore have functional significance in spectrotemporal integration, with millisecond time-scale sensitivity in the high-frequency region of A1 but not in the low-frequency region.
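A "conventionally defined nonlinear effect" of this kind is typically the compound-stimulus response minus the linear sum of the single-band responses. A minimal sketch (illustrative only; response values are hypothetical, in arbitrary units):

```python
def nonlinear_effect(compound, low_only, high_only):
    """Sample-wise nonlinearity index: response to the two-band compound
    minus the linear sum of the responses to each band-noise alone.
    Positive values indicate supralinear (facilitative) integration."""
    return [c - (l + h) for c, l, h in zip(compound, low_only, high_only)]
```

For example, a compound response of 4 against single-band responses of 1 and 2 gives a supralinear effect of 1.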
Affiliation(s)
- Masataka Nishimura
- Department of Sensory and Cognitive Physiology, Faculty of Life Sciences, Kumamoto University, 1-1-1 Honjo, Kumamoto 8608556, Japan.
| | - Wen-Jie Song
- Department of Sensory and Cognitive Physiology, Faculty of Life Sciences, Kumamoto University, 1-1-1 Honjo, Kumamoto 8608556, Japan; Program for Leading Graduate Schools HIGO Program, Kumamoto University, Kumamoto, Japan
15
Gong B, Li Q, Zhao Y, Wu C. Auditory emotion recognition deficits in schizophrenia: A systematic review and meta-analysis. Asian J Psychiatr 2021; 65:102820. [PMID: 34482183 DOI: 10.1016/j.ajp.2021.102820] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/14/2021] [Accepted: 08/24/2021] [Indexed: 01/11/2023]
Abstract
BACKGROUND Auditory emotion recognition (AER) deficits refer to abnormal identification and interpretation of the tonal or prosodic features that transmit emotional information in sounds or speech. Evidence suggests that AER deficits are related to the pathology of schizophrenia. However, the effect size of the deficit for specific emotional categories in schizophrenia, and its association with psychotic symptoms, has never been evaluated through a meta-analysis. METHODS A systematic search for literature published in English or Chinese until November 30, 2020 was conducted in the PubMed, Embase, Web of Science, PsycINFO, China National Knowledge Infrastructure (CNKI), WanFang and Weipu databases. AER differences between patients and healthy controls (HCs) were assessed by standardized mean differences (SMDs). Subgroup analyses were conducted for the type of emotional stimuli and for the diagnosis of schizophrenia or schizoaffective disorder (Sch/SchA). Meta-regression analyses were performed to assess the influence of patients' age, sex, illness duration, antipsychotic dose, and positive and negative symptoms on the study SMDs. RESULTS Eighteen studies comprising 615 patients (Sch/SchA) and 488 HCs were included in the meta-analysis. Patients exhibited moderate deficits in recognizing neutral, happy, sad, angry, fearful, disgusted, and surprised emotion. Neither the semantic information in the auditory stimuli nor the diagnostic subtype affected AER deficits in schizophrenia. Sadness, anger, and disgust AER deficits were each positively associated with negative symptoms. CONCLUSIONS Patients with schizophrenia have moderate AER deficits, which are associated with negative symptoms. Rehabilitation focusing on improving AER abilities may help improve negative symptoms and the long-term prognosis of schizophrenia.
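The standardized mean differences used in such meta-analyses are commonly computed as Hedges' g (Cohen's d with a small-sample correction). An illustrative sketch, assuming this common form rather than the review's exact software; the inputs are group means, standard deviations, and sample sizes:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Hedges' g) between two groups."""
    # Pooled standard deviation
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                  # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)    # small-sample correction factor
    return d * j
```

With equal group variances, a one-standard-deviation difference in means yields g slightly below 1 because of the correction factor.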
Affiliation(s)
- Bingyan Gong
- Peking University School of Nursing, Beijing 100191, China
| | - Qiuhong Li
- Peking University School of Nursing, Beijing 100191, China
| | - Yiran Zhao
- Peking University School of Nursing, Beijing 100191, China
| | - Chao Wu
- Peking University School of Nursing, Beijing 100191, China.
16
Durfee AZ, Sheppard SM, Blake ML, Hillis AE. Lesion loci of impaired affective prosody: A systematic review of evidence from stroke. Brain Cogn 2021; 152:105759. [PMID: 34118500 PMCID: PMC8324538 DOI: 10.1016/j.bandc.2021.105759] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2020] [Revised: 05/06/2021] [Accepted: 05/24/2021] [Indexed: 02/06/2023]
Abstract
Affective prosody, or the changes in rate, rhythm, pitch, and loudness that convey emotion, has long been implicated as a function of the right hemisphere (RH), yet there is a dearth of literature identifying the specific neural regions associated with its processing. The current systematic review aimed to evaluate the evidence on affective prosody localization in the RH. One hundred and ninety articles from 1970 to February 2020 investigating affective prosody comprehension and production in patients with focal brain damage were identified via database searches. Eleven articles met inclusion criteria, passed quality reviews, and were analyzed for affective prosody localization. Acute, subacute, and chronic lesions demonstrated similar profile characteristics. Localized right antero-superior (i.e., dorsal stream) regions contributed to affective prosody production impairments, whereas damage to more postero-lateral (i.e., ventral stream) regions resulted in affective prosody comprehension deficits. This review provides support that distinct RH regions are vital for affective prosody comprehension and production, aligning with literature reporting RH activation for affective prosody processing in healthy adults as well. The impact of study design on resulting interpretations is discussed.
Affiliation(s)
- Alexandra Zezinka Durfee
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States.
| | - Shannon M Sheppard
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States; Department of Communication Sciences and Disorders, Chapman University Crean College of Health and Behavioral Sciences, Irvine, CA 92618, United States
| | - Margaret L Blake
- Department of Communication Sciences and Disorders, University of Houston College of Liberal Arts and Social Sciences, Houston, TX 77204, United States
| | - Argye E Hillis
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States; Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States; Department of Cognitive Science, Krieger School of Arts and Sciences, Johns Hopkins University, Baltimore, MD 21218, United States
17
Multiple prosodic meanings are conveyed through separate pitch ranges: Evidence from perception of focus and surprise in Mandarin Chinese. COGNITIVE AFFECTIVE & BEHAVIORAL NEUROSCIENCE 2021; 21:1164-1175. [PMID: 34331268 DOI: 10.3758/s13415-021-00930-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 07/06/2021] [Indexed: 11/08/2022]
Abstract
F0 variation is a crucial feature of speech prosody, which can convey linguistic information such as focus and paralinguistic meanings such as surprise. How are multiple layers of information represented with F0 in speech: are they divided into discrete layers of pitch, or do they overlap without clear divisions? We investigated this question by assessing pitch perception of focus and surprise in Mandarin Chinese. Seventeen native Mandarin listeners rated the strength of focus and of surprise conveyed by the same set of synthetically manipulated sentences. An fMRI experiment was conducted to assess neural correlates of the listeners' perceptual responses to the stimuli. Behaviourally, the perceptual threshold for focus was 3 semitones above the baseline and that for surprise was 5 semitones above the baseline. Moreover, the pitch range of 5-12 semitones above the baseline signalled both focus and surprise, indicating considerable overlap between the two types of prosodic information within this range. The neuroimaging data correlated positively with the variations in the behavioural data. A ceiling effect was also found: beyond a certain pitch level, no further behavioural differences or neural activations emerged for the perception of focus or surprise. Together, the results suggest that different layers of prosodic information are represented in F0 through different pitch ranges: paralinguistic information occupies a pitch range beyond that used by linguistic information. Meanwhile, paralinguistic information is represented without obscuring linguistic prosody, allowing F0 to carry the two layers of information in parallel.
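The semitone thresholds reported above can be made concrete with a small sketch (illustrative only; the baseline F0 of 200 Hz and the threshold-based classifier are assumptions for demonstration, not the study's analysis):

```python
import math

def semitones_above(f_hz, baseline_hz):
    """Distance in semitones between a pitch and a baseline F0.
    One octave (a doubling of frequency) equals 12 semitones."""
    return 12 * math.log2(f_hz / baseline_hz)

def perceived_layers(f_hz, baseline_hz):
    """Toy classifier using the reported thresholds:
    >= 3 semitones signals focus; >= 5 semitones additionally signals surprise."""
    st = semitones_above(f_hz, baseline_hz)
    layers = []
    if st >= 3:
        layers.append("focus")
    if st >= 5:
        layers.append("surprise")
    return layers
```

With a 200 Hz baseline, a peak at 240 Hz (about 3.2 semitones) would signal focus only, while 270 Hz (about 5.2 semitones) falls in the overlapping range signalling both focus and surprise.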
18
Chan HL, Low I, Chen LF, Chen YS, Chu IT, Hsieh JC. A novel beamformer-based imaging of phase-amplitude coupling (BIPAC) unveiling the inter-regional connectivity of emotional prosody processing in women with primary dysmenorrhea. J Neural Eng 2021; 18. [PMID: 33691295 DOI: 10.1088/1741-2552/abed83] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2020] [Accepted: 03/10/2021] [Indexed: 12/30/2022]
Abstract
Objective. Neural communication or the interactions of brain regions play a key role in the formation of functional neural networks. A type of neural communication can be measured in the form of phase-amplitude coupling (PAC), which is the coupling between the phase of low-frequency oscillations and the amplitude of high-frequency oscillations. This paper presents a beamformer-based imaging method, beamformer-based imaging of PAC (BIPAC), to quantify the strength of PAC between a seed region and other brain regions.Approach. A dipole is used to model the ensemble of neural activity within a group of nearby neurons and represents a mixture of multiple source components of cortical activity. From ensemble activity at each brain location, the source component with the strongest coupling to the seed activity is extracted, while unrelated components are suppressed to enhance the sensitivity of coupled-source estimation.Main results. In evaluations using simulation data sets, BIPAC proved advantageous with regard to estimation accuracy in source localization, orientation, and coupling strength. BIPAC was also applied to the analysis of magnetoencephalographic signals recorded from women with primary dysmenorrhea in an implicit emotional prosody experiment. In response to negative emotional prosody, auditory areas revealed strong PAC with the ventral auditory stream and occipitoparietal areas in the theta-gamma and alpha-gamma bands, which may respectively indicate the recruitment of auditory sensory memory and attention reorientation. Moreover, patients with more severe pain experience appeared to have stronger coupling between auditory areas and temporoparietal regions.Significance. Our findings indicate that the implicit processing of emotional prosody is altered by menstrual pain experience. The proposed BIPAC is feasible and applicable to imaging inter-regional connectivity based on cross-frequency coupling estimates. 
The experimental results also demonstrate that BIPAC is capable of revealing autonomous brain processing and neurodynamics, which are more subtle than active and attended task-driven processing.
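The phase-amplitude coupling quantity that BIPAC estimates can be illustrated with a minimal mean-vector-length PAC measure on synthetic data (a common PAC estimator used here purely for illustration; the paper's beamformer-based source-space method is considerably more involved):

```python
import numpy as np

def pac_mvl(phase, amplitude):
    """Mean-vector-length phase-amplitude coupling: amplitude-weighted
    resultant of unit vectors at the low-frequency phase, normalized by
    the mean amplitude (0 = no coupling, up to 1 = maximal coupling)."""
    z = amplitude * np.exp(1j * phase)
    return np.abs(z.mean()) / amplitude.mean()

# Synthetic example: "gamma" amplitude locked to a 6 Hz "theta" phase
fs = 500
t = np.arange(0, 10, 1 / fs)
theta_phase = (2 * np.pi * 6 * t) % (2 * np.pi)
coupled_amp = 1 + 0.8 * np.cos(theta_phase)  # amplitude peaks at theta phase 0
flat_amp = np.ones_like(t)                   # amplitude unrelated to phase
```

The coupled signal yields a clearly non-zero index, while the phase-independent amplitude yields an index near zero.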
Affiliation(s)
- Hui-Ling Chan
- Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
| | - Intan Low
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan.,Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan
| | - Li-Fen Chen
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan.,Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan.,Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei, Taiwan
| | - Yong-Sheng Chen
- Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
| | - Ian-Ting Chu
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan
| | - Jen-Chuen Hsieh
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan.,Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan
19
Gergely A, Tóth K, Faragó T, Topál J. Is it all about the pitch? Acoustic determinants of dog-directed speech preference in domestic dogs, Canis familiaris. Anim Behav 2021. [DOI: 10.1016/j.anbehav.2021.04.008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
20
Sheppard SM, Meier EL, Zezinka Durfee A, Walker A, Shea J, Hillis AE. Characterizing subtypes and neural correlates of receptive aprosodia in acute right hemisphere stroke. Cortex 2021; 141:36-54. [PMID: 34029857 DOI: 10.1016/j.cortex.2021.04.003] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2020] [Revised: 03/20/2021] [Accepted: 04/09/2021] [Indexed: 02/04/2023]
Abstract
INTRODUCTION Speakers naturally produce prosodic variations depending on their emotional state. Receptive prosody has several processing stages. We aimed to conduct lesion-symptom mapping to determine whether damage (core infarct or hypoperfusion) to specific brain areas was associated with receptive aprosodia or with impairment at different processing stages in individuals with acute right hemisphere stroke. We also aimed to determine whether different subtypes of receptive aprosodia exist that are characterized by distinctive behavioral performance patterns. METHODS Twenty patients with receptive aprosodia following right hemisphere ischemic stroke were enrolled within five days of stroke; clinical imaging was acquired. Participants completed tests of receptive emotional prosody, and tests of each stage of prosodic processing (Stage 1: acoustic analysis; Stage 2: analyzing abstract representations of acoustic characteristics that convey emotion; Stage 3: semantic processing). Emotional facial recognition was also assessed. LASSO regression was used to identify predictors of performance on each behavioral task. Predictors entered into each model included 14 right hemisphere regions, hypoperfusion in four vascular territories as measured using FLAIR hyperintense vessel ratings, lesion volume, age, and education. A k-medoid cluster analysis was used to identify different subtypes of receptive aprosodia based on performance on the behavioral tasks. RESULTS Impaired receptive emotional prosody and impaired emotional facial expression recognition were both predicted by greater percent damage to the caudate. The k-medoid cluster analysis identified three different subtypes of aprosodia. One group was primarily impaired on Stage 1 processing and primarily had frontotemporal lesions. The second group had a domain-general emotion recognition impairment and maximal lesion overlap in subcortical areas. 
Finally, the third group was characterized by a Stage 2 processing deficit and had lesion overlap in posterior regions. CONCLUSIONS Subcortical structures, particularly the caudate, play an important role in emotional prosody comprehension. Receptive aprosodia can result from impairments at different processing stages.
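The k-medoid clustering step used to identify aprosodia subtypes can be sketched in miniature (an illustrative Voronoi-iteration k-medoids on toy 2-D data; the study's actual behavioral features and implementation are not specified here):

```python
import numpy as np

def k_medoids(X, k, n_iter=100):
    """Minimal k-medoids clustering (Voronoi iteration) with a deterministic
    farthest-point initialization, using Euclidean distances."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    medoids = [0]                                   # seed with the first point
    for _ in range(k - 1):                          # greedily add farthest points
        medoids.append(int(D[:, medoids].min(axis=1).argmax()))
    medoids = np.array(medoids)
    for _ in range(n_iter):
        labels = D[:, medoids].argmin(axis=1)       # assign to nearest medoid
        new = medoids.copy()
        for j in range(k):                          # best medoid within each cluster
            members = np.flatnonzero(labels == j)
            new[j] = members[D[np.ix_(members, members)].sum(axis=0).argmin()]
        if np.array_equal(new, medoids):            # converged
            break
        medoids = new
    return labels, medoids
```

On three well-separated point clouds, the procedure recovers one cluster per cloud, each represented by an actual data point (the medoid), which is why k-medoids suits behavioral-profile data better than centroid-based k-means.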
Affiliation(s)
- Shannon M Sheppard
- Department of Communication Sciences & Disorders, Chapman University, Irvine, CA, USA; Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA.
| | - Erin L Meier
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| | | | - Alex Walker
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| | - Jennifer Shea
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| | - Argye E Hillis
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Department of Cognitive Science, Krieger School of Arts and Sciences, Johns Hopkins University, Baltimore, MD, USA
21
Zacharia AA, Ahuja N, Kaur S, Sharma R. Frontal activation as a key for deciphering context congruity and valence during visual perception: An electrical neuroimaging study. Brain Cogn 2021; 150:105711. [PMID: 33774336 DOI: 10.1016/j.bandc.2021.105711] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2020] [Revised: 01/12/2021] [Accepted: 02/24/2021] [Indexed: 11/20/2022]
Abstract
Object-context associations and valence are two important stimulus attributes that influence visual perception. The current study investigated the neural sources associated with schema-congruent and schema-incongruent object-context associations within positive, negative, and neutral valence categories during an intermittent binocular rivalry task with simultaneous high-density EEG recording. Cortical sources were calculated using the sLORETA algorithm in two time windows: 150 ms after stimulus onset (Stim+150) and 400 ms before response (Resp-400). No significant difference in source activity was found between congruent and incongruent associations in any valence category in the Stim+150 window, indicating that immediately after stimulus presentation the basic visual processing is the same for both. In the Resp-400 window, different frontal regions showed higher activity for incongruent associations depending on valence: the superior frontal gyrus showed significantly higher activations for negative valence, the middle and medial frontal gyri for neutral valence, and the inferior frontal gyrus for positive valence. Besides replicating previous findings of frontal activations in response to context congruity, the current study provides further evidence for the sensitivity of the frontal lobe to the valence associated with incongruent stimuli.
Affiliation(s)
- Angel Anna Zacharia
- Stress and Cognitive Electroimaging Lab, Department of Physiology, All India Institute of Medical Sciences, New Delhi 110029, India
| | - Navdeep Ahuja
- Stress and Cognitive Electroimaging Lab, Department of Physiology, All India Institute of Medical Sciences, New Delhi 110029, India
| | - Simran Kaur
- Stress and Cognitive Electroimaging Lab, Department of Physiology, All India Institute of Medical Sciences, New Delhi 110029, India
| | - Ratna Sharma
- Stress and Cognitive Electroimaging Lab, Department of Physiology, All India Institute of Medical Sciences, New Delhi 110029, India.
22
Right Broca's area is hyperactive in right-handed subjects during meditation: Possible clinical implications? Med Hypotheses 2021; 150:110556. [PMID: 33812300 DOI: 10.1016/j.mehy.2021.110556] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2021] [Accepted: 02/26/2021] [Indexed: 11/23/2022]
Abstract
Broca's area, conventionally located in the left (categorical) hemisphere of the brain, is responsible for integrating linguistic and non-linguistic processing; however, the functionality of its right homolog remains only partly understood and explored. This perception is based on the fact that in 96% of right-handed individuals, who constitute 91% of the human population, the left hemisphere is the dominant or categorical hemisphere. Here, we introduce the novel, science-based hypothesis that the right homolog of Broca's region, which we observed to be hyperactive during attention-focused meditation, might also play an important role in patients with attention deficits and language and speech disorders. Meditation comprises self-regulation practices that focus on attention and awareness to achieve better control over mental processes. Positron emission tomography of the brain in twelve (12) apparently healthy, male, right-handed long-term meditators showed that the right Broca's area was significantly hyperactive (p = 0.002) during Meditation vs. Baseline, whereas there was only a subtle increase in the activity of the left Broca's area. Our results suggest that the hitherto only partly explored and understood right homolog of Broca's area may have an important role, especially during meditation, which needs to be explored further.
23
Arioli M, Ricciardi E, Cattaneo Z. Social cognition in the blind brain: A coordinate-based meta-analysis. Hum Brain Mapp 2020; 42:1243-1256. [PMID: 33320395 PMCID: PMC7927293 DOI: 10.1002/hbm.25289] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2020] [Revised: 10/05/2020] [Accepted: 10/31/2020] [Indexed: 01/04/2023] Open
Abstract
Social cognition skills are typically acquired on the basis of visual information (e.g., the observation of gaze, facial expressions, and gestures). In light of this, a critical issue is whether and how the lack of visual experience affects the neurocognitive mechanisms underlying social skills. This issue has been largely neglected in the literature on blindness, even though difficulties in social interactions may be particularly salient in the lives of blind individuals (especially children). Here we provide a meta-analysis of neuroimaging studies reporting brain activations associated with the representation of self and others in early blind individuals and in sighted controls. Our results indicate that early blindness does not critically impact the development of the "social brain": social tasks performed on the basis of auditory or tactile information drove consistent activations in nodes of the action observation network, which is typically active during the actual observation of others in sighted individuals. Interestingly, though, activations along this network appeared more left-lateralized in blind than in sighted participants. These results may have important implications for the development of specific training programs to improve social skills in blind children and young adults.
Affiliation(s)
- Maria Arioli
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
| | | | - Zaira Cattaneo
- Department of Psychology, University of Milano-Bicocca, Milan, Italy.,IRCCS Mondino Foundation, Pavia, Italy
24
Carrière M, Cassol H, Aubinet C, Panda R, Thibaut A, Larroque SK, Simon J, Martial C, Bahri MA, Chatelle C, Martens G, Chennu S, Laureys S, Gosseries O. Auditory localization should be considered as a sign of minimally conscious state based on multimodal findings. Brain Commun 2020; 2:fcaa195. [PMID: 33426527 PMCID: PMC7784043 DOI: 10.1093/braincomms/fcaa195] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2020] [Revised: 08/24/2020] [Accepted: 08/31/2020] [Indexed: 12/16/2022] Open
Abstract
Auditory localization (i.e. turning the head and/or the eyes towards an auditory stimulus) is often part of the clinical evaluation of patients recovering from coma. The objective of this study is to determine whether auditory localization could be considered as a new sign of minimally conscious state, using a multimodal approach. The presence of auditory localization and the clinical outcome at 2 years of follow-up were evaluated in 186 patients with severe brain injury, including 64 with unresponsive wakefulness syndrome, 28 in minimally conscious state minus, 71 in minimally conscious state plus and 23 who emerged from the minimally conscious state. Brain metabolism, functional connectivity and graph theory measures were investigated by means of 18F-fluorodeoxyglucose positron emission tomography, functional MRI and high-density electroencephalography in two subgroups of unresponsive patients, with and without auditory localization. These two subgroups were also compared to a subgroup of patients in minimally conscious state minus. Auditory localization was observed in 13% of unresponsive patients, 46% of patients in minimally conscious state minus, 62% of patients in minimally conscious state plus and 78% of patients who emerged from the minimally conscious state. The probability to observe an auditory localization increased along with the level of consciousness, and the presence of auditory localization could predict the level of consciousness. Patients with auditory localization had higher survival rates (at 2-year follow-up) than those without localization. Differences in brain function were found between unresponsive patients with and without auditory localization. Higher connectivity in unresponsive patients with auditory localization was measured between the fronto-parietal network and secondary visual areas, and in the alpha band electroencephalography network. 
Moreover, patients in minimally conscious state minus significantly differed from unresponsive patients without auditory localization in terms of brain metabolism and alpha network centrality, whereas no difference was found with unresponsive patients who presented auditory localization. Our multimodal findings suggest differences in brain function between unresponsive patients with and without auditory localization, which support our hypothesis that auditory localization should be considered as a new sign of minimally conscious state. Unresponsive patients showing auditory localization should therefore no longer be considered unresponsive but minimally conscious. This would have crucial consequences on these patients’ lives as it would directly impact the therapeutic orientation or end-of-life decisions usually taken based on the diagnosis.
Affiliation(s)
- Manon Carrière
- Coma Science Group, GIGA-Consciousness, University of Liège, 4000 Liège, Belgium.,Centre du Cerveau2, University Hospital of Liège, 4000 Liège, Belgium
| | - Helena Cassol
- Coma Science Group, GIGA-Consciousness, University of Liège, 4000 Liège, Belgium.,Centre du Cerveau2, University Hospital of Liège, 4000 Liège, Belgium
| | - Charlène Aubinet
- Coma Science Group, GIGA-Consciousness, University of Liège, 4000 Liège, Belgium.,Centre du Cerveau2, University Hospital of Liège, 4000 Liège, Belgium
| | - Rajanikant Panda
- Coma Science Group, GIGA-Consciousness, University of Liège, 4000 Liège, Belgium.,Centre du Cerveau2, University Hospital of Liège, 4000 Liège, Belgium
| | - Aurore Thibaut
- Coma Science Group, GIGA-Consciousness, University of Liège, 4000 Liège, Belgium.,Centre du Cerveau2, University Hospital of Liège, 4000 Liège, Belgium
| | - Stephen K Larroque
- Coma Science Group, GIGA-Consciousness, University of Liège, 4000 Liège, Belgium.,Centre du Cerveau2, University Hospital of Liège, 4000 Liège, Belgium
| | - Jessica Simon
- Psychology and Neurosciences of Cognition PsyNCogn, University of Liège, 4000 Liège, Belgium
| | - Charlotte Martial
- Coma Science Group, GIGA-Consciousness, University of Liège, 4000 Liège, Belgium.,Centre du Cerveau2, University Hospital of Liège, 4000 Liège, Belgium
| | - Mohamed A Bahri
- GIGA-Cyclotron Research Centre-In Vivo Imaging, University of Liège, 4000 Liège, Belgium
| | - Camille Chatelle
- Coma Science Group, GIGA-Consciousness, University of Liège, 4000 Liège, Belgium.,Centre du Cerveau2, University Hospital of Liège, 4000 Liège, Belgium
| | - Géraldine Martens
- Coma Science Group, GIGA-Consciousness, University of Liège, 4000 Liège, Belgium.,Centre du Cerveau2, University Hospital of Liège, 4000 Liège, Belgium
| | - Srivas Chennu
- School of Computing, University of Kent, Chatham Maritime ME4 4AG, UK.,Department of Clinical Neurosciences, University of Cambridge, Cambridge CB2 0QQ, UK
| | - Steven Laureys
- Coma Science Group, GIGA-Consciousness, University of Liège, 4000 Liège, Belgium.,Centre du Cerveau2, University Hospital of Liège, 4000 Liège, Belgium
| | - Olivia Gosseries
- Coma Science Group, GIGA-Consciousness, University of Liège, 4000 Liège, Belgium.,Centre du Cerveau2, University Hospital of Liège, 4000 Liège, Belgium
25
Charpentier J, Latinus M, Andersson F, Saby A, Cottier JP, Bonnet-Brilhault F, Houy-Durand E, Gomot M. Brain correlates of emotional prosodic change detection in autism spectrum disorder. NEUROIMAGE-CLINICAL 2020; 28:102512. [PMID: 33395999 PMCID: PMC8481911 DOI: 10.1016/j.nicl.2020.102512] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/16/2020] [Revised: 11/17/2020] [Accepted: 11/20/2020] [Indexed: 11/30/2022]
Abstract
Highlights:
- We used an oddball paradigm with vocal stimuli to record hemodynamic responses.
- Brain processing of vocal change relies on the STG, insula, and lingual area.
- Activity of the change-processing network can be modulated by saliency and emotion.
- Brain processing of vocal deviancy/novelty appears typical in adults with autism.
Autism Spectrum Disorder (ASD) is currently diagnosed by the joint presence of social impairments and restrictive, repetitive patterns of behavior. While the co-occurrence of these two categories of symptoms is at the core of the pathology, most studies have investigated only one dimension to understand the underlying pathophysiology. In this study, we analyzed brain hemodynamic responses in neurotypical adults (CTRL) and adults with ASD during an oddball paradigm that allowed us to explore brain responses to vocal changes with different levels of saliency (deviancy or novelty) and different emotional content (neutral, angry). Change detection relied on activation of the supratemporal gyrus and insula and on deactivation of the lingual area. The activity of these brain areas involved in processing deviancy in vocal stimuli was modulated by saliency and emotion. No group difference between CTRL and ASD was observed for vocal stimulus processing or for deviancy/novelty processing, regardless of emotional content. These findings highlight that brain processing of voices and of neutral/emotional vocal changes is typical in adults with ASD. Yet, at the behavioral level, persons with ASD still experience difficulties with these cues. This might indicate impairments at later processing stages, or simply show that alterations present in childhood have repercussions in adulthood.
Affiliation(s)
- Agathe Saby: Centre universitaire de pédopsychiatrie, CHRU de Tours, Tours, France
- Emmanuelle Houy-Durand: UMR 1253 iBrain, Inserm, Université de Tours, Tours, France; Centre universitaire de pédopsychiatrie, CHRU de Tours, Tours, France
- Marie Gomot: UMR 1253 iBrain, Inserm, Université de Tours, Tours, France

26
Murphy LE, Bachevalier J. Damage to Orbitofrontal Areas 12 and 13, but Not Area 14, Results in Blunted Attention and Arousal to Socioemotional Stimuli in Rhesus Macaques. Front Behav Neurosci 2020; 14:150. [PMID: 33093825] [PMCID: PMC7506161] [DOI: 10.3389/fnbeh.2020.00150]
Abstract
An earlier study in monkeys indicated that lesions to the mid-portion of the ventral orbitofrontal cortex (OFC), including Walker’s areas 11 and 13 (OFC11/13), altered the spontaneous scanning of still pictures of primate faces (neutral and emotional) and the modulation of arousal. Yet, these conclusions were limited by several shortcomings, including the lesion approach, use of static rather than dynamic stimuli, and manual data analyses. To confirm and extend these earlier findings, we compared attention and arousal to social and nonsocial scenes in three groups of rhesus macaques with restricted lesions to one of three OFC areas (OFC12, OFC13, or OFC14) and a sham-operated control group using eye-tracking to capture scanning patterns, focal attention and pupil size. Animals with damage to the lateral OFC areas (OFC12 and OFC13) showed decreased attention specifically to the eyes of negative (threatening) social stimuli and increased arousal (increased pupil diameter) to positive social scenes. In contrast, animals with damage to the ventromedial OFC area (OFC14) displayed no differences in attention or arousal in the presence of social stimuli compared to controls. These findings support the notion that areas of the lateral OFC are critical for directing attention and modulating arousal to emotional social cues. Together with the existence of face-selective neurons in these lateral OFC areas, the data suggest that the lateral OFC may set the stage for multidimensional information processing related to faces and emotion and may be involved in social judgments.
Affiliation(s)
- Lauren E Murphy: Department of Psychology, Emory College of Arts and Sciences, Emory University, Atlanta, GA, United States
- Jocelyne Bachevalier: Department of Psychology, Emory College of Arts and Sciences, Emory University, Atlanta, GA, United States; Yerkes National Primate Research Center, Emory University, Atlanta, GA, United States

27
Kuiper JJ, Lin YH, Young IM, Bai MY, Briggs RG, Tanglay O, Fonseka RD, Hormovas J, Dhanaraj V, Conner AK, O'Neal CM, Sughrue ME. A parcellation-based model of the auditory network. Hear Res 2020; 396:108078. [PMID: 32961519] [DOI: 10.1016/j.heares.2020.108078]
Abstract
INTRODUCTION The auditory network plays an important role in interaction with the environment. Multiple cortical areas, such as the inferior frontal gyrus, superior temporal gyrus, and adjacent insula, have been implicated in this processing. However, understanding of this network's connectivity has lacked tractographic specificity. METHODS Using attention task-based functional magnetic resonance imaging (fMRI) studies, an activation likelihood estimation (ALE) of the auditory network was generated. Regions of interest corresponding to the cortical parcellation scheme previously published under the Human Connectome Project were co-registered onto the ALE in Montreal Neurological Institute coordinate space and visually assessed for inclusion in the network. Diffusion spectrum MRI-based fiber tractography was performed to determine the structural connections between the cortical parcellations comprising the network. RESULTS Fifteen cortical regions were found to be part of the auditory network: areas 44 and 8C; auditory areas 1, 4, and 5; frontal operculum area 4; the lateral belt, medial belt, and parabelt; parietal area F centromedian; the perisylvian language area; the retroinsular cortex; the supplementary and cingulate eye field; and temporoparietal junction area 1. These regions showed consistent interconnections between adjacent parcellations. The frontal aslant tract was found to connect areas within the frontal lobe, the arcuate fasciculus to connect the frontal and temporal lobes, and subcortical U-fibers to connect parcellations within the temporal area. Further studies may refine this model, with the ultimate goal of clinical application.
Affiliation(s)
- Joseph J Kuiper: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Yueh-Hsin Lin: Centre for Minimally Invasive Neurosurgery, Suite 19, Level 7, Prince of Wales Private Hospital, Randwick, Sydney, NSW 2031, Australia
- Michael Y Bai: Centre for Minimally Invasive Neurosurgery, Suite 19, Level 7, Prince of Wales Private Hospital, Randwick, Sydney, NSW 2031, Australia
- Robert G Briggs: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Onur Tanglay: Centre for Minimally Invasive Neurosurgery, Suite 19, Level 7, Prince of Wales Private Hospital, Randwick, Sydney, NSW 2031, Australia
- R Dineth Fonseka: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Jorge Hormovas: Centre for Minimally Invasive Neurosurgery, Suite 19, Level 7, Prince of Wales Private Hospital, Randwick, Sydney, NSW 2031, Australia
- Vukshitha Dhanaraj: Centre for Minimally Invasive Neurosurgery, Suite 19, Level 7, Prince of Wales Private Hospital, Randwick, Sydney, NSW 2031, Australia
- Andrew K Conner: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Christen M O'Neal: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Michael E Sughrue: Centre for Minimally Invasive Neurosurgery, Suite 19, Level 7, Prince of Wales Private Hospital, Randwick, Sydney, NSW 2031, Australia

28
Zacharia AA, Ahuja N, Kaur S, Sharma R. State-dependent perception and perceptual reversals during intermittent binocular rivalry: An electrical neuroimaging study. Neurosci Lett 2020; 736:135252. [PMID: 32687954] [DOI: 10.1016/j.neulet.2020.135252]
Abstract
The object-context relationship and valence are two important stimulus attributes that affect visual perception. Although previous studies reveal how these two factors affect visual perception individually, the interplay of valence with congruent or incongruent object-context associations during visual perception has scarcely been explored. Further, what is perceived is affected by the intrinsic state of the brain at the moment the stimulus appears, which can be assessed by EEG microstates. Hence, the current study was designed to explore how pre-stimulus EEG microstates influence the perception of emotionally congruent and incongruent stimuli, as well as perceptual reversals and stability, during intermittent binocular rivalry. Results revealed an association of specific pre-stimulus microstates with the perception of neutral and negative congruent stimuli, as well as with perceptual reversals and stability. Electrical neuroimaging of these microstates showed higher activation in the precuneus and middle occipital gyrus preceding the perception of neutral congruent stimuli, and in the lingual gyrus preceding the perception of negative congruent stimuli. Increased source activity in the superior temporal gyrus and superior frontal gyrus was found preceding stability, and lower activation in the parahippocampal gyrus was observed preceding reversals. Together, these results suggest that pre-stimulus activation of areas involved in visual priming, retrieval, and semantics leads to congruent perception. Pre-stimulus DMN suppression was required for perceptual reversals, whereas stability was accompanied by pre-stimulus activation of areas related to the specific nature of the stimulus. Therefore, we propose that in addition to stimulus attributes, pre-stimulus intrinsic brain activity could be an important determinant of performance.
Affiliation(s)
- Angel Anna Zacharia: Stress and Cognitive Electroimaging Lab, Department of Physiology, All India Institute of Medical Sciences, New Delhi, 110029, India
- Navdeep Ahuja: Stress and Cognitive Electroimaging Lab, Department of Physiology, All India Institute of Medical Sciences, New Delhi, 110029, India
- Simran Kaur: Stress and Cognitive Electroimaging Lab, Department of Physiology, All India Institute of Medical Sciences, New Delhi, 110029, India
- Ratna Sharma: Stress and Cognitive Electroimaging Lab, Department of Physiology, All India Institute of Medical Sciences, New Delhi, 110029, India

29
Abstract
The processing of emotional nonlinguistic information in speech is defined as emotional prosody. This auditory nonlinguistic information is essential in the decoding of social interactions and in our capacity to adapt and react adequately by taking into account contextual information. An integrated model is proposed at the functional and brain levels, encompassing 5 main systems that involve cortical and subcortical neural networks relevant for the processing of emotional prosody in its major dimensions, including perception and sound organization; related action tendencies; and associated values that integrate complex social contexts and ambiguous situations.
Affiliation(s)
- Didier Grandjean: Department of Psychology and Educational Sciences and Swiss Center for Affective Sciences, University of Geneva, Switzerland

30
Lin SY, Lee CC, Chen YS, Kuo LW. Investigation of functional brain network reconfiguration during vocal emotional processing using graph-theoretical analysis. Soc Cogn Affect Neurosci 2020; 14:529-538. [PMID: 31157395] [PMCID: PMC6545541] [DOI: 10.1093/scan/nsz025]
Abstract
Vocal expression is essential for conveying emotion during social interaction. Although vocal emotion has been explored in previous studies, little is known about how the perception of different vocal emotional expressions modulates functional brain network topology. In this study, we aimed to investigate the functional brain networks underlying different attributes of vocal emotion using graph-theoretical network analysis. Functional magnetic resonance imaging (fMRI) experiments were performed on 36 healthy participants. We utilized the Power-264 functional brain atlas to calculate interregional functional connectivity (FC) from fMRI data acquired at rest and during vocal stimuli at different arousal and valence levels. The orthogonal minimal spanning trees method was used for topological filtering. Paired-sample t-tests with Bonferroni correction across all regions and arousal-valence levels were used for statistical comparisons. Our results show that the brain network exhibits significantly altered attributes at the FC, nodal, and global levels, especially under high-arousal or negative-valence vocal emotional stimuli. Alterations within and between well-known large-scale functional networks were also investigated. Through the present study, we have gained more insight into how comprehending emotional speech modulates brain networks. These findings may shed light on how the human brain processes emotional speech and distinguishes different emotional conditions.
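The spanning-tree filtering step mentioned in this abstract keeps only the strongest-connection backbone of a functional connectivity matrix before graph metrics are computed. A minimal sketch of that idea, assuming a single minimum spanning tree built with Prim's algorithm over the distance `1 - |FC|` (the authors' orthogonal-MST procedure combines several such trees; this is illustrative, not their code):

```python
import numpy as np

def mst_backbone(fc):
    """Prim's algorithm: minimum spanning tree over distance = 1 - |FC|.

    Returns n-1 edges (i, j) forming the strongest-connection backbone,
    a common topological filter before computing graph metrics."""
    n = fc.shape[0]
    dist = 1.0 - np.abs(fc)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    edges = []
    for _ in range(n - 1):
        # Cheapest edge crossing from the tree to the rest of the graph
        best = (np.inf, -1, -1)
        for i in np.flatnonzero(in_tree):
            for j in np.flatnonzero(~in_tree):
                if dist[i, j] < best[0]:
                    best = (dist[i, j], i, j)
        _, i, j = best
        edges.append((int(i), int(j)))
        in_tree[j] = True
    return edges
```

On a toy 4-node FC matrix whose strongest correlations form a chain 0-1-2-3, the backbone recovers exactly those three edges.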
Affiliation(s)
- Shih-Yen Lin: Institute of Biomedical Engineering and Nanomedicine, National Health Research Institutes, Miaoli, Taiwan; Department of Computer Science, National Chiao Tung University, Hsinchu, Taiwan
- Chi-Chun Lee: Department of Electrical Engineering, National Tsing Hua University, Hsinchu, Taiwan
- Yong-Sheng Chen: Department of Computer Science, National Chiao Tung University, Hsinchu, Taiwan
- Li-Wei Kuo: Institute of Biomedical Engineering and Nanomedicine, National Health Research Institutes, Miaoli, Taiwan; Institute of Medical Device and Imaging, National Taiwan University College of Medicine, Taipei, Taiwan

31
Gao J, Zhang D, Wang L, Wang W, Fan Y, Tang M, Zhang X, Lei X, Wang Y, Yang J, Zhang X. Altered Effective Connectivity in Schizophrenic Patients With Auditory Verbal Hallucinations: A Resting-State fMRI Study With Granger Causality Analysis. Front Psychiatry 2020; 11:575. [PMID: 32670108] [PMCID: PMC7327618] [DOI: 10.3389/fpsyt.2020.00575]
Abstract
PURPOSE Auditory verbal hallucinations (AVH) are among the most common and prominent symptoms of schizophrenia. Although abnormal functional connectivity associated with AVH has been reported in multiple regions, the changes in information flow remain unclear. In this study, we aimed to elucidate causal influences related to AVH in key regions of the auditory, language, and memory networks using Granger causality analysis (GCA). PATIENTS AND METHODS Eighteen patients with schizophrenia with AVH and eighteen matched patients without AVH were enrolled and underwent resting-state fMRI. The bilateral superior temporal gyrus (STG), Broca's area, Wernicke's area, putamen, and hippocampus were selected as regions of interest. RESULTS Granger causality (GC) increased from Broca's area to the left STG, and decreased from the right homolog of Wernicke's area to the right homolog of Broca's area and from the right STG to the right hippocampus, in the AVH group compared with the non-AVH group. Correlation analysis showed that the normalized GC ratios from the left STG to Broca's area, from the left STG to the right homolog of Broca's area, and from the right STG to the right homolog of Broca's area were negatively correlated with AVH severity, while the normalized GC ratios from Broca's area to the left hippocampus and from Broca's area to the right STG were positively correlated with AVH severity. CONCLUSION Our findings indicate a causal influence among pivotal regions of the auditory, language, and memory networks in schizophrenia with AVH, providing a deeper understanding of the neural mechanisms underlying AVH.
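Granger causality of the kind used here asks whether the past of one region's time series improves prediction of another's beyond that series' own past. A minimal bivariate sketch, assuming ordinary-least-squares autoregressions and a log variance-ratio statistic (the lag order of 2 is arbitrary; this illustrates the concept, not the authors' fMRI pipeline):

```python
import numpy as np

def granger_causality(x, y, lag=2):
    """Return log(var_restricted / var_full) for predicting y.

    Restricted model uses only y's own past; the full model adds x's past.
    Values clearly above 0 suggest x Granger-causes y."""
    n = len(y)
    target = y[lag:]
    # Lagged copies: y's own past and x's past as candidate cause
    own = np.column_stack([y[lag - k:n - k] for k in range(1, lag + 1)])
    cross = np.column_stack([x[lag - k:n - k] for k in range(1, lag + 1)])
    # Restricted model: intercept + y's own past
    Xr = np.column_stack([np.ones(n - lag), own])
    res_r = target - Xr @ np.linalg.lstsq(Xr, target, rcond=None)[0]
    # Full model: also includes x's past
    Xf = np.column_stack([Xr, cross])
    res_f = target - Xf @ np.linalg.lstsq(Xf, target, rcond=None)[0]
    return float(np.log(np.var(res_r) / np.var(res_f)))
```

Driving a synthetic `y` with lagged `x` yields a large statistic in the x-to-y direction and one near zero in the reverse direction, which is the asymmetry GCA exploits.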
Affiliation(s)
- Jie Gao: Department of MRI, Shaanxi Provincial People's Hospital, Xi'an, China
- Dongsheng Zhang: Department of MRI, Shaanxi Provincial People's Hospital, Xi'an, China
- Lei Wang: Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China
- Wei Wang: Department of Psychiatry, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China
- Yajuan Fan: Department of Psychiatry, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China
- Min Tang: Department of MRI, Shaanxi Provincial People's Hospital, Xi'an, China
- Xin Zhang: Department of MRI, Shaanxi Provincial People's Hospital, Xi'an, China
- Xiaoyan Lei: Department of MRI, Shaanxi Provincial People's Hospital, Xi'an, China
- Yarong Wang: Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China
- Jian Yang: Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China
- Xiaoling Zhang: Department of MRI, Shaanxi Provincial People's Hospital, Xi'an, China

32
What you say versus how you say it: Comparing sentence comprehension and emotional prosody processing using fMRI. Neuroimage 2019; 209:116509. [PMID: 31899288] [DOI: 10.1016/j.neuroimage.2019.116509]
Abstract
While language processing is often described as lateralized to the left hemisphere (LH), the processing of emotion carried by vocal intonation is typically attributed to the right hemisphere (RH) and more specifically, to areas mirroring the LH language areas. However, the evidence base for this hypothesis is inconsistent, with some studies supporting right-lateralization but others favoring bilateral involvement in emotional prosody processing. Here we compared fMRI activations for an emotional prosody task with those for a sentence comprehension task in 20 neurologically healthy adults, quantifying lateralization using a lateralization index. We observed right-lateralized frontotemporal activations for emotional prosody that roughly mirrored the left-lateralized activations for sentence comprehension. In addition, emotional prosody also evoked bilateral activation in pars orbitalis (BA47), amygdala, and anterior insula. These findings are consistent with the idea that analysis of the auditory speech signal is split between the hemispheres, possibly according to their preferred temporal resolution, with the left preferentially encoding phonetic and the right encoding prosodic information. Once processed, emotional prosody information is fed to domain-general emotion processing areas and integrated with semantic information, resulting in additional bilateral activations.
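The lateralization index (LI) used in this study is conventionally computed from a left- and a right-hemisphere activation measure (e.g., supra-threshold voxel counts or summed beta weights), with positive values indicating left-lateralization. A minimal sketch of the standard formula; the ±0.2 bilateral cutoff is a common convention in the LI literature, not necessarily the threshold this study used:

```python
def lateralization_index(left: float, right: float) -> float:
    """LI = (L - R) / (L + R), bounded in [-1, 1]; > 0 = left-lateralized."""
    total = left + right
    if total == 0:
        raise ValueError("no activation in either hemisphere")
    return (left - right) / total

def classify(li: float, cutoff: float = 0.2) -> str:
    """Common convention: |LI| <= cutoff counts as bilateral."""
    if li > cutoff:
        return "left-lateralized"
    if li < -cutoff:
        return "right-lateralized"
    return "bilateral"
```

For example, 80 active voxels on the left against 20 on the right gives LI = 0.6 (left-lateralized), whereas a 45/55 split gives LI = -0.1 (bilateral).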
33
Manno FAM, Lau C, Fernandez-Ruiz J, Manno SHC, Cheng SH, Barrios FA. The human amygdala disconnecting from auditory cortex preferentially discriminates musical sound of uncertain emotion by altering hemispheric weighting. Sci Rep 2019; 9:14787. [PMID: 31615998] [PMCID: PMC6794305] [DOI: 10.1038/s41598-019-50042-1]
Abstract
How do humans discriminate emotion from non-emotion? The specific psychophysical cues and neural responses involved in resolving emotional information in sound are unknown. In this study we used a discrimination psychophysical-fMRI sparse sampling paradigm to locate threshold responses to happy and sad acoustic stimuli. The fine structure and envelope of auditory signals were covaried to vary emotional certainty. We report that emotion identification at threshold in music utilizes fine structure cues. The auditory cortex was activated but did not vary with emotional uncertainty. Amygdala activation was modulated by emotion identification and was absent when emotional stimuli were identifiable only at chance, especially in the left hemisphere. The right amygdala was considerably more deactivated in response to uncertain emotion. The threshold of emotion was signified by right amygdala deactivation and a change in left amygdala activation greater than that in the right amygdala. Functional sex differences were noted during binaural presentation of uncertain emotional stimuli, where the right amygdala showed larger activation in females. Negative control (silent stimuli) experiments investigated sparse sampling of silence to ensure that modulation effects were inherent to emotional resolvability. No functional modulation of Heschl's gyrus occurred during silence; however, during rest the amygdala baseline state was asymmetrically lateralized. The evidence indicates that changing patterns of activation and deactivation between the left and right amygdala are a hallmark feature of discriminating emotion from non-emotion in music.
Affiliation(s)
- Francis A M Manno: School of Biomedical Engineering, Faculty of Engineering, The University of Sydney, Sydney, New South Wales, Australia; Department of Physics, City University of Hong Kong, HKSAR, China
- Condon Lau: Department of Physics, City University of Hong Kong, HKSAR, China
- Juan Fernandez-Ruiz: Departamento de Fisiología, Facultad de Medicina, Universidad Nacional Autónoma de México, México City, 04510, Mexico
- Shuk Han Cheng: Department of Biomedical Sciences, City University of Hong Kong, HKSAR, China
- Fernando A Barrios: Instituto de Neurobiología, Universidad Nacional Autónoma de México, Juriquilla, Querétaro, Mexico

34
Age-related differences in neural activation and functional connectivity during the processing of vocal prosody in adolescence. Cogn Affect Behav Neurosci 2019; 19:1418-1432. [PMID: 31515750] [DOI: 10.3758/s13415-019-00742-y]
Abstract
The ability to recognize others' emotions based on vocal emotional prosody follows a protracted developmental trajectory during adolescence. However, little is known about the neural mechanisms supporting this maturation. The current study investigated age-related differences in neural activation during a vocal emotion recognition (ER) task. Listeners aged 8 to 19 years old completed the vocal ER task while undergoing functional magnetic resonance imaging. The task of categorizing vocal emotional prosody elicited activation primarily in temporal and frontal areas. Age was associated with a) greater activation in regions in the superior, middle, and inferior frontal gyri, b) greater functional connectivity between the left precentral and inferior frontal gyri and regions in the bilateral insula and temporo-parietal junction, and c) greater fractional anisotropy in the superior longitudinal fasciculus, which connects frontal areas to posterior temporo-parietal regions. Many of these age-related differences in brain activation and connectivity were associated with better performance on the ER task. Increased activation in, and connectivity between, areas typically involved in language processing and social cognition may facilitate the development of vocal ER skills in adolescence.
35
Neural underpinnings of numerical and spatial cognition: An fMRI meta-analysis of brain regions associated with symbolic number, arithmetic, and mental rotation. Neurosci Biobehav Rev 2019; 103:316-336. [DOI: 10.1016/j.neubiorev.2019.05.007]
36
Saffarian A, Shavaki YA, Shahidi GA, Jafari Z. Effect of Parkinson Disease on Emotion Perception Using the Persian Affective Voices Test. J Voice 2019; 33:580.e1-580.e9. [DOI: 10.1016/j.jvoice.2018.01.013]
37
Macoir J, Laforce R, Wilson MA, Tremblay MP, Hudon C. The role of semantic memory in the recognition of emotional valence conveyed by written words. Aging Neuropsychol Cogn 2019; 27:270-288. [PMID: 31088253] [DOI: 10.1080/13825585.2019.1606890]
Abstract
The main goal of this study was to examine the role of semantic memory in the recognition of emotional valence conveyed by written words. Eight participants presenting with the semantic variant of primary progressive aphasia (svPPA) and 33 healthy control participants were administered three tasks designed to investigate the formal association between the recognition of emotional valence conveyed by words and the lexical and semantic processing of those words. Results revealed that individuals with svPPA showed deficits in recognizing negative emotional valence conveyed by words. Moreover, their performance in recognizing emotional valence was better for correctly than for incorrectly retrieved lexical entries, whereas it was comparable for words that were correctly or incorrectly associated with semantic concepts. These results suggest that the recognition of emotional valence conveyed by words relies on the retrieval of lexical, but not semantic, representations of words.
Affiliation(s)
- J Macoir: Faculté de médecine, Département de réadaptation, Université Laval, Québec, QC, Canada; Centre de recherche CERVO - Brain Research Centre, Québec, QC, Canada
- R Laforce: Département des sciences neurologiques, Clinique Interdisciplinaire de Mémoire (CIME) du CHU de Québec, Québec, QC, Canada; Faculté de médecine, Département de médecine, Université Laval, Québec, QC, Canada
- M A Wilson: Faculté de médecine, Département de réadaptation, Université Laval, Québec, QC, Canada; Centre de recherche CERVO - Brain Research Centre, Québec, QC, Canada
- M-P Tremblay: Centre de recherche CERVO - Brain Research Centre, Québec, QC, Canada; École de psychologie, Université Laval, Québec, QC, Canada
- C Hudon: Centre de recherche CERVO - Brain Research Centre, Québec, QC, Canada; École de psychologie, Université Laval, Québec, QC, Canada

38
Krestar ML, McLennan CT. Responses to Semantically Neutral Words in Varying Emotional Intonations. J Speech Lang Hear Res 2019; 62:733-744. [PMID: 30950728] [DOI: 10.1044/2018_jslhr-h-17-0428]
Abstract
Purpose Recent research on perception of emotionally charged material has found both an "emotionality effect," in which participants respond differently to emotionally charged stimuli relative to neutral stimuli in some cognitive-linguistic tasks, and a "negativity bias," in which participants respond differently to negatively charged stimuli relative to neutral and positively charged stimuli. The current study investigated young adult listeners' bias when responding to neutral-meaning words in two tasks that varied attention to emotional intonation. Method Half the participants completed a word identification task in which they were instructed to type a word they had heard presented binaurally through Sony stereo MDR-ZX100 headphones. The other half completed an intonation identification task in which they were instructed to use a SuperLab RB-740 button box to identify the emotional prosody of the same words over headphones. For both tasks, all auditory stimuli were semantically neutral words spoken in happy, sad, and neutral emotional intonations. Researchers measured percent correct and reaction time (RT) for each word in both tasks. Results In the word identification task, listeners' RTs to words in a sad intonation were longer than RTs to words in a happy intonation. In the intonation identification task, listeners' RTs to words in a sad intonation were significantly faster than those to words in a neutral intonation. Conclusions Results demonstrate a potential attentional negativity bias for neutral words varying in emotional intonation. Such results support an attention-based theoretical account: in an intonation identification task, an advantage emerged for words in a negative (sad) intonation relative to words in a neutral intonation. Thus, current models of emotional speech should acknowledge the amount of attention to emotional content (i.e., prosody) necessary to complete a cognitive task, as it has the potential to bias processing.
Affiliation(s)
- Maura L Krestar: Department of Clinical Health Sciences, Texas A&M University-Kingsville
- Conor T McLennan: Language Research Laboratory, Department of Psychology, Cleveland State University, OH

39
Briggs RG, Pryor DP, Conner AK, Nix CE, Milton CK, Kuiper JK, Palejwala AH, Sughrue ME. The Artery of Aphasia, A Uniquely Sensitive Posterior Temporal Middle Cerebral Artery Branch that Supplies Language Areas in the Brain: Anatomy and Report of Four Cases. World Neurosurg 2019; 126:e65-e76. [PMID: 30735868] [DOI: 10.1016/j.wneu.2019.01.159]
Abstract
BACKGROUND Arterial disruption during brain surgery can cause devastating injuries to wide expanses of white and gray matter beyond the tumor resection cavity. Such damage may occur as a result of disrupting blood flow through en passage arteries. Identification of these arteries is critical to prevent unforeseen neurologic sequelae during brain tumor resection. In this study, we discuss one such artery, termed the artery of aphasia (AoA), which when disrupted can lead to receptive and expressive language deficits. METHODS We performed a retrospective review of all patients undergoing an awake craniotomy for resection of a glioma by the senior author from 2012 to 2018. Patients were included if they experienced language deficits secondary to postoperative infarction in the left posterior temporal lobe in the distribution of the AoA. The gross anatomy of the AoA was then compared with activation likelihood estimations of the auditory and semantic language networks using coordinate-based meta-analytic techniques. RESULTS We identified 4 patients with left-sided posterior temporal artery infarctions in the distribution of the AoA on diffusion-weighted magnetic resonance imaging. All 4 patients developed substantial expressive and receptive language deficits after surgery. Functional language improvement occurred in only 2/4 patients. Activation likelihood estimations localized parts of the auditory and semantic language networks in the distribution of the AoA. CONCLUSIONS The AoA is prone to blood flow disruption despite benign manipulation. Patients seem to have limited capacity for speech recovery after intraoperative ischemia in the distribution of this artery, which supplies parts of the auditory and semantic language networks.
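The activation likelihood estimation used above to localize the language networks models each reported activation focus as a spatial Gaussian and combines foci voxelwise into a union probability map. The following toy 1-D sketch illustrates that idea only; the coordinates, grid, and sigma are hypothetical, not the study's data or pipeline:

```python
import numpy as np

def modeled_activation(foci_mm, grid, sigma=5.0):
    """Per-study modeled activation map: at each 'voxel', the probability
    that at least one of the study's foci (each a Gaussian blob) lies there."""
    p = np.stack([np.exp(-(grid - f) ** 2 / (2 * sigma ** 2)) for f in foci_mm])
    return 1.0 - np.prod(1.0 - p, axis=0)   # union over foci

grid = np.arange(-60.0, 61.0, 2.0)          # toy 1-D voxel grid (mm)
studies = [[-52, -48], [-50], [-46, 30]]    # hypothetical reported foci
maps = [modeled_activation(f, grid) for f in studies]
ale = 1.0 - np.prod([1.0 - m for m in maps], axis=0)   # union across studies
peak = grid[np.argmax(ale)]                 # lands inside the left cluster of foci
```

Foci that cluster across studies reinforce each other under the union rule, which is why the ALE peak falls where several studies report nearby coordinates.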
Affiliation(s)
- Robert G Briggs
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA
- Dillon P Pryor
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA
- Andrew K Conner
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA
- Cameron E Nix
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA
- Camille K Milton
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA
- Joseph K Kuiper
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA
- Ali H Palejwala
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA
- Michael E Sughrue
- Department of Neurosurgery, Prince of Wales Private Hospital, Sydney, Australia
40
Liu P, Cole PM, Gilmore RO, Pérez-Edgar KE, Vigeant MC, Moriarty P, Scherf KS. Young children's neural processing of their mother's voice: An fMRI study. Neuropsychologia 2019; 122:11-19. [PMID: 30528586 PMCID: PMC6334756 DOI: 10.1016/j.neuropsychologia.2018.12.003] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2018] [Revised: 11/13/2018] [Accepted: 12/03/2018] [Indexed: 12/20/2022]
Abstract
In addition to semantic content, human speech carries paralinguistic information that conveys important social cues such as a speaker's identity. For young children, their own mother's voice is one of the most salient vocal inputs in their daily environment. Indeed, qualities of mothers' voices are shown to contribute to children's social development. Our knowledge of how the mother's voice is processed at the neural level, however, is limited. This study investigated whether, in young children, the voice of one's own mother modulates activation in the network of voice-sensitive regions differently than the voice of an unfamiliar mother. We collected fMRI data from 32 typically developing 7- and 8-year-olds as they listened to natural speech produced by their mother and by another child's mother. We used emotionally varied natural speech stimuli to approximate the range of children's day-to-day experience. We individually defined functional ROIs in children's voice-sensitive neural network and then independently investigated the extent to which activation in these regions is modulated by speaker identity. The bilateral posterior auditory cortex, superior temporal gyrus (STG), and inferior frontal gyrus (IFG) exhibited enhanced activation in response to the voice of one's own mother versus that of an unfamiliar mother. The findings indicate that children process their own mother's voice uniquely, and they pave the way for future studies of how social information processing contributes to the trajectory of child social development.
Affiliation(s)
- Pan Liu
- Department of Psychology, Child Study Center, The Pennsylvania State University, University Park, PA, USA
- Pamela M Cole
- Department of Psychology, Child Study Center, The Pennsylvania State University, University Park, PA, USA
- Rick O Gilmore
- Department of Psychology, Child Study Center, The Pennsylvania State University, University Park, PA, USA
- Koraly E Pérez-Edgar
- Department of Psychology, Child Study Center, The Pennsylvania State University, University Park, PA, USA
- Michelle C Vigeant
- Graduate Program in Acoustics, The Pennsylvania State University, University Park, PA, USA
- Peter Moriarty
- Graduate Program in Acoustics, The Pennsylvania State University, University Park, PA, USA
- K Suzanne Scherf
- Department of Psychology, Child Study Center, The Pennsylvania State University, University Park, PA, USA
41
Chen T, Becker B, Camilleri J, Wang L, Yu S, Eickhoff SB, Feng C. A domain-general brain network underlying emotional and cognitive interference processing: evidence from coordinate-based and functional connectivity meta-analyses. Brain Struct Funct 2018; 223:3813-3840. [PMID: 30083997 DOI: 10.1007/s00429-018-1727-9] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2018] [Accepted: 07/31/2018] [Indexed: 02/05/2023]
Abstract
The inability to control or inhibit emotional distractors characterizes a range of psychiatric disorders. Despite the use of a variety of task paradigms to determine the mechanisms underlying the control of emotional interference, a precise characterization of the brain regions and networks that support emotional interference processing remains elusive. Here, we performed coordinate-based and functional connectivity meta-analyses to determine the brain networks underlying emotional interference. Paradigms addressing interference processing in the cognitive or emotional domain were included in the meta-analyses, particularly the Stroop, Flanker, and Simon tasks. Our results revealed a consistent involvement of the bilateral dorsal anterior cingulate cortex, anterior insula, left inferior frontal gyrus, and superior parietal lobule during emotional interference. Follow-up conjunction analyses identified correspondence in these regions between emotional and cognitive interference processing. Finally, the patterns of functional connectivity of these regions were examined using resting-state functional connectivity and meta-analytic connectivity modeling. These regions were strongly connected as a distributed system, primarily mapping onto fronto-parietal control, ventral attention, and dorsal attention networks. Together, the present findings indicate that a domain-general neural system is engaged across multiple types of interference processing and that regulating emotional and cognitive interference depends on interactions between large-scale distributed brain networks.
Affiliation(s)
- Taolin Chen
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Benjamin Becker
- Clinical Hospital of the Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China
- Julia Camilleri
- Institute of Systems Neuroscience, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany; Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7), Research Centre Jülich, Jülich, Germany
- Li Wang
- Collaborative Innovation Center of Assessment Toward Basic Education Quality, Beijing Normal University, Beijing, China
- Shuqi Yu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Simon B Eickhoff
- Institute of Systems Neuroscience, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany; Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7), Research Centre Jülich, Jülich, Germany
- Chunliang Feng
- College of Information Science and Technology, Beijing Normal University, Beijing, China; State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
42
Morningstar M, Nelson EE, Dirks MA. Maturation of vocal emotion recognition: Insights from the developmental and neuroimaging literature. Neurosci Biobehav Rev 2018; 90:221-230. [DOI: 10.1016/j.neubiorev.2018.04.019] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2017] [Revised: 03/16/2018] [Accepted: 04/24/2018] [Indexed: 01/05/2023]
43
Zhang H, Chen X, Chen S, Li Y, Chen C, Long Q, Yuan J. Facial Expression Enhances Emotion Perception Compared to Vocal Prosody: Behavioral and fMRI Studies. Neurosci Bull 2018; 34:801-815. [PMID: 29740753 DOI: 10.1007/s12264-018-0231-9] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2017] [Accepted: 03/13/2018] [Indexed: 02/07/2023] Open
Abstract
Facial and vocal expressions are essential modalities mediating the perception of emotion and social communication. Nonetheless, currently little is known about how emotion perception and its neural substrates differ across facial expression and vocal prosody. To clarify this issue, functional MRI scans were acquired in Study 1, in which participants were asked to discriminate the valence of emotional expression (angry, happy or neutral) from facial, vocal, or bimodal stimuli. In Study 2, we used an affective priming task (unimodal materials as primers and bimodal materials as target) and participants were asked to rate the intensity, valence, and arousal of the targets. Study 1 showed higher accuracy and shorter response latencies in the facial than in the vocal modality for a happy expression. Whole-brain analysis showed enhanced activation during facial compared to vocal emotions in the inferior temporal-occipital regions. Region of interest analysis showed a higher percentage signal change for facial than for vocal anger in the superior temporal sulcus. Study 2 showed that facial relative to vocal priming of anger had a greater influence on perceived emotion for bimodal targets, irrespective of the target valence. These findings suggest that facial expression is associated with enhanced emotion perception compared to equivalent vocal prosodies.
Affiliation(s)
- Heming Zhang
- Key Laboratory of Cognition and Personality of the Ministry of Education, School of Psychology, Southwest University, Chongqing, 400715, China
- Xuhai Chen
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an, 710062, China; Key Laboratory of Modern Teaching Technology of the Ministry of Education, Shaanxi Normal University, Xi'an, 710062, China
- Shengdong Chen
- Key Laboratory of Cognition and Personality of the Ministry of Education, School of Psychology, Southwest University, Chongqing, 400715, China
- Yansong Li
- Department of Psychology, School of Social and Behavioral Sciences, Nanjing University, Nanjing, 210023, China
- Changming Chen
- School of Educational Sciences, Xinyang Normal University, Xinyang, 464000, China
- Quanshan Long
- Key Laboratory of Cognition and Personality of the Ministry of Education, School of Psychology, Southwest University, Chongqing, 400715, China
- Jiajin Yuan
- Key Laboratory of Cognition and Personality of the Ministry of Education, School of Psychology, Southwest University, Chongqing, 400715, China
44
Schepman A, Rodway P, Cornmell L, Smith B, de Sa SL, Borwick C, Belfon-Thompson E. Right-ear precedence and vocal emotion contagion: The role of the left hemisphere. Laterality 2018; 23:290-317. [DOI: 10.1080/1357650x.2017.1360902] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Astrid Schepman
- Department of Psychology, University of Chester, Chester, UK
- Paul Rodway
- Department of Psychology, University of Chester, Chester, UK
- Louise Cornmell
- Department of Psychology, University of Chester, Chester, UK
- Bethany Smith
- Department of Psychology, University of Chester, Chester, UK
- Ciara Borwick
- Department of Psychology, University of Chester, Chester, UK
45
Mańkowska A, Harciarek M, Williamson JB, Heilman KM. The influence of rightward and leftward spatial deviations of spatial attention on emotional picture recognition. J Clin Exp Neuropsychol 2018; 40:951-962. [DOI: 10.1080/13803395.2018.1457138] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
Affiliation(s)
- Aleksandra Mańkowska
- Division of Clinical Psychology and Neuropsychology, Institute of Psychology, University of Gdańsk, Gdańsk, Poland
- Michał Harciarek
- Division of Clinical Psychology and Neuropsychology, Institute of Psychology, University of Gdańsk, Gdańsk, Poland
- John B. Williamson
- Department of Neurology, University of Florida College of Medicine, Gainesville, FL, USA
- Brain Rehabilitation Research Center, North Florida/South Georgia Veterans Affairs Medical Center, Gainesville, FL, USA
- Kenneth M. Heilman
- Department of Neurology, University of Florida College of Medicine, Gainesville, FL, USA
- Brain Rehabilitation Research Center, North Florida/South Georgia Veterans Affairs Medical Center, Gainesville, FL, USA
46
Shigeno S. The Effects of the Literal Meaning of Emotional Phrases on the Identification of Vocal Emotions. JOURNAL OF PSYCHOLINGUISTIC RESEARCH 2018; 47:195-213. [PMID: 29080117 DOI: 10.1007/s10936-017-9526-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
This study investigates the discrepancy between the literal emotional content of speech and emotional tone in the identification of speakers' vocal emotions in both the listeners' native language (Japanese), and in an unfamiliar language (random-spliced Japanese). Both experiments involve a "congruent condition," in which the emotion contained in the literal meaning of speech (words and phrases) was compatible with vocal emotion, and an "incongruent condition," in which these forms of emotional information were discordant. Results for Japanese indicated that performance in identifying emotions did not differ significantly between the congruent and incongruent conditions. However, the results for random-spliced Japanese indicated that vocal emotion was correctly identified more often in the congruent than in the incongruent condition. The different results for Japanese and random-spliced Japanese suggested that the literal meaning of emotional phrases influences the listener's perception of the speaker's emotion, and that Japanese participants could infer speakers' intended emotions in the incongruent condition.
Affiliation(s)
- Sumi Shigeno
- Department of Psychology, College of Education, Psychology and Human Studies, Aoyama Gakuin University, Tokyo, 150-8366, Japan
47
Adamaszek M, D'Agata F, Ferrucci R, Habas C, Keulen S, Kirkby KC, Leggio M, Mariën P, Molinari M, Moulton E, Orsi L, Van Overwalle F, Papadelis C, Priori A, Sacchetti B, Schutter DJ, Styliadis C, Verhoeven J. Consensus Paper: Cerebellum and Emotion. THE CEREBELLUM 2017; 16:552-576. [PMID: 27485952 DOI: 10.1007/s12311-016-0815-8] [Citation(s) in RCA: 340] [Impact Index Per Article: 48.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
Abstract
Over the past three decades, insights into the role of the cerebellum in emotional processing have substantially increased. Indeed, methodological refinements in cerebellar lesion studies and major technological advancements in the field of neuroscience are in particular responsible for the exponential growth of knowledge on the topic. It is timely to review the available data and to critically evaluate the current status of the role of the cerebellum in emotion and related domains. The main aim of this article is to present an overview of current facts and ongoing debates relating to clinical, neuroimaging, and neurophysiological findings on the role of the cerebellum in key aspects of emotion. Experts in the field of cerebellar research discuss the range of cerebellar contributions to emotion in nine topics. Topics include the role of the cerebellum in perception and recognition, forwarding and encoding of emotional information, and the experience and regulation of emotional states in relation to motor, cognitive, and social behaviors. In addition, perspectives including cerebellar involvement in emotional learning, pain, emotional aspects of speech, and neuropsychiatric aspects of the cerebellum in mood disorders are briefly discussed. Results of this consensus paper illustrate how theory and empirical research have converged to produce a composite picture of brain topography, physiology, and function that establishes the role of the cerebellum in many aspects of emotional processing.
Affiliation(s)
- M Adamaszek
- Department of Clinical and Cognitive Neurorehabilitation, Klinik Bavaria Kreischa, An der Wolfsschlucht, 01731, Kreischa, Germany
- F D'Agata
- Department of Neuroscience, University of Turin, Turin, Italy
- R Ferrucci
- Fondazione IRCCS Ca' Granda, Milan, Italy
- Università degli Studi di Milano, Milan, Italy
- C Habas
- Service de NeuroImagerie (NeuroImaging Department), Centre Hospitalier national d'Ophtalmologie des 15/20, Paris, France
- S Keulen
- Department of Clinical and Experimental Neurolinguistics, CLIEN, Vrije Universiteit Brussel, Brussels, Belgium
- Center for Language and Cognition Groningen, Rijksuniversiteit Groningen, Groningen, The Netherlands
- K C Kirkby
- Psychiatry, School of Medicine, University of Tasmania, Hobart, Australia
- M Leggio
- I.R.C.C.S. Santa Lucia Foundation, Rome, Italy
- Department of Psychology, Sapienza University of Rome, Rome, Italy
- P Mariën
- Department of Clinical and Experimental Neurolinguistics, CLIEN, Vrije Universiteit Brussel, Brussels, Belgium
- Department of Neurology and Memory Clinic, ZNA Middelheim Hospital, Antwerp, Belgium
- M Molinari
- I.R.C.C.S. Santa Lucia Foundation, Rome, Italy
- E Moulton
- P.A.I.N. Group, Center for Pain and the Brain, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- L Orsi
- Neurologic Division 1, Department of Neuroscience and Mental Health, Città della Salute e della Scienza di Torino, Turin, Italy
- F Van Overwalle
- Faculty of Psychology and Educational Sciences, Vrije Universiteit Brussel, Brussels, Belgium
- C Papadelis
- Fetal-Neonatal Neuroimaging and Developmental Center, Boston Children's Hospital, Boston, MA, USA
- Division of Newborn Medicine, Department of Medicine, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- A Priori
- Fondazione IRCCS Ca' Granda, Milan, Italy
- Università degli Studi di Milano, Milan, Italy
- III Clinica Neurologica, Polo Ospedaliero San Paolo, Milan, Italy
- B Sacchetti
- Department of Neuroscience, Section of Physiology, University of Turin, Turin, Italy
- D J Schutter
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- C Styliadis
- Medical School, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
- J Verhoeven
- Department of Language and Communication Science, City University, London, UK
- Computational Linguistics and Psycholinguistics Research Center (CLIPS), Universiteit Antwerpen, Antwerp, Belgium
48
Dasdemir Y, Yildirim E, Yildirim S. Analysis of functional brain connections for positive-negative emotions using phase locking value. Cogn Neurodyn 2017; 11:487-500. [PMID: 29147142 DOI: 10.1007/s11571-017-9447-z] [Citation(s) in RCA: 45] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2016] [Revised: 05/24/2017] [Accepted: 07/06/2017] [Indexed: 01/01/2023] Open
Abstract
In this study, we investigate the brain networks underlying positive and negative emotions for different types of stimulus (audio only, video only, and audio + video) in the [Formula: see text], and [Formula: see text] bands in terms of the phase locking value, a nonlinear measure of functional connectivity. Results show notable hemispheric lateralization, as phase synchronization values between channels are significant and high in the right hemisphere for all emotions. Left frontal electrodes are also found to have control over emotion in terms of functional connectivity. In addition, significant inter-hemisphere phase locking values are observed between left and right frontal regions, specifically between left anterior frontal and right mid-frontal, inferior-frontal, and anterior frontal regions, and also between left and right mid-frontal regions. ANOVA analyses of stimulus types show that stimulus types are not separable for emotions having high valence. PLV values differ significantly only for negative or neutral emotions between audio only/video only and audio only/audio + video stimuli. Finding no significant difference between video only and audio + video stimuli is interesting and might be interpreted as indicating that video content is the most effective part of a stimulus.
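The phase locking value itself is a simple quantity: the magnitude of the time-averaged unit vector of the phase difference between two channels. A minimal generic sketch via the analytic signal (an illustrative implementation, not the authors' code):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV = |mean(exp(i*(phi_x - phi_y)))| over time.
    Near 1 when the phase difference stays constant; near 0 when unrelated."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

# Two sinusoids with a fixed phase lag are fully phase-locked;
# two independent noise signals are not.
t = np.linspace(0, 1, 1000, endpoint=False)
locked = phase_locking_value(np.sin(2 * np.pi * 10 * t),
                             np.sin(2 * np.pi * 10 * t + 0.7))
rng = np.random.default_rng(0)
unlocked = phase_locking_value(rng.standard_normal(1000),
                               rng.standard_normal(1000))
```

In connectivity studies the same computation is applied per frequency band after band-pass filtering, yielding one PLV per channel pair per band.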
Affiliation(s)
- Yasar Dasdemir
- Computer Engineering Department, Iskenderun Technical University, Hatay, Turkey
- Esen Yildirim
- Electrical and Electronic Engineering Department, Adana Science and Technology University, Adana, Turkey
- Serdar Yildirim
- Computer Engineering Department, Adana Science and Technology University, Adana, Turkey
49
Nishida M, Korzeniewska A, Crone NE, Toyoda G, Nakai Y, Ofen N, Brown EC, Asano E. Brain network dynamics in the human articulatory loop. Clin Neurophysiol 2017. [PMID: 28622530 DOI: 10.1016/j.clinph.2017.05.002] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
OBJECTIVE The articulatory loop is a fundamental component of language function, involved in the short-term buffering of auditory information followed by its vocal reproduction. We characterized the network dynamics of the human articulatory loop, using invasive recording and stimulation. METHODS We measured high-gamma activity (70-110 Hz) recorded intracranially while patients with epilepsy either only listened to, or listened to and then reproduced, two successive tones by humming. We also conducted network analyses, and analyzed behavioral responses to cortical stimulation. RESULTS Presentation of the initial tone elicited high-gamma augmentation bilaterally in the superior-temporal gyrus (STG) within 40 ms, and in the precentral and inferior-frontal gyri (PCG and IFG) within 160 ms after sound onset. During presentation of the second tone, high-gamma augmentation was reduced in STG but enhanced in IFG. The task requiring tone reproduction further enhanced high-gamma augmentation in PCG during and after sound presentation. Event-related causality (ERC) analysis revealed dominant flows within STG immediately after sound onset, followed by reciprocal interactions involving PCG and IFG. Measurement of cortico-cortical evoked potentials (CCEPs) confirmed connectivity between distant high-gamma sites in the articulatory loop. High-frequency stimulation of precentral high-gamma sites in either hemisphere induced speech arrest, inability to control vocalization, or forced vocalization. Vocalization of tones was accompanied by high-gamma augmentation over larger extents of PCG. CONCLUSIONS Bilateral PCG rapidly and directly receives feed-forward signals from STG, and may promptly initiate motor planning including sub-vocal rehearsal for short-term buffering of auditory stimuli. Enhanced high-gamma augmentation in IFG during presentation of the second tone may reflect high-order processing of the tone sequence.
SIGNIFICANCE The articulatory loop employs sustained reciprocal propagation of neural activity across a network of cortical sites with strong neurophysiological connectivity.
Affiliation(s)
- Masaaki Nishida
- Department of Pediatrics, Children's Hospital of Michigan, Wayne State University, Detroit Medical Center, Detroit, MI 48201, USA; Department of Anesthesiology, Hanyu General Hospital, Hanyu City, Saitama 348-8508, Japan
- Anna Korzeniewska
- Department of Neurology, Johns Hopkins University, Baltimore, MD 21287, USA
- Nathan E Crone
- Department of Neurology, Johns Hopkins University, Baltimore, MD 21287, USA
- Goichiro Toyoda
- Department of Pediatrics, Children's Hospital of Michigan, Wayne State University, Detroit Medical Center, Detroit, MI 48201, USA
- Yasuo Nakai
- Department of Pediatrics, Children's Hospital of Michigan, Wayne State University, Detroit Medical Center, Detroit, MI 48201, USA
- Noa Ofen
- Institute of Gerontology, Wayne State University, Detroit, MI 48202, USA; Department of Psychology, Wayne State University, Detroit, MI 48202, USA
- Erik C Brown
- Department of Neurosurgery, Oregon Health and Science University, Portland, OR, USA
- Eishi Asano
- Department of Pediatrics, Children's Hospital of Michigan, Wayne State University, Detroit Medical Center, Detroit, MI 48201, USA; Department of Neurology, Children's Hospital of Michigan, Wayne State University, Detroit Medical Center, Detroit, MI 48201, USA
50
A graded tractographic parcellation of the temporal lobe. Neuroimage 2017; 155:503-512. [PMID: 28411156 PMCID: PMC5518769 DOI: 10.1016/j.neuroimage.2017.04.016] [Citation(s) in RCA: 38] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2017] [Revised: 04/06/2017] [Accepted: 04/06/2017] [Indexed: 02/06/2023] Open
Abstract
The temporal lobe has been implicated in multiple cognitive domains through lesion studies as well as cognitive neuroimaging research. There has recently been increased interest in the structural and connective architecture that underlies these functions. However, there has not yet been a comprehensive exploration of the patterns of connectivity that appear across the temporal lobe. This article uses a data-driven, spectral reordering approach in order to understand the general axes of structural connectivity within the temporal lobe. Two important findings emerge from the study. Firstly, the temporal lobe's overarching patterns of connectivity are organised along two key structural axes: medial to lateral and anteroventral to posterodorsal, mirroring findings in the functional literature. Secondly, the connective organisation of the temporal lobe is graded and transitional; this is reminiscent of the original work of 19th-century neuroanatomists, who posited the existence of some regions which transitioned between one another in a graded fashion. While regions with unique connectivity exist, the boundaries between these are not always sharp. Instead there are zones of graded connectivity reflecting the influence and overlap of shared connectivity.
Highlights:
- A graded parcellation identified changes in connectivity across the temporal lobe
- Connective organisation of the temporal lobe was graded and transitional
- Two axes of organisation were found: medial-lateral and anteroventral-posterodorsal
- While regions of distinct connectivity exist, their boundaries are not always sharp
- Zones of graded connectivity exist, reflecting the influence of shared connectivity
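Spectral reordering of a connectivity matrix typically means sorting rows by the Fiedler vector of the graph Laplacian, so that vertices with similar connectivity profiles end up adjacent and graded transitions become visible along the ordering. A toy sketch under that assumption (synthetic similarity matrix with a hidden 1-D gradient, not the article's diffusion data):

```python
import numpy as np

def spectral_reorder(similarity):
    """Order items by the Fiedler vector (eigenvector of the second-smallest
    eigenvalue of the unnormalized graph Laplacian), which places rows with
    similar connectivity profiles next to each other."""
    W = (similarity + similarity.T) / 2.0   # ensure symmetry
    L = np.diag(W.sum(axis=1)) - W          # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)             # eigenvalues in ascending order
    return np.argsort(vecs[:, 1])           # sort by the Fiedler vector

# Toy data: similarity decays smoothly with distance along a hidden gradient,
# but the rows are presented in shuffled order.
rng = np.random.default_rng(1)
pos = rng.permutation(20)                   # hidden positions, shuffled
W = np.exp(-np.abs(pos[:, None] - pos[None, :]) / 4.0)
order = spectral_reorder(W)
recovered = pos[order]                      # monotone up or down the gradient
```

For similarity matrices with this smooth, distance-decaying structure, sorting by the Fiedler vector recovers the underlying gradient up to an arbitrary direction flip, which is why the approach surfaces graded, transitional organisation rather than imposing hard cluster boundaries.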