1. Reed CN, Pearce M, McPherson A. Auditory imagery ability influences accuracy when singing with altered auditory feedback. Musicae Scientiae 2024; 28:478-501. PMID: 39219861; PMCID: PMC11357896; DOI: 10.1177/10298649231223077
Abstract
In this preliminary study, we explored the relationship between auditory imagery ability and the maintenance of tonal and temporal accuracy when singing and audiating with altered auditory feedback (AAF). Actively performing participants sang and audiated (sang mentally but not aloud) a self-selected piece in AAF conditions, including upward pitch-shifts and delayed auditory feedback (DAF), and with speech distraction. Participants with higher self-reported scores on the Bucknell Auditory Imagery Scale (BAIS) produced a tonal reference that was less disrupted by pitch shifts and speech distraction than participants with lower scores. However, there was no observed effect of BAIS score on temporal deviation when singing with DAF. Auditory imagery ability was not related to the experience of having studied music theory formally, but was significantly related to performing experience. The significant effect of auditory imagery ability on tonal reference deviation remained even after partialling out the effect of performing experience. The results indicate that auditory imagery ability plays a key role in maintaining an internal tonal center during singing but has at most a weak effect on temporal consistency. In this article, we outline future directions in understanding the multifaceted role of auditory imagery ability in singers' accuracy and expression.
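The partialling-out step described in this abstract is a partial correlation. A minimal sketch follows, using simulated data; the variable names, effect sizes, and sample size are illustrative assumptions, not the study's materials:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated data (illustrative only): BAIS imagery scores, years of
# performing experience, and tonal reference deviation.
n = 40
perf_years = rng.uniform(1, 20, n)
bais = 0.15 * perf_years + rng.normal(4.0, 1.0, n)        # imagery relates to experience
tonal_dev = 60.0 - 8.0 * bais + rng.normal(0.0, 5.0, n)   # more imagery, less deviation

def partial_corr(x, y, z):
    """Correlation of x and y after regressing z out of both (partialling out z)."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)  # residuals of x after removing z
    ry = y - np.polyval(np.polyfit(z, y, 1), z)  # residuals of y after removing z
    return stats.pearsonr(rx, ry)

r, p = partial_corr(bais, tonal_dev, perf_years)
print(f"partial r(BAIS, tonal deviation | performing experience) = {r:.2f}")
```

Regressing the shared predictor out of both variables and correlating the residuals is equivalent to the first-order partial correlation.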
Affiliation(s)
- Courtney N. Reed
- Loughborough University London, UK; Queen Mary University of London, UK
- Andrew McPherson
- Imperial College London, UK; Queen Mary University of London, UK
2. Krüger B, Hegele M, Rieger M. The multisensory nature of human action imagery. Psychol Res 2024; 88:1870-1882. PMID: 36441293; PMCID: PMC11315721; DOI: 10.1007/s00426-022-01771-y
Abstract
Imagination can appeal to all our senses and may, therefore, manifest in very different qualities (e.g., visual, tactile, proprioceptive, or kinesthetic). One line of research addresses action imagery, a process by which people imagine the execution of an action without actual body movements. In action imagery, visual and kinesthetic aspects of the imagined action are particularly important, but other sensory modalities may also play a role. This paper addresses three issues: (i) the creation of an action image, (ii) how the brain generates images of movements and actions, and (iii) the richness and vividness of action images. We further address possible factors that determine the sensory impression of an action image, such as task specificity, instruction, and experience. Finally, we outline open questions and future directions.
Affiliation(s)
- Britta Krüger
- Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus Liebig University Giessen, Kugelberg 62, 35394, Giessen, Germany
- Mathias Hegele
- Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus Liebig University Giessen, Kugelberg 62, 35394, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), Philipps University of Marburg and Justus Liebig University, Giessen, Germany
- Martina Rieger
- Institute for Psychology, UMIT Tirol-University for Health Sciences, Medical Informatics and Technology, Hall in Tyrol, Austria
3. Whitton SA, Sreenan B, Jiang F. The contribution of auditory imagery and visual rhythm perception to sensorimotor synchronization with external and imagined rhythm. J Exp Psychol Gen 2024; 153:1861-1872. PMID: 38695803; PMCID: PMC11250674; DOI: 10.1037/xge0001601
Abstract
Sensorimotor synchronization (SMS) refers to the temporal coordination of movement with an external stimulus. Our previous work revealed that while SMS with visual flashing patterns was less consistent than with auditory or tactile patterns, it was still evident in a sample of nonmusicians. Although previous studies have speculated about the potential role of auditory imagery, its contribution to visual SMS performance is not well quantified. Using a synchronization-continuation finger-tapping task with a visual stimulus that included implied motion, we examined how participants' imagery ability, musicality, and rhythm perception affected SMS performance. We quantified participants' SMS consistency in synchronization (with visual cues) and continuation (without visual cues) phases. Participants also performed a perception task assessing their ability to detect temporal perturbations in the visual rhythm and completed musical ability and imagery questionnaires. Our linear regression model for SMS consistency included trial phase, self-reported auditory imagery control and musicality, and visual rhythm perception as predictors. Significant effects of trial phase and auditory imagery scores on SMS consistency indicated that participants performed SMS more consistently while the guiding visual stimulus was present, and that the higher one's self-reported auditory imagery ability, the better one's SMS when continuing with an unguided rhythm. Visual rhythm perception accuracy correlated significantly with SMS consistency during the synchronization phase, and there was no correlation between rhythm perception and auditory imagery control. Overall, our results suggest relatively independent contributions of auditory imagery and visual rhythm perception to SMS with visual rhythm.
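The regression model described above (SMS consistency predicted by trial phase, imagery, musicality, and rhythm perception) can be sketched with ordinary least squares; the data and effect sizes below are simulated assumptions, not the study's dataset:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated dataset (names and effect sizes are illustrative assumptions):
n = 60
phase = rng.integers(0, 2, n).astype(float)   # 0 = synchronization, 1 = continuation
imagery = rng.normal(5.0, 1.0, n)             # self-reported auditory imagery control
musicality = rng.normal(0.0, 1.0, n)
rhythm_perception = rng.normal(0.0, 1.0, n)   # visual rhythm perception accuracy

# Simulated ground truth: consistency drops without the guiding stimulus
# (continuation phase) and rises with imagery ability.
consistency = 0.8 - 0.2 * phase + 0.05 * imagery + rng.normal(0.0, 0.05, n)

# Ordinary least squares via lstsq on a design matrix with an intercept column.
X = np.column_stack([np.ones(n), phase, imagery, musicality, rhythm_perception])
beta, *_ = np.linalg.lstsq(X, consistency, rcond=None)
for name, b in zip(["intercept", "phase", "imagery", "musicality", "rhythm"], beta):
    print(f"{name:>10}: {b:+.3f}")
```

The fitted coefficients recover the simulated effects: a negative phase coefficient and a positive imagery coefficient.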
Affiliation(s)
- Fang Jiang
- Department of Psychology, University of Nevada
4. Della Vedova G, Proverbio AM. Neural signatures of imaginary motivational states: desire for music, movement and social play. Brain Topogr 2024. PMID: 38625520; DOI: 10.1007/s10548-024-01047-1
Abstract
The literature has demonstrated the potential for detecting accurate electrical signals that correspond to the will or intention to move, as well as for decoding the thoughts of individuals who imagine houses, faces, or objects. This investigation examines the presence of precise neural markers of imagined motivational states by combining electrophysiological and neuroimaging methods. Twenty participants were instructed to vividly imagine the desire to move, listen to music, or engage in social activities. Their EEG was recorded from 128 scalp sites and analysed using individual standardized Low-Resolution Brain Electromagnetic Tomographies (LORETAs) in the N400 time window (400-600 ms). The activation of 1056 voxels was examined in relation to the three motivational states. The most active dipoles were grouped into eight regions of interest (ROIs), comprising occipital, temporal, fusiform, premotor, frontal, orbitofrontal/inferior frontal (OBF/IF), parietal, and limbic areas. The statistical analysis revealed that all imagined motivational states engaged the right hemisphere more than the left. Distinct markers were identified for the three states: the right temporal area was most relevant for "Social Play", the OBF/IF cortex for listening to music, and the left premotor cortex for the "Movement" desire. This outcome is encouraging for the potential use of neural indicators in brain-computer interfaces to interpret the thoughts and desires of individuals with locked-in syndrome.
Affiliation(s)
- Giada Della Vedova
- Cognitive Electrophysiology Lab, Dept. of Psychology, University of Milano-Bicocca, Italy
- Alice Mado Proverbio
- Cognitive Electrophysiology Lab, Dept. of Psychology, University of Milano-Bicocca, Italy
- NeuroMI, Milan Center for Neuroscience, Milan, Italy
- Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Milan, 20162, Italy
5. Pounder Z, Eardley AF, Loveday C, Evans S. No clear evidence of a difference between individuals who self-report an absence of auditory imagery and typical imagers on auditory imagery tasks. PLoS One 2024; 19:e0300219. PMID: 38568916; PMCID: PMC10990234; DOI: 10.1371/journal.pone.0300219
Abstract
Aphantasia is characterised by the inability to create mental images in one's mind. Studies investigating impairments in imagery typically focus on the visual domain. However, it is possible to generate many different forms of imagery, including imagined auditory, kinesthetic, tactile, motor, taste, and other experiences. Recent studies show that individuals with aphantasia report a lack of imagery in modalities other than vision, including audition. However, to date, no research has examined whether these reductions in self-reported auditory imagery are associated with decrements in tasks that require auditory imagery. Understanding the extent to which visual and auditory imagery deficits co-occur can help to better characterise the core deficits of aphantasia and provide an alternative perspective on theoretical debates about the extent to which imagery draws on modality-specific or modality-general processes. In the current study, individuals who self-identified as aphantasic and matched control participants with typical imagery performed two tasks: a musical pitch-based imagery task and a voice-based categorisation task. The majority of participants with aphantasia self-reported significant deficits in both auditory and visual imagery. However, we did not find a concomitant decrease in performance on tasks requiring auditory imagery, either in the full sample or when considering only those participants who reported significant deficits in both domains. These findings are discussed in relation to the mechanisms that might obscure observation of imagery deficits in auditory imagery tasks in people who report reduced auditory imagery.
Affiliation(s)
- Zoë Pounder
- Department of Psychology, School of Social Sciences, University of Westminster, London, United Kingdom
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Alison F. Eardley
- Department of Psychology, School of Social Sciences, University of Westminster, London, United Kingdom
- Catherine Loveday
- Department of Psychology, School of Social Sciences, University of Westminster, London, United Kingdom
- Samuel Evans
- Department of Psychology, School of Social Sciences, University of Westminster, London, United Kingdom
- Neuroimaging, King's College London, London, United Kingdom
6. Gu J, Deng K, Luo X, Ma W, Tang X. Investigating the different mechanisms in related neural activities: a focus on auditory perception and imagery. Cereb Cortex 2024; 34:bhae139. PMID: 38629796; DOI: 10.1093/cercor/bhae139
Abstract
Neuroimaging studies have shown that the neural representation of imagery is closely related to that of the corresponding perception modality. However, the clearly different experiences of perception and imagery indicate neural mechanism differences between them that cannot be explained by the simple theory that imagery is a weak form of perception. Given the importance of functional integration of brain regions in neural activities, we conducted correlation analyses of neural activity in brain regions jointly activated by auditory imagery and perception, obtaining brain functional connectivity (FC) networks with a consistent structure. However, the connection values between areas in the superior temporal gyrus and the right precentral cortex were significantly higher in auditory perception than in imagery. In addition, modality decoding based on FC patterns showed that the FC networks of auditory imagery and perception could be reliably distinguished. Subsequently, voxel-level FC analysis further identified the regions containing voxels with significant connectivity differences between the two modalities. This study clarifies the commonalities and differences between auditory imagery and perception in terms of brain information interaction, providing a new perspective for investigating the neural mechanisms of different modal information representations.
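Functional connectivity of the kind analysed here is commonly computed as pairwise correlations between regional time series. A minimal sketch on simulated signals follows; the ROI names, noise levels, and the network-comparison step are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated BOLD-like time series for a few ROIs (names illustrative).
rois = ["STG_left", "STG_right", "precentral_right", "SMA"]
t = 200                                           # time points
shared = rng.normal(size=t)                       # common driving signal
series = np.stack([0.7 * shared + 0.3 * rng.normal(size=t) for _ in rois])

# Functional connectivity: pairwise Pearson correlations between ROI series.
fc_perception = np.corrcoef(series)

# A second condition (e.g., imagery) modeled as the same series plus noise;
# compare the two FC networks by correlating their off-diagonal values.
series_imagery = series + 0.2 * rng.normal(size=series.shape)
fc_imagery = np.corrcoef(series_imagery)
iu = np.triu_indices(len(rois), k=1)
similarity = np.corrcoef(fc_perception[iu], fc_imagery[iu])[0, 1]
print(f"FC pattern similarity between conditions: {similarity:.2f}")
```

The correlation matrix is symmetric with a unit diagonal, so only the upper triangle carries the connection values used for comparison.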
Affiliation(s)
- Jin Gu
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, No. 999, Xi'an Road, Pidu District, Chengdu, China
- Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, No. 999, Xi'an Road, Pidu District, Chengdu, China
- Kexin Deng
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, No. 999, Xi'an Road, Pidu District, Chengdu, China
- Xiaoqi Luo
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, No. 999, Xi'an Road, Pidu District, Chengdu, China
- Wanli Ma
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, No. 999, Xi'an Road, Pidu District, Chengdu, China
- Xuegang Tang
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, No. 999, Xi'an Road, Pidu District, Chengdu, China
7. Zhou L, Ma Y, Chen H, Han P. Sex-specific association between regional gray matter volume and spicy food craving or consumption. Appetite 2023; 190:107038. PMID: 37690620; DOI: 10.1016/j.appet.2023.107038
Abstract
Both food cravings and long-term food consumption have been associated with brain changes. Sex differences in food craving are robust and substantial. The current study examined the potential sex-specific neuroanatomical correlates of spicy food craving and habitual spicy food consumption. One hundred and forty-nine participants completed the Spicy Food Consumption Questionnaire and the Spicy Food Craving Questionnaire while their structural brain images were acquired using a 3-T scanner. Multiple regression analysis was used to examine regional gray matter volume (GMV) in relation to questionnaire scores. GMV of the right supplementary motor area (SMA) and the dorsal superior frontal gyrus was significantly correlated with spicy food craving in women, whereas spicy food craving was associated with greater GMV of the inferior temporal gyrus and the occipital gyrus in men. In addition, habitual spicy food consumption was correlated with increased GMV of the bilateral putamen, left postcentral gyrus, and right paracentral lobule, an association that was more pronounced among female participants. These findings suggest distinct neuroanatomical correlates of trait craving and of habitual exposure to spicy flavors. The sex-specific correlation between spicy food craving and brain anatomical features may be related to food-related sensory imagery or cognitive control.
Affiliation(s)
- Luyi Zhou
- Faculty of Psychology, Southwest University, Chongqing, China
- Yihang Ma
- Faculty of Psychology, Southwest University, Chongqing, China
- Hong Chen
- Faculty of Psychology, Southwest University, Chongqing, China; MOE Key Laboratory of Cognition and Personality, Southwest University, Chongqing, China
- Pengfei Han
- Faculty of Psychology, Southwest University, Chongqing, China; MOE Key Laboratory of Cognition and Personality, Southwest University, Chongqing, China
8. Lu L, Han M, Zou G, Zheng L, Gao JH. Common and distinct neural representations of imagined and perceived speech. Cereb Cortex 2022; 33:6486-6493. PMID: 36587299; DOI: 10.1093/cercor/bhac519
Abstract
Humans excel at constructing mental representations of speech streams in the absence of external auditory input: the internal experience of speech imagery. Elucidating the neural processes underlying speech imagery is critical to understanding this higher-order brain function in humans. Here, using functional magnetic resonance imaging, we investigated the shared and distinct neural correlates of imagined and perceived speech by asking participants to listen to poems articulated by a male voice (perception condition) and to imagine hearing poems spoken by that same voice (imagery condition). We found that compared to baseline, speech imagery and perception activated overlapping brain regions, including the bilateral superior temporal gyri and supplementary motor areas. The left inferior frontal gyrus was more strongly activated by speech imagery than by speech perception, suggesting functional specialization for generating speech imagery. Although more research with a larger sample size and a direct behavioral indicator is needed to clarify the neural systems underlying the construction of complex speech imagery, this study provides valuable insights into the neural mechanisms of the closely associated but functionally distinct processes of speech imagery and perception.
Affiliation(s)
- Lingxi Lu
- Center for the Cognitive Science of Language, Beijing Language and Culture University, Beijing 100083, China
- Meizhen Han
- National Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China
- Guangyuan Zou
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- Li Zheng
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- Jia-Hong Gao
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China
- National Biomedical Imaging Center, Peking University, Beijing 100871, China
9. Lambert AJ, Sibley CG. On the importance of consistent terminology for describing sensory imagery and its absence: A response to Monzel et al. (2022). Cortex 2022; 152:153-156. DOI: 10.1016/j.cortex.2022.03.012
10. Simner J, Koursarou S, Rinaldi LJ, Ward J. Attention, flexibility, and imagery in misophonia: Does attention exacerbate everyday disliking of sound? J Clin Exp Neuropsychol 2022; 43:1006-1017. PMID: 35331082; DOI: 10.1080/13803395.2022.2056581
Abstract
INTRODUCTION: Misophonia is an unusually strong aversion to everyday sounds, such as chewing, crunching, or breathing. Here, we ask whether misophonia might be tied to an unusual profile of attention (and related traits), which serves to substantially heighten an otherwise everyday disliking of sounds.
METHODS: In Study 1, we tested 136 misophonics and 203 non-misophonics on self-report measures of attention to detail, cognitive inflexibility, and auditory imagery, as well as collecting details about their misophonia. In Study 2, we administered the Embedded Figures task to 20 misophonics and 36 non-misophonics.
RESULTS: We first showed that the degree to which sounds trigger misophonia reflects the pattern by which they are (more mildly) disliked by everyone. This suggests that misophonia is scaffolded onto existing mechanisms rather than qualitatively different ones. Compared to non-misophonics, we also found that misophonics self-reported greater attention to detail, cognitive inflexibility, and auditory imagery. As their symptoms worsen, they also become more accurate in an attentional task (Embedded Figures).
CONCLUSIONS: Our findings provide a better understanding of misophonia and support the hypothesis that dispositional traits of attention to detail may be key to elevating everyday disliking of sound into the more troubling aversions of misophonia.
Affiliation(s)
- J Simner
- School of Psychology, University of Sussex, England
- S Koursarou
- School of Psychology, University of Sussex, England
- L J Rinaldi
- School of Psychology, University of Sussex, England
- J Ward
- School of Psychology, University of Sussex, England
11. Hinwar RP, Lambert AJ. Anauralia: The Silent Mind and Its Association With Aphantasia. Front Psychol 2021; 12:744213. PMID: 34721222; PMCID: PMC8551557; DOI: 10.3389/fpsyg.2021.744213
Abstract
Auditory and visual imagery were studied in a sample of 128 participants, including 34 self-reported aphantasics. Auditory imagery (Bucknell Auditory Imagery Scale-Vividness, BAIS-V) and visual imagery (Vividness of Visual Imagery Questionnaire-Modified, VVIQ-M) were strongly associated (Spearman's rho = 0.83). Most self-reported aphantasics also reported weak or entirely absent auditory imagery, and participants lacking auditory imagery tended to be aphantasic. Similarly, vivid visual imagery tended to co-occur with vivid auditory imagery. Nevertheless, the aphantasic group included one individual with typical auditory imagery, and the group lacking auditory imagery (N = 29) included one individual with typical visual imagery. Hence, weak visual and auditory imagery can dissociate, albeit with low apparent incidence. Auditory representations and auditory imagery are thought to play a key role in a wide range of psychological domains, including working memory and memory rehearsal, prospective cognition, thinking, reading, planning, problem-solving, self-regulation, and music. Therefore, self-reports describing an absence of auditory imagery raise a host of important questions concerning the role of phenomenal auditory imagery in these domains. Because there is currently no English word denoting an absence of auditory imagery, we propose a new term, anauralia, to refer to it, and offer suggestions for further research.
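The reported association is a rank correlation. A minimal sketch of Spearman's rho on simulated questionnaire scores follows; the sample values and the strength of the simulated association are assumptions, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Simulated questionnaire means on a 1-5 vividness scale (illustrative):
n = 128
vviq = rng.uniform(1.0, 5.0, n)                            # visual imagery vividness
bais = np.clip(vviq + rng.normal(0.0, 0.5, n), 1.0, 5.0)   # auditory imagery vividness

# Spearman's rho is the Pearson correlation of the rank-transformed scores.
rho, p = stats.spearmanr(vviq, bais)
print(f"Spearman's rho = {rho:.2f}")
```

Because it operates on ranks, Spearman's rho is appropriate for ordinal questionnaire data and is robust to monotonic but nonlinear relationships.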
Affiliation(s)
- Rish P Hinwar
- School of Psychology and Centre for Brain Research, University of Auckland, Auckland, New Zealand
- Anthony J Lambert
- School of Psychology and Centre for Brain Research, University of Auckland, Auckland, New Zealand
12. Marion G, Di Liberto GM, Shamma SA. The Music of Silence: Part I: Responses to Musical Imagery Encode Melodic Expectations and Acoustics. J Neurosci 2021; 41:7435-7448. PMID: 34341155; PMCID: PMC8412990; DOI: 10.1523/jneurosci.0183-21.2021
Abstract
Musical imagery is the voluntary internal hearing of music in the mind without the need for physical action or external stimulation. Numerous studies have already revealed brain areas activated during imagery. However, it remains unclear to what extent imagined music responses preserve the detailed temporal dynamics of the acoustic stimulus envelope and, crucially, whether melodic expectations play any role in modulating responses to imagined music, as they prominently do during listening. These modulations are important as they reflect aspects of the human musical experience, such as its acquisition, engagement, and enjoyment. This study explored the nature of these modulations in imagined music based on EEG recordings from 21 professional musicians (6 females and 15 males). Regression analyses were conducted to demonstrate that imagined neural signals can be predicted accurately, similarly to the listening task, and were sufficiently robust to allow for accurate identification of the imagined musical piece from the EEG. In doing so, our results indicate that imagery and listening tasks elicited an overlapping but distinctive topography of neural responses to sound acoustics, which is in line with previous fMRI literature. Melodic expectation, however, evoked very similar frontal spatial activation in both conditions, suggesting that they are supported by the same underlying mechanisms. Finally, neural responses induced by imagery exhibited a specific transformation from the listening condition, which primarily included a relative delay and a polarity inversion of the response. This transformation demonstrates the top-down predictive nature of the expectation mechanisms arising during both listening and imagery.
SIGNIFICANCE STATEMENT: It is well known that the human brain is activated during musical imagery: the act of voluntarily hearing music in our mind without external stimulation. It is unclear, however, what the temporal dynamics of this activation are, as well as what musical features are precisely encoded in the neural signals. This study uses an experimental paradigm with high temporal precision to record and analyze the cortical activity during musical imagery. This study reveals that neural signals encode music acoustics and melodic expectations during both listening and imagery. Crucially, it is also found that a simple mapping based on a time-shift and a polarity inversion could robustly describe the relationship between listening and imagery signals.
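The listening-to-imagery mapping described here (a relative delay plus a polarity inversion) can be illustrated on a toy signal; the sampling rate, waveform, delay, and noise level below are assumptions for demonstration, not the study's EEG data or analysis code:

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 64                                   # sampling rate, Hz (illustrative)
t = np.arange(0, 2, 1 / fs)

# Toy "listening" response; "imagery" modeled as a delayed, polarity-inverted
# copy plus noise, following the mapping described in the abstract.
listening = np.sin(2 * np.pi * 2 * t) * np.exp(-t)
delay = int(0.1 * fs)                     # a 100 ms relative delay (illustrative)
imagery = -np.roll(listening, delay) + 0.05 * rng.normal(size=t.size)

# Recover the mapping: invert the imagery signal, then find the lag that best
# aligns it with the listening response (circular shifts for simplicity).
lags = np.arange(-fs // 2, fs // 2)
xcorr = [np.corrcoef(listening, np.roll(-imagery, -lag))[0, 1] for lag in lags]
best_lag = int(lags[int(np.argmax(xcorr))])
print(f"estimated delay: {best_lag / fs * 1000:.0f} ms")
```

Undoing the polarity inversion before cross-correlating lets the peak lag recover the simulated delay.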
Affiliation(s)
- Guilhem Marion
- Laboratoire des Systèmes Perceptifs, Département d'Étude Cognitive, École Normale Supérieure, PSL, 75005, Paris, France
- Giovanni M Di Liberto
- Laboratoire des Systèmes Perceptifs, Département d'Étude Cognitive, École Normale Supérieure, PSL, 75005, Paris, France
- Trinity Centre for Biomedical Engineering, Trinity College Institute of Neuroscience, Department of Mechanical, Manufacturing and Biomedical Engineering, Trinity College, University of Dublin, D02 PN40, Dublin 2, Ireland
- School of Electrical and Electronic Engineering and UCD Centre for Biomedical Engineering, University College Dublin, D04 V1W8, Dublin 4, Ireland
- Shihab A Shamma
- Laboratoire des Systèmes Perceptifs, Département d'Étude Cognitive, École Normale Supérieure, PSL, 75005, Paris, France
- Institute for Systems Research, Electrical and Computer Engineering, University of Maryland, College Park, MD 20742
13. Dance CJ, Ward J, Simner J. What is the Link Between Mental Imagery and Sensory Sensitivity? Insights from Aphantasia. Perception 2021; 50:757-782. PMID: 34463590; PMCID: PMC8438787; DOI: 10.1177/03010066211042186
Abstract
People with aphantasia have impoverished visual imagery and so struggle to form mental pictures in the mind's eye. By testing people with and without aphantasia, we investigate the relationship between sensory imagery and sensory sensitivity (i.e., hyper- or hypo-reactivity to incoming signals through the sense organs). In Experiment 1 we first show that people with aphantasia report impaired imagery across multiple domains (e.g., olfactory, gustatory), rather than vision alone. Importantly, we also show that imagery is related to sensory sensitivity: aphantasics reported not only lower imagery but also lower sensory sensitivity. In Experiment 2, we showed a similar relationship between imagery and sensitivity in the general population. Finally, in Experiment 3 we found behavioural corroboration in a Pattern Glare Task, in which aphantasics experienced less visual discomfort and fewer of the visual distortions typically associated with sensory sensitivity. Our results suggest for the first time that sensory imagery and sensory sensitivity are related, and that aphantasics are characterised by both lower imagery and lower sensitivity. They also suggest that aphantasia (the absence of visual imagery) may be more accurately defined as a subtype of a broader imagery deficit we name dysikonesia, in which weak or absent imagery occurs across multiple senses.
Affiliation(s)
- C. J. Dance
- School of Psychology, University of Sussex, Brighton, UK
14. Individual differences in mental imagery in different modalities and levels of intentionality. Mem Cognit 2021; 50:29-44. PMID: 34462893; PMCID: PMC8763825; DOI: 10.3758/s13421-021-01209-7
Abstract
Mental imagery is a highly common component of everyday cognitive functioning. While substantial progress is being made in clarifying this fundamental human function, much is still unclear or unknown. A more comprehensive account of mental imagery aspects would be gained by examining individual differences in age, sex, and background experience in an activity and their association with imagery in different modalities and intentionality levels. The current online study combined multiple imagery self-report measures in a sample (n = 279) with a substantial age range (18-65 years), aiming to identify whether age, sex, or background experience in sports, music, or video games were associated with aspects of imagery in the visual, auditory, or motor stimulus modality and voluntary or involuntary intentionality level. The findings show weak positive associations between age and increased vividness of voluntary auditory imagery and decreased involuntary musical imagery frequency, weak associations between being female and more vivid visual imagery, and relations of greater music and video game experience with higher involuntary musical imagery frequency. Moreover, all imagery stimulus modalities were associated with each other, for both intentionality levels, except involuntary musical imagery frequency, which was only related to higher voluntary auditory imagery vividness. These results replicate previous research but also contribute new insights, showing that individual differences in age, sex, and background experience are associated with various aspects of imagery such as modality, intentionality, vividness, and frequency. The study's findings can inform the growing domain of applications of mental imagery to clinical and pedagogical settings.
Collapse
|
15
|
Teshima K, Ishida K, Nittono H. Auditory perceptual processing during musical imagery: An event-related potential study. Neurosci Lett 2021; 762:136148. [PMID: 34339803 DOI: 10.1016/j.neulet.2021.136148] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2021] [Revised: 07/25/2021] [Accepted: 07/28/2021] [Indexed: 10/20/2022]
Abstract
The perceptual processing of a sound is facilitated when the sound matches auditory imagery. Previous studies have shown that auditory imagery and actual sound activate the auditory cortex in a similar fashion. To investigate whether auditory imagery is a modality-specific representation or an amodal representation, the current study examined how watching silent music videos affected the auditory processing of sound excerpts. Twenty university students were asked to form musical imagery of Japanese popular songs while watching the official music videos. Event-related brain potentials were recorded in response to short sound excerpts from the on-screen video or from a different video. The results showed that the amplitude of the exogenous N1 component (90-110 ms) was smaller for imagery-matched than for unmatched sound excerpts. The electrical source of the difference was estimated in the auditory cortex. After the N1, the matched excerpts elicited a larger late positive potential (400-800 ms) than the unmatched excerpts. These findings suggest that auditory imagery involves modality-specific neural processing and that imagery-matched sounds are processed efficiently at an early stage, inducing additional cognitive processing at a later stage.
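The N1 comparison described above can be illustrated with a toy computation: averaging a synthetic ERP in the 90–110 ms window for a "matched" versus "unmatched" condition. This is a schematic sketch with simulated data, not the study's EEG pipeline; the waveform shape, amplitudes, and noise level are assumptions.

```python
# Toy sketch: mean amplitude in the N1 window (90-110 ms) for two
# synthetic ERPs. Smaller (less negative) N1 = imagery-matched condition.
import numpy as np

fs = 1000                      # sampling rate in Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)  # epoch: 0-500 ms post-stimulus
rng = np.random.default_rng(3)

def erp(n1_amp):
    # N1 modeled as a negative Gaussian deflection peaking at 100 ms
    wave = -n1_amp * np.exp(-((t - 0.1) ** 2) / (2 * 0.01 ** 2))
    return wave + 0.2 * rng.standard_normal(t.size)

matched, unmatched = erp(2.0), erp(4.0)   # attenuated N1 when imagery matches
win = (t >= 0.09) & (t <= 0.11)           # 90-110 ms analysis window
n1_matched = matched[win].mean()
n1_unmatched = unmatched[win].mean()
```

With these assumed amplitudes, the matched condition yields a smaller mean N1 magnitude, mirroring the direction of the reported effect.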
Collapse
Affiliation(s)
- Konomi Teshima
- Graduate School of Human Sciences, Osaka University, Japan
| | - Kai Ishida
- Graduate School of Human Sciences, Osaka University, Japan
| | | |
Collapse
|
16
|
Pre-SMA activation and the perception of contagiousness and authenticity in laughter sounds. Cortex 2021; 143:57-68. [PMID: 34388558 DOI: 10.1016/j.cortex.2021.06.010] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Revised: 05/12/2021] [Accepted: 06/18/2021] [Indexed: 12/14/2022]
Abstract
Functional near-infrared spectroscopy and behavioural methods were used to examine the neural basis of the behavioural contagion and authenticity of laughter. We demonstrate that the processing of laughter sounds recruits networks previously shown to be related to empathy and auditory-motor mirror networks. Additionally, we found that the differences in the levels of activation in response to volitional and spontaneous laughter could predict an individual's perception of how contagious they found the laughter to be.
Collapse
|
17
|
Lima CF, Arriaga P, Anikin A, Pires AR, Frade S, Neves L, Scott SK. Authentic and posed emotional vocalizations trigger distinct facial responses. Cortex 2021; 141:280-292. [PMID: 34102411 DOI: 10.1016/j.cortex.2021.04.015] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Revised: 04/21/2021] [Accepted: 04/27/2021] [Indexed: 11/28/2022]
Abstract
The ability to recognize the emotions of others is a crucial skill. In the visual modality, sensorimotor mechanisms provide an important route for emotion recognition. Perceiving facial expressions often evokes activity in facial muscles and in motor and somatosensory systems, and this activity relates to performance in emotion tasks. It remains unclear whether and how similar mechanisms extend to audition. Here we examined facial electromyographic and electrodermal responses to nonverbal vocalizations that varied in emotional authenticity. Participants (N = 100) passively listened to laughs and cries that could reflect an authentic or a posed emotion. Bayesian mixed models indicated that listening to laughter evoked stronger facial responses than listening to crying. These responses were sensitive to emotional authenticity. Authentic laughs evoked more activity than posed laughs in the zygomaticus and orbicularis, muscles typically associated with positive affect. We also found that activity in the orbicularis and corrugator related to subjective evaluations in a subsequent authenticity perception task. Stronger responses in the orbicularis predicted higher perceived laughter authenticity. Stronger responses in the corrugator, a muscle associated with negative affect, predicted lower perceived laughter authenticity. Moreover, authentic laughs elicited stronger skin conductance responses than posed laughs. This arousal effect did not predict task performance, however. For crying, physiological responses were not associated with authenticity judgments. Altogether, these findings indicate that emotional authenticity affects peripheral nervous system responses to vocalizations. They also point to a role of sensorimotor mechanisms in the evaluation of authenticity in the auditory modality.
Collapse
Affiliation(s)
- César F Lima
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal; Institute of Cognitive Neuroscience, University College London, London, UK.
| | | | - Andrey Anikin
- Equipe de Neuro-Ethologie Sensorielle (ENES)/Centre de Recherche en Neurosciences de Lyon (CRNL), University of Lyon/Saint-Etienne, CNRS UMR5292, INSERM UMR_S 1028, Saint-Etienne, France; Division of Cognitive Science, Lund University, Lund, Sweden
| | - Ana Rita Pires
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
| | - Sofia Frade
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
| | - Leonor Neves
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
| | - Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, London, UK
| |
Collapse
|
18
|
Fazekas P. Hallucinations as intensified forms of mind-wandering. Philos Trans R Soc Lond B Biol Sci 2020; 376:20190700. [PMID: 33308066 DOI: 10.1098/rstb.2019.0700] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/17/2023] Open
Abstract
This paper argues for a novel way of thinking about hallucinations as intensified forms of mind-wandering. Starting from the observation that hallucinations are associated with hyperactive sensory areas underlying the content of hallucinatory experiences and a confusion with regard to the reality of the source of these experiences, the paper first reviews the different factors that might contribute to the impairment of reality monitoring. The paper then focuses on the sensory characteristics determining the vividness of an experience, reviews their relationship to the sensory hyperactivity observed in hallucinations, and investigates under what circumstances they can drive reality judgements. Finally, based on these considerations, the paper presents its main proposal according to which hallucinations are intensified forms of mind-wandering that are amplified along their sensory characteristics, and sketches a possible model of what factors might determine if an internally and involuntarily generated perceptual representation is experienced as a hallucination or as an instance of mind-wandering. This article is part of the theme issue 'Offline perception: voluntary and spontaneous perceptual experiences without matching external stimulation'.
Collapse
Affiliation(s)
- Peter Fazekas
- Centre for Philosophical Psychology, Universiteit Antwerpen, Antwerpen, Belgium.,Cognitive Neuroscience Research Unit, Centre of Functionally Integrative Neuroscience, Aarhus Universitet, Aarhus, Denmark
| |
Collapse
|
19
|
Finger tapping as a proxy for gait: Similar effects on movement variability during external and self-generated cueing in people with Parkinson's disease and healthy older adults. Ann Phys Rehabil Med 2020; 64:101402. [PMID: 32535169 DOI: 10.1016/j.rehab.2020.05.009] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2020] [Revised: 05/03/2020] [Accepted: 05/04/2020] [Indexed: 11/21/2022]
Abstract
BACKGROUND Rhythmic auditory cueing has been widely studied for gait rehabilitation in Parkinson's disease (PD). Our research group previously showed that externally generated cues (i.e., music) increased gait variability measures from uncued gait, whereas self-generated cues (i.e., mental singing) did not. These different effects may be due to differences in underlying neural mechanisms that could be discerned via neuroimaging; however, movement types that can be studied with neuroimaging are limited. OBJECTIVE The primary aim of the present study was to investigate the effects of different cue types on gait, finger tapping, and foot tapping, to determine whether tapping can be used as a surrogate for gait in future neuroimaging studies. The secondary aim of this study was to investigate whether rhythm skills or auditory imagery abilities are associated with responses to these different cue types. METHODS In this cross-sectional study, controls (n=24) and individuals with PD (n=33) performed gait, finger tapping, and foot tapping at their preferred pace (UNCUED) and to externally generated (MUSIC) and self-generated (MENTAL) cues. Spatiotemporal parameters of gait and temporal parameters of finger tapping and foot tapping were collected. The Beat Alignment Task (BAT) and Bucknell Auditory Imagery Scale (BAIS) were also administered. RESULTS The MUSIC cues elicited higher movement variability than did MENTAL cues across all movements. The MUSIC cues also elicited higher movement variability than the UNCUED condition for gait and finger tapping. CONCLUSIONS This study shows that different cue types affect gait and finger tapping similarly. Finger tapping may be an adequate proxy for gait in studying the underlying neural mechanisms of these cue types.
Collapse
|
20
|
Abstract
Individuals with autism spectrum disorder (ASD) reportedly possess preserved or superior music-processing skills compared to their typically developing counterparts. We examined auditory imagery and earworms (tunes that get "stuck" in the head) in adults with ASD and controls. Both groups completed a short earworm questionnaire together with the Bucknell Auditory Imagery Scale. Results showed poorer auditory imagery in the ASD group for all types of auditory imagery. However, the ASD group did not report fewer earworms than matched controls. These data suggest a possible basis in poor auditory imagery for poor prosody in ASD, but also highlight a separability between auditory imagery and control of musical memories. The separability is present in the ASD group but not in typically developing individuals.
Collapse
Affiliation(s)
- Alex Bacon
- School of Psychology and Clinical Language Sciences, University of Reading, Earley Gate, Whiteknights, Reading, RG6 6AL, UK
| | - C Philip Beaman
- School of Psychology and Clinical Language Sciences, University of Reading, Earley Gate, Whiteknights, Reading, RG6 6AL, UK.
| | - Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Earley Gate, Whiteknights, Reading, RG6 6AL, UK
| |
Collapse
|
21
|
Gelding RW, Thompson WF, Johnson BW. Musical imagery depends upon coordination of auditory and sensorimotor brain activity. Sci Rep 2019; 9:16823. [PMID: 31727968 PMCID: PMC6856354 DOI: 10.1038/s41598-019-53260-9] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2018] [Accepted: 10/28/2019] [Indexed: 11/09/2022] Open
Abstract
Recent magnetoencephalography (MEG) studies have established that sensorimotor brain rhythms are strongly modulated during mental imagery of musical beat and rhythm, suggesting that motor regions of the brain are important for temporal aspects of musical imagery. The present study examined whether these rhythms also play a role in non-temporal aspects of musical imagery including musical pitch. Brain function was measured with MEG from 19 healthy adults while they performed a validated musical pitch imagery task and two non-imagery control tasks with identical temporal characteristics. A 4-dipole source model probed activity in bilateral auditory and sensorimotor cortices. Significantly greater β-band modulation was found during imagery compared to control tasks of auditory perception and mental arithmetic. Imagery-induced β-modulation showed no significant differences between auditory and sensorimotor regions, which may reflect a tightly coordinated mode of communication between these areas. Directed connectivity analysis in the θ-band revealed that the left sensorimotor region drove left auditory region during imagery onset. These results add to the growing evidence that motor regions of the brain are involved in the top-down generation of musical imagery, and that imagery-like processes may be involved in musical perception.
Collapse
Affiliation(s)
- Rebecca W Gelding
- Department of Cognitive Science, Macquarie University, Sydney, NSW, 2109, Australia.
| | - William F Thompson
- Department of Psychology, Macquarie University, Sydney, NSW, 2109, Australia
| | - Blake W Johnson
- Department of Cognitive Science, Macquarie University, Sydney, NSW, 2109, Australia
| |
Collapse
|
22
|
Martin S, Mikutta C, Leonard MK, Hungate D, Koelsch S, Shamma S, Chang EF, Millán JDR, Knight RT, Pasley BN. Neural Encoding of Auditory Features during Music Perception and Imagery. Cereb Cortex 2019; 28:4222-4233. [PMID: 29088345 DOI: 10.1093/cercor/bhx277] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2017] [Indexed: 11/12/2022] Open
Abstract
Despite many behavioral and neuroimaging investigations, it remains unclear how the human cortex represents spectrotemporal sound features during auditory imagery, and how this representation compares to auditory perception. To assess this, we recorded electrocorticographic signals from an epileptic patient with proficient music ability in 2 conditions. First, the participant played 2 piano pieces on an electronic piano with the sound volume of the digital keyboard on. Second, the participant replayed the same piano pieces, but without auditory feedback, and the participant was asked to imagine hearing the music in his mind. In both conditions, the sound output of the keyboard was recorded, thus allowing precise time-locking between the neural activity and the spectrotemporal content of the music imagery. This novel task design provided a unique opportunity to apply receptive field modeling techniques to quantitatively study neural encoding during auditory mental imagery. In both conditions, we built encoding models to predict high gamma neural activity (70-150 Hz) from the spectrogram representation of the recorded sound. We found robust spectrotemporal receptive fields during auditory imagery with substantial, but not complete overlap in frequency tuning and cortical location compared to receptive fields measured during auditory perception.
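The encoding-model idea above (predicting high-gamma activity from a spectrogram via receptive-field modeling) can be sketched as a time-lagged ridge regression on synthetic data. This is an illustrative toy, not the authors' pipeline; the array sizes, lag count, train/test split, and regularization strength are all assumptions.

```python
# Toy STRF-style encoding model: predict a neural time series from
# time-lagged spectrogram features with ridge regression (synthetic data).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_times, n_freqs, n_lags = 2000, 16, 10      # hypothetical sizes

spec = rng.standard_normal((n_times, n_freqs))   # stimulus spectrogram
true_strf = rng.standard_normal((n_lags, n_freqs))  # ground-truth filter

# Lagged design matrix: each row holds the preceding n_lags spectrogram frames.
X = np.zeros((n_times, n_lags * n_freqs))
for lag in range(n_lags):
    X[lag:, lag * n_freqs:(lag + 1) * n_freqs] = spec[:n_times - lag]

# Simulated "high-gamma" response = filtered stimulus + noise.
y = X @ true_strf.ravel() + 0.1 * rng.standard_normal(n_times)

model = Ridge(alpha=1.0).fit(X[:1500], y[:1500])        # train split
r = np.corrcoef(model.predict(X[1500:]), y[1500:])[0, 1]  # held-out accuracy
strf_est = model.coef_.reshape(n_lags, n_freqs)         # recovered STRF
```

In the actual study the same fitting logic would be applied separately to the perception and imagery conditions, allowing the two sets of fitted filters to be compared for overlap in frequency tuning.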
Collapse
Affiliation(s)
- Stephanie Martin
- Defitech Chair in Brain-Machine Interface, Center for Neuroprosthetics, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.,Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
| | - Christian Mikutta
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA.,Translational Research Center and Division of Clinical Research Support, Psychiatric Services University of Bern (UPD), University Hospital of Psychiatry, Bern, Switzerland.,Department of Neurology, Inselspital, Bern, University Hospital, University of Bern, Bern, Switzerland
| | - Matthew K Leonard
- Department of Neurological Surgery, Department of Physiology, and Center for Integrative Neuroscience, University of California, San Francisco, CA, USA
| | - Dylan Hungate
- Department of Neurological Surgery, Department of Physiology, and Center for Integrative Neuroscience, University of California, San Francisco, CA, USA
| | | | - Shihab Shamma
- Département d'études cognitives, École normale supérieure, PSL Research University, Paris, France.,Electrical and Computer Engineering & Institute for Systems Research, Univ. of Maryland in College Park, MD, USA
| | - Edward F Chang
- Department of Neurological Surgery, Department of Physiology, and Center for Integrative Neuroscience, University of California, San Francisco, CA, USA
| | - José Del R Millán
- Defitech Chair in Brain-Machine Interface, Center for Neuroprosthetics, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland

| | - Robert T Knight
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA.,Department of Psychology, University of California, Berkeley, CA, USA
| | - Brian N Pasley
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
| |
Collapse
|
23
|
Correia AI, Branco P, Martins M, Reis AM, Martins N, Castro SL, Lima CF. Resting-state connectivity reveals a role for sensorimotor systems in vocal emotional processing in children. Neuroimage 2019; 201:116052. [DOI: 10.1016/j.neuroimage.2019.116052] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2018] [Revised: 07/19/2019] [Accepted: 07/23/2019] [Indexed: 11/17/2022] Open
|
24
|
Murphy C, Rueschemeyer SA, Smallwood J, Jefferies E. Imagining Sounds and Images: Decoding the Contribution of Unimodal and Transmodal Brain Regions to Semantic Retrieval in the Absence of Meaningful Input. J Cogn Neurosci 2019; 31:1599-1616. [DOI: 10.1162/jocn_a_01330] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
In the absence of sensory information, we can generate meaningful images and sounds from representations in memory. However, it remains unclear which neural systems underpin this process and whether tasks requiring the top–down generation of different kinds of features recruit similar or different neural networks. We asked people to internally generate the visual and auditory features of objects, either in isolation (car, dog) or in specific and complex meaning-based contexts (car/dog race). Using an fMRI decoding approach, in conjunction with functional connectivity analysis, we examined the role of auditory/visual cortex and transmodal brain regions. Conceptual retrieval in the absence of external input recruited sensory and transmodal cortex. The response in transmodal regions—including anterior middle temporal gyrus—was of equal magnitude for visual and auditory features yet nevertheless captured modality information in the pattern of response across voxels. In contrast, sensory regions showed greater activation for modality-relevant features in imagination (even when external inputs did not differ). These data are consistent with the view that transmodal regions support internally generated experiences and that they play a role in integrating perceptual features encoded in memory.
Collapse
|
25
|
Lu L, Wang Q, Sheng J, Liu Z, Qin L, Li L, Gao JH. Neural tracking of speech mental imagery during rhythmic inner counting. eLife 2019; 8:48971. [PMID: 31635693 PMCID: PMC6805153 DOI: 10.7554/elife.48971] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2019] [Accepted: 10/09/2019] [Indexed: 11/13/2022] Open
Abstract
The subjective inner experience of mental imagery is among the most ubiquitous human experiences in daily life. Elucidating the neural implementation underpinning the dynamic construction of mental imagery is critical to understanding high-order cognitive function in the human brain. Here, we applied a frequency-tagging method to isolate the top-down process of speech mental imagery from bottom-up sensory-driven activities and concurrently tracked the neural processing time scales corresponding to the two processes in human subjects. Notably, by estimating the source of the magnetoencephalography (MEG) signals, we identified isolated brain networks activated at the imagery-rate frequency. In contrast, more extensive brain regions in the auditory temporal cortex were activated at the stimulus-rate frequency. Furthermore, intracranial stereotactic electroencephalogram (sEEG) evidence confirmed the participation of the inferior frontal gyrus in generating speech mental imagery. Our results indicate that a disassociated neural network underlies the dynamic construction of speech mental imagery independent of auditory perception.
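The frequency-tagging logic described above can be sketched with synthetic data: a response containing components at a stimulus rate and a slower imagery rate shows separable spectral peaks at each tagged frequency. This is a toy illustration, not the study's MEG pipeline; the rates, amplitudes, and noise level are assumptions.

```python
# Toy frequency-tagging demo: spectral peaks at an assumed stimulus rate
# (4 Hz) and imagery rate (1 Hz) in a synthetic signal.
import numpy as np

fs, dur = 200.0, 60.0                 # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(1)

signal = (np.sin(2 * np.pi * 4.0 * t)          # stimulus-rate component
          + 0.5 * np.sin(2 * np.pi * 1.0 * t)  # imagery-rate component
          + 0.3 * rng.standard_normal(t.size)) # background noise

freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2

def peak_power(f):
    """Power in the spectral bin closest to frequency f."""
    return power[np.argmin(np.abs(freqs - f))]
```

Because the 60 s epoch gives a frequency resolution finer than either tag, the power at 1 Hz and 4 Hz stands out sharply against neighboring bins, which is the separation the method exploits to isolate top-down imagery from stimulus-driven activity.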
Collapse
Affiliation(s)
- Lingxi Lu
- PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing, China.,Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
| | - Qian Wang
- Department of Clinical Neuropsychology, Sanbo Brain Hospital, Capital Medical University, Beijing, China
| | - Jingwei Sheng
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
| | - Zhaowei Liu
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
| | - Lang Qin
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China.,Department of Linguistics, The University of Hong Kong, Hong Kong, China
| | - Liang Li
- Speech and Hearing Research Center, School of Psychological and Cognitive Sciences, Peking University, Beijing, China
| | - Jia-Hong Gao
- PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing, China.,Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China.,Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing, China
| |
Collapse
|
26
|
Leclerc MP, Kellermann T, Freiherr J, Clemens B, Habel U, Regenbogen C. Externalization Errors of Olfactory Source Monitoring in Healthy Controls-An fMRI Study. Chem Senses 2019; 44:593-606. [PMID: 31414135 DOI: 10.1093/chemse/bjz055] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
Using a combined approach of functional magnetic resonance imaging (fMRI) and noninvasive brain stimulation (transcranial direct current stimulation [tDCS]), the present study investigated source memory and its link to mental imagery in the olfactory domain, as well as in the auditory domain. Source memory refers to knowledge of the origin of mental experiences, differentiating events that have actually occurred from memories of imagined events. Because of confusion between internally generated and externally perceived information, patients who are prone to hallucinations show decreased source memory accuracy; vivid mental imagery can lead to similar results in healthy controls. We tested source memory following cathodal tDCS stimulation using a mental imagery task, which required participants to perceive or imagine a set of the same olfactory and auditory stimuli during fMRI. The supplementary motor area (SMA) is involved in mental imagery across different modalities and is potentially linked to source memory. We therefore attempted to modulate participants' SMA activation with tDCS before they entered the scanner, in order to influence source memory accuracy in healthy participants. Our results showed the same source memory accuracy for the olfactory and auditory modalities, with no effects of stimulation. Finally, we found that SMA subregions were differentially involved in olfactory and auditory imagery, with dorsal SMA activation correlating with auditory source memory.
Collapse
Affiliation(s)
- Marcel P Leclerc
- Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen, Pauwelsstr, Aachen, Germany.,JARA-BRAIN, Pauwelsstr, Aachen, Germany
| | - Thilo Kellermann
- Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen, Pauwelsstr, Aachen, Germany.,JARA-BRAIN, Pauwelsstr, Aachen, Germany
| | - Jessica Freiherr
- Diagnostic and Interventional Neuroradiology, RWTH Aachen University, Pauwelsstr, Aachen, Germany.,Psychiatrische und Psychotherapeutische Klinik, Friedrich-Alexander-Universität Erlangen-Nürnberg, Schwabachanlage, Erlangen, Germany
| | - Benjamin Clemens
- Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen, Pauwelsstr, Aachen, Germany.,JARA-BRAIN, Pauwelsstr, Aachen, Germany
| | - Ute Habel
- Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen, Pauwelsstr, Aachen, Germany.,JARA-BRAIN, Pauwelsstr, Aachen, Germany
| | - Christina Regenbogen
- Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen, Pauwelsstr, Aachen, Germany.,JARA-BRAIN, Pauwelsstr, Aachen, Germany.,Department of Clinical Neuroscience, Karolinska Institutet, Tomtebodavägen 18A,17177 Stockholm, Sweden
| |
Collapse
|
27
|
Ibáñez-Marcelo E, Campioni L, Phinyomark A, Petri G, Santarcangelo EL. Topology highlights mesoscopic functional equivalence between imagery and perception: The case of hypnotizability. Neuroimage 2019; 200:437-449. [DOI: 10.1016/j.neuroimage.2019.06.044] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2018] [Revised: 05/15/2019] [Accepted: 06/19/2019] [Indexed: 12/27/2022] Open
|
28
|
Gu J, Zhang H, Liu B, Li X, Wang P, Wang B. An investigation of the neural association between auditory imagery and perception of complex sounds. Brain Struct Funct 2019; 224:2925-2937. [PMID: 31468120 DOI: 10.1007/s00429-019-01948-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2019] [Accepted: 08/23/2019] [Indexed: 01/24/2023]
Abstract
Neuroimaging studies have demonstrated that mental imagery and perception share similar neural substrates; however, ambiguities remain across different kinds of auditory imagery content, and the underlying neural relationship between the two modalities is not well characterized. In the present study, we used functional magnetic resonance imaging to explore the neural representation of imagery and perception of actual sounds from our surroundings. Univariate analysis was used to assess differences in average activation intensity between the modalities: imagery produced stronger activation in sensorimotor regions but weaker activation in auditory association cortices. Additionally, multi-voxel pattern analysis with a support vector machine classifier was implemented to decode environmental sounds within and across modalities. Significant above-chance accuracies were found in all overlapping regions for within-modality classification, whereas successful cross-modality classification was found only in sensorimotor regions. Both univariate and multivariate analyses revealed distinct representations of auditory imagery and perception in the overlapping regions, including the superior temporal gyrus and inferior frontal sulcus as well as the precentral cortex and pre-supplementary motor area. Our results confirm the overlap in activation between auditory imagery and perception reported by previous studies and suggest that these regions show dissociable representation patterns for imagery and perception of sound categories.
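The cross-modality decoding described above can be sketched with synthetic voxel patterns: train a linear SVM on "perception" trials and test it on "imagery" trials, where above-chance accuracy would indicate a shared representation. This is an illustrative toy under assumed data dimensions and noise levels, not the study's fMRI pipeline.

```python
# Toy cross-modality MVPA: linear SVM trained on simulated perception
# patterns, tested on noisier simulated imagery patterns (two categories).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_voxels, n_trials = 50, 100
centers = rng.standard_normal((2, n_voxels))   # two sound-category patterns

def simulate(noise):
    y = rng.integers(0, 2, n_trials)           # category labels per trial
    X = centers[y] + noise * rng.standard_normal((n_trials, n_voxels))
    return X, y

X_percept, y_percept = simulate(noise=1.0)     # "perception" runs
X_imagery, y_imagery = simulate(noise=1.5)     # noisier "imagery" runs

clf = SVC(kernel="linear").fit(X_percept, y_percept)
acc = clf.score(X_imagery, y_imagery)          # cross-modality accuracy
```

Generalization succeeds here because both modalities are built around the same category patterns; in regions where imagery and perception use different codes, the same procedure would fall to chance, which is the contrast the study exploits.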
Collapse
Affiliation(s)
- Jin Gu
- College of Intelligence and Computing, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, 300350, People's Republic of China
| | - Hairuo Zhang
- College of Intelligence and Computing, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, 300350, People's Republic of China
| | - Baolin Liu
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, People's Republic of China.
| | - Xianglin Li
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, 264003, Shandong, People's Republic of China
| | - Peiyuan Wang
- Department of Radiology, Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264003, Shandong, People's Republic of China
| | - Bin Wang
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, 264003, Shandong, People's Republic of China
| |
Collapse
|
29
|
Pitch-specific contributions of auditory imagery and auditory memory in vocal pitch imitation. Atten Percept Psychophys 2019; 81:2473-2481. [PMID: 31286436 DOI: 10.3758/s13414-019-01799-0] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Vocal imitation guides both music and language development. Despite the developmental significance of this behavior, a sizable minority of individuals are inaccurate at vocal pitch imitation. Although previous research suggested that inaccurate pitch imitation results from deficient sensorimotor associations between pitch perception and vocal motor planning, the cognitive processes involved in sensorimotor translation are not clearly defined. In the present research, we investigated the roles of basic cognitive processes in the vocal imitation of pitch, as well as the degree to which these processes rely on pitch-specific resources. Participants completed a battery of pitch and verbal tasks measuring pitch perception, pitch and verbal auditory imagery, pitch and verbal auditory short-term memory, and pitch imitation ability; information on participants' music background was also collected. Pitch imagery, pitch short-term memory, pitch discrimination ability, and musical experience were unique predictors of pitch imitation ability. Furthermore, pitch imagery partially mediated the relationship between pitch short-term memory and pitch imitation ability. These results indicate that vocal imitation recruits cognitive processes that rely on at least partially separate neural resources for pitch and verbal representations.
Collapse
|
30
|
Viding E, McCrory E. Towards understanding atypical social affiliation in psychopathy. Lancet Psychiatry 2019; 6:437-444. [PMID: 31006435 DOI: 10.1016/s2215-0366(19)30049-5] [Citation(s) in RCA: 62] [Impact Index Per Article: 12.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/05/2018] [Revised: 01/13/2019] [Accepted: 01/22/2019] [Indexed: 12/24/2022]
Abstract
One distinctive feature of individuals with psychopathy is their reduced motivation and capacity to develop authentic social relationships, which are founded on an enjoyment of prosocial interactions or concern for others. Surprisingly, potential neurocognitive vulnerabilities contributing to atypical social affiliation and lack of prosocial behaviours in psychopathy have yet to be systematically investigated. Research efforts have largely focused on how individuals with psychopathy process negative emotions, and how this might affect their capacity to feel guilt or empathise with others' distress. Here, we propose a framework for understanding the development of atypical social affiliation and attachment in psychopathy, and outline several key processes and neural systems speculated to underpin them. We then describe current neurocognitive findings suggesting that these processes and neural systems are compromised in individuals with, or at risk of developing, psychopathy. Finally, we consider several research directions that would advance understanding of the origin and development of social affiliation in individuals with psychopathy. This work has the potential to inform and enhance prevention and treatment strategies.
Affiliation(s)
- Essi Viding
- Division of Psychology and Language Sciences, University College London, London, UK.
- Eamon McCrory
- Division of Psychology and Language Sciences, University College London, London, UK; Anna Freud National Centre for Children and Families, London, UK
31
Kowalski J, Wypych M, Marchewka A, Dragan M. Neural Correlates of Cognitive-Attentional Syndrome: An fMRI Study on Repetitive Negative Thinking Induction and Resting State Functional Connectivity. Front Psychol 2019; 10:648. [PMID: 30971987 PMCID: PMC6443848 DOI: 10.3389/fpsyg.2019.00648] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2018] [Accepted: 03/08/2019] [Indexed: 11/13/2022] Open
Abstract
Aim: Cognitive-attentional syndrome (CAS) is the main factor underlying depressive and anxiety disorders in the metacognitive approach to psychopathology and psychotherapy. This study explores neural correlates of this syndrome during induced negative thinking, abstract thinking, and resting states. Methods: n = 25 people with high levels of CAS and n = 33 people with low levels of CAS were chosen from a population-based sample (N = 1225). These groups filled in a series of measures of CAS, negative affect, and psychopathology; they also underwent a modified rumination-induction procedure and a resting-state fMRI session. Imaging data were analyzed using static general linear model and functional connectivity approaches. Results: The two groups differed with large effect sizes on all measures of CAS, negative affect, and psychopathology. We did not find any group differences in the general linear model analyses. Functional connectivity analyses showed that high levels of CAS were related to disrupted patterns of connectivity within and between various brain networks: the default mode network, the salience network, and the central executive network. Conclusion: We showed that low- and high-CAS groups differed in functional connectivity during induced negative and abstract thinking and also in resting-state fMRI. Overall, our results suggest that people with high levels of CAS tend to have disrupted neural processing related to self-referential, task-oriented, and emotional processing.
Affiliation(s)
- Marek Wypych
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Artur Marchewka
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
32
Tiba AI, Manea L. The vividness of imagining emotional feelings in positive situations is attenuated in non-clinical dysphoria and predicts the experience of positive emotional feelings. J Clin Psychol 2018; 74:2238-2263. [PMID: 30014547 DOI: 10.1002/jclp.22676] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2017] [Revised: 05/31/2018] [Accepted: 06/22/2018] [Indexed: 11/08/2022]
Abstract
OBJECTIVE: We examined the vividness of imagining emotional feelings in positive situations (EFP) in non-clinically dysphoric and non-dysphoric individuals, and its relation to dysphoric and positive feelings. METHOD: Participants were university students in Study 1 (N = 106, 84 women; 18-45 years), Study 2 (N = 43, 39 women; 20-47 years), and Study 3 (N = 109, 92 women; 18-50 years) who, in a cross-sectional design, filled out a set of questionnaires assessing depressive symptoms and cognition measures and then completed an affective imagery task. RESULTS: Non-clinically dysphoric participants imagined EFP less vividly than non-dysphoric participants. The vividness of imagining EFP accounted for group differences in positive feelings beyond positive and negative cognition and negative mood. CONCLUSIONS: In addition to deficits in the general imagery of positive events, the attenuated vividness of EFP in non-clinically dysphoric individuals warrants attention as a separate pathway by which these individuals develop deficiencies of conscious positive feelings.
Affiliation(s)
- Alexandru I Tiba
- Department of Psychology, University of Oradea, Oradea, Bihor, Romania
- Laura Manea
- The Hull York Medical School, University of York, York, UK
33
Beaman CP. The Literary and Recent Scientific History of the Earworm: A Review and Theoretical Framework. ACTA ACUST UNITED AC 2018. [DOI: 10.1080/25742442.2018.1533735] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Affiliation(s)
- C. Philip Beaman
- School of Psychology & Clinical Language Sciences, University of Reading, Reading, UK
34
Ghai S. Effects of Real-Time (Sonification) and Rhythmic Auditory Stimuli on Recovering Arm Function Post Stroke: A Systematic Review and Meta-Analysis. Front Neurol 2018; 9:488. [PMID: 30057563 PMCID: PMC6053522 DOI: 10.3389/fneur.2018.00488] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2018] [Accepted: 06/05/2018] [Indexed: 01/15/2023] Open
Abstract
Background: External auditory stimuli have been widely used for recovering arm function post-stroke. Rhythmic and real-time auditory stimuli have been reported to enhance motor recovery by facilitating perceptuomotor representation, cross-modal processing, and neural plasticity. However, a consensus as to their influence on recovering arm function post-stroke is still lacking because of the high variability in research methods. Objective: A systematic review and meta-analysis was carried out to analyze the effects of rhythmic and real-time auditory stimuli on arm recovery post-stroke. Method: Systematic identification of published literature was performed according to PRISMA guidelines, from inception until December 2017, on online databases: Web of Science, PEDro, EBSCO, MEDLINE, Cochrane, EMBASE, and PROQUEST. Studies were critically appraised using the PEDro scale. Results: Of 1,889 records, 23 studies involving 585 patients (226 females/359 males) met our inclusion criteria. The meta-analysis revealed beneficial effects of training with both types of auditory inputs for the Fugl-Meyer assessment (Hedges' g: 0.79), the Stroke Impact Scale (0.95), and elbow range of motion (0.37), and a reduction in Wolf Motor Function Test time (-0.55). Upon further comparison, a beneficial effect of real-time auditory feedback was found over rhythmic auditory cueing for the Fugl-Meyer assessment (1.3 as compared to 0.6). Moreover, the findings suggest a training dosage of 30 min to 1 h for at least 3-5 sessions per week with either of the auditory stimuli. Conclusion: This review supports the application of external auditory stimuli for recovering arm function post-stroke.
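The pooled effect sizes reported above are Hedges' g, i.e., Cohen's d with a small-sample bias correction. A minimal sketch of the computation, using made-up group statistics rather than data from the included studies:

```python
import math

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference with the small-sample correction J
    that turns Cohen's d into Hedges' g."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp
    df = n1 + n2 - 2
    j = 1 - 3 / (4 * df - 1)  # bias-correction factor, slightly below 1
    return j * d

# Illustrative numbers only, not values from any study in the review:
print(round(hedges_g(52.0, 45.0, 9.0, 9.0, 30, 30), 2))  # prints 0.77
```

A meta-analysis then combines such per-study g values, weighting each by the inverse of its variance.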
Affiliation(s)
- Shashank Ghai
- Institute for Sports Science, Leibniz University Hannover, Hannover, Germany
35
Lima CF, Anikin A, Monteiro AC, Scott SK, Castro SL. Automaticity in the recognition of nonverbal emotional vocalizations. ACTA ACUST UNITED AC 2018; 19:219-233. [PMID: 29792444 DOI: 10.1037/emo0000429] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The ability to perceive the emotions of others is crucial for everyday social interactions. Important aspects of visual socioemotional processing, such as the recognition of facial expressions, are known to depend on largely automatic mechanisms. However, whether and how properties of automaticity extend to the auditory domain remains poorly understood. Here we ask if nonverbal auditory emotion recognition is a controlled deliberate or an automatic efficient process, using vocalizations such as laughter, crying, and screams. In a between-subjects design (N = 112), and covering eight emotions (four positive), we determined whether emotion recognition accuracy (a) is improved when participants actively deliberate about their responses (compared with when they respond as fast as possible) and (b) is impaired when they respond under low and high levels of cognitive load (concurrent task involving memorizing sequences of six or eight digits, respectively). Response latencies were also measured. Mixed-effects models revealed that recognition accuracy was high across emotions, and only minimally affected by deliberation and cognitive load; the benefits of deliberation and costs of cognitive load were significant mostly for positive emotions, notably amusement/laughter, and smaller or absent for negative ones; response latencies did not suffer under low or high cognitive load; and high recognition accuracy (approximately 90%) could be reached within 500 ms after the stimulus onset, with performance exceeding chance-level already between 300 and 360 ms. These findings indicate that key features of automaticity, namely fast and efficient/effortless processing, might be a modality-independent component of emotion recognition.
Affiliation(s)
- César F Lima
- Faculty of Psychology and Education Sciences, University of Porto
- Andrey Anikin
- Division of Cognitive Science, Department of Philosophy, Lund University
- Sophie K Scott
- Institute of Cognitive Neuroscience, University College London
- São Luís Castro
- Faculty of Psychology and Education Sciences, University of Porto
36
Wallmark Z, Deblieck C, Iacoboni M. Neurophysiological Effects of Trait Empathy in Music Listening. Front Behav Neurosci 2018; 12:66. [PMID: 29681804 PMCID: PMC5897436 DOI: 10.3389/fnbeh.2018.00066] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2017] [Accepted: 03/21/2018] [Indexed: 12/30/2022] Open
Abstract
The social cognitive basis of music processing has long been noted, and recent research has shown that trait empathy is linked to musical preferences and listening style. Does empathy modulate neural responses to musical sounds? We designed two functional magnetic resonance imaging (fMRI) experiments to address this question. In Experiment 1, subjects listened to brief isolated musical timbres while being scanned. In Experiment 2, subjects listened to excerpts of music in four conditions (familiar liked (FL)/disliked and unfamiliar liked (UL)/disliked). For both types of musical stimuli, emotional and cognitive forms of trait empathy modulated activity in sensorimotor and cognitive areas: in the first experiment, empathy was primarily correlated with activity in supplementary motor area (SMA), inferior frontal gyrus (IFG) and insula; in Experiment 2, empathy was mainly correlated with activity in prefrontal, temporo-parietal and reward areas. Taken together, these findings reveal the interactions between bottom-up and top-down mechanisms of empathy in response to musical sounds, in line with recent findings from other cognitive domains.
Affiliation(s)
- Zachary Wallmark
- Meadows School of the Arts, Southern Methodist University, Dallas, TX, United States; Ahmanson-Lovelace Brain Mapping Center, University of California, Los Angeles, Los Angeles, CA, United States
- Choi Deblieck
- Ahmanson-Lovelace Brain Mapping Center, University of California, Los Angeles, Los Angeles, CA, United States; Academic Center for ECT and Neuromodulation, University Psychiatric Center, University of Leuven, Leuven, Belgium
- Marco Iacoboni
- Ahmanson-Lovelace Brain Mapping Center, University of California, Los Angeles, Los Angeles, CA, United States; Department of Psychiatry and Biobehavioral Sciences, Semel Institute for Neuroscience and Human Behavior, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
37
Domain-specific reports of visual imagery vividness are not related to perceptual expertise. Behav Res Methods 2017; 49:733-738. [PMID: 27059364 DOI: 10.3758/s13428-016-0730-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Just as people vary in their perceptual expertise with a given domain, they also vary in their abilities to imagine objects. Visual imagery and perception share common mechanisms. However, it is unclear whether domain-specific expertise is relevant to visual imagery. Although the vividness of visual imagery is typically measured as a domain-general construct, a component of this vividness may be domain-specific. For example, individuals who have gained perceptual expertise with a specific domain might experience clearer mental images within this domain. Here we investigated whether perceptual expertise for cars relates to visual imagery vividness in the same domain, by assessing the correlations between a widely used domain-general measure of visual imagery vividness (the Vividness of Visual Imagery Questionnaire; Marks in British Journal of Psychology, 64, 17-24, 1973), a new measure of visual imagery vividness specific to cars, and behavioral tests of car expertise. We found that domain-specific imagery relates most strongly to general imagery vividness and less strongly to self-reported expertise, while it does not relate to perceptual or semantic expertise.
38
Musical Imagery Involves Wernicke's Area in Bilateral and Anti-Correlated Network Interactions in Musicians. Sci Rep 2017; 7:17066. [PMID: 29213104 PMCID: PMC5719057 DOI: 10.1038/s41598-017-17178-4] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2017] [Accepted: 11/22/2017] [Indexed: 11/27/2022] Open
Abstract
Musical imagery is the human experience of imagining music without actually hearing it. The neural basis of this mental ability is unclear, especially for musicians capable of engaging in accurate and vivid musical imagery. Here, we created a visualization of an 8-minute symphony as a silent movie and used it as a real-time cue for musicians to continuously imagine the music over repeated and synchronized sessions during functional magnetic resonance imaging (fMRI). The activations and networks evoked by musical imagery were compared with those elicited when the subjects directly listened to the same music. Musical imagery and musical perception resulted in overlapping activations at the anterolateral belt and Wernicke’s area, where the responses were correlated with the auditory features of the music. Whereas Wernicke’s area interacted within the intrinsic auditory network during musical perception, it was involved in much more complex networks during musical imagery, showing positive correlations with the dorsal attention network and the motor-control network and negative correlations with the default-mode network. Our results highlight the important role of Wernicke’s area in forming vivid musical imagery through bilateral and anti-correlated network interactions, challenging the conventional view of segregated and lateralized processing of music versus language.
39
Carey D, Miquel ME, Evans BG, Adank P, McGettigan C. Functional brain outcomes of L2 speech learning emerge during sensorimotor transformation. Neuroimage 2017; 159:18-31. [PMID: 28669904 DOI: 10.1016/j.neuroimage.2017.06.053] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2017] [Revised: 06/20/2017] [Accepted: 06/21/2017] [Indexed: 11/18/2022] Open
Abstract
Sensorimotor transformation (ST) may be a critical process in mapping perceived speech input onto non-native (L2) phonemes, in support of subsequent speech production. Yet, little is known concerning the role of ST with respect to L2 speech, particularly where learned L2 phones (e.g., vowels) must be produced in more complex lexical contexts (e.g., multi-syllabic words). Here, we charted the behavioral and neural outcomes of producing trained L2 vowels at word level, using a speech imitation paradigm and functional MRI. We asked whether participants would be able to faithfully imitate trained L2 vowels when they occurred in non-words of varying complexity (one or three syllables). Moreover, we related individual differences in imitation success during training to BOLD activation during ST (i.e., pre-imitation listening), and during later imitation. We predicted that superior temporal and peri-Sylvian speech regions would show increased activation as a function of item complexity and non-nativeness of vowels, during ST. We further anticipated that pre-scan acoustic learning performance would predict BOLD activation for non-native (vs. native) speech during ST and imitation. We found individual differences in imitation success for training on the non-native vowel tokens in isolation; these were preserved in a subsequent task, during imitation of mono- and trisyllabic words containing those vowels. fMRI data revealed a widespread network involved in ST, modulated by both vowel nativeness and utterance complexity: superior temporal activation increased monotonically with complexity, showing greater activation for non-native than native vowels when presented in isolation and in trisyllables, but not in monosyllables. Individual differences analyses showed that learning versus lack of improvement on the non-native vowel during pre-scan training predicted increased ST activation for non-native compared with native items, at insular cortex, pre-SMA/SMA, and cerebellum. 
Our results underscore the importance of ST as a process underlying successful imitation of non-native speech.
Affiliation(s)
- Daniel Carey
- Department of Psychology, Royal Holloway, University of London, TW20 0EX, UK; Combined Universities Brain Imaging Centre, Royal Holloway, University of London, TW20 0EX, UK; The Irish Longitudinal Study on Ageing (TILDA), Dept. Medical Gerontology, TCD, Dublin, Ireland
- Marc E Miquel
- William Harvey Research Institute, Queen Mary, University of London, EC1M 6BQ, UK; Clinical Physics, Barts Health NHS Trust, London, EC1A 7BE, UK
- Bronwen G Evans
- Department of Speech, Hearing & Phonetic Sciences, University College London, WC1E 6BT, UK
- Patti Adank
- Department of Speech, Hearing & Phonetic Sciences, University College London, WC1E 6BT, UK
- Carolyn McGettigan
- Department of Psychology, Royal Holloway, University of London, TW20 0EX, UK; Combined Universities Brain Imaging Centre, Royal Holloway, University of London, TW20 0EX, UK; Institute of Cognitive Neuroscience, University College London, WC1N 3AR, UK.
40
O'Nions E, Lima CF, Scott SK, Roberts R, McCrory EJ, Viding E. Reduced Laughter Contagion in Boys at Risk for Psychopathy. Curr Biol 2017; 27:3049-3055.e4. [PMID: 28966092 PMCID: PMC5640510 DOI: 10.1016/j.cub.2017.08.062] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2017] [Revised: 07/21/2017] [Accepted: 08/24/2017] [Indexed: 01/02/2023]
Abstract
Humans are intrinsically social animals, forming enduring affiliative bonds [1]. However, a striking minority with psychopathic traits, who present with violent and antisocial behaviors, tend to value other people only insofar as they contribute to their own advancement [2, 3]. Extant research has addressed the neurocognitive processes associated with aggression in such individuals, but we know remarkably little about processes underlying their atypical social affiliation. This is surprising, given the importance of affiliation and bonding in promoting social order and reducing aggression [4, 5]. Human laughter engages brain areas that facilitate social reciprocity and emotional resonance, consistent with its established role in promoting affiliation and social cohesion [6, 7, 8]. We show that, compared with typically developing boys, those at risk for antisocial behavior in general (irrespective of their risk of psychopathy) display reduced neural response to laughter in the supplementary motor area, a premotor region thought to facilitate motor readiness to join in during social behavior [9, 10, 11]. Those at highest risk for developing psychopathy additionally show reduced neural responses to laughter in the anterior insula. This region is implicated in auditory-motor processing and in linking action tendencies with emotional experience and subjective feelings [10, 12, 13]. Furthermore, this same group reports reduced desire to join in with the laughter of others—a behavioral profile in part accounted for by the attenuated anterior insula response. These findings suggest that atypical processing of laughter could represent a novel mechanism that impoverishes social relationships and increases risk for psychopathy and antisocial behavior. 
- Psychopathic traits are associated with a lack of enduring affiliative bonds
- Listening to human laughter engages brain areas that facilitate emotional resonance
- Boys at risk of psychopathy have reduced neural/behavioral responses to laughter
- This could reflect a mechanism underpinning reduced social connectedness
Affiliation(s)
- Elizabeth O'Nions
- Division of Psychology and Language Sciences, Department of Clinical, Educational and Health Psychology, University College London, Bedford Way, London WC1H 0AP, UK; Faculty of Psychology and Educational Sciences, Parenting and Special Education Research Unit, KU Leuven, Leopold Vanderkelenstraat, Leuven, Belgium
- César F Lima
- Division of Psychology and Language Sciences, Institute of Cognitive Neuroscience, Queen Square, University College London, London WC1N 3AR, UK; Faculty of Psychology and Education Sciences, University of Porto, Rua Alfredo Allen, Porto, Portugal; Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- Sophie K Scott
- Division of Psychology and Language Sciences, Institute of Cognitive Neuroscience, Queen Square, University College London, London WC1N 3AR, UK
- Ruth Roberts
- Division of Psychology and Language Sciences, Department of Clinical, Educational and Health Psychology, University College London, Bedford Way, London WC1H 0AP, UK
- Eamon J McCrory
- Division of Psychology and Language Sciences, Department of Clinical, Educational and Health Psychology, University College London, Bedford Way, London WC1H 0AP, UK
- Essi Viding
- Division of Psychology and Language Sciences, Department of Clinical, Educational and Health Psychology, University College London, Bedford Way, London WC1H 0AP, UK.
41
Evans S. What Has Replication Ever Done for Us? Insights from Neuroimaging of Speech Perception. Front Hum Neurosci 2017; 11:41. [PMID: 28203154 PMCID: PMC5285370 DOI: 10.3389/fnhum.2017.00041] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2016] [Accepted: 01/19/2017] [Indexed: 12/03/2022] Open
Affiliation(s)
- Samuel Evans
- Institute of Cognitive Neuroscience, University College London, London, UK; Department of Psychology, University of Westminster, London, UK
42
Lima CF, Krishnan S, Scott SK. Roles of Supplementary Motor Areas in Auditory Processing and Auditory Imagery. Trends Neurosci 2016; 39:527-542. [PMID: 27381836 PMCID: PMC5441995 DOI: 10.1016/j.tins.2016.06.003] [Citation(s) in RCA: 136] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2016] [Revised: 05/26/2016] [Accepted: 06/09/2016] [Indexed: 11/28/2022]
Abstract
Although the supplementary and pre-supplementary motor areas have been intensely investigated in relation to their motor functions, they are also consistently reported in studies of auditory processing and auditory imagery. This involvement is commonly overlooked, in contrast to lateral premotor and inferior prefrontal areas. We argue here for the engagement of supplementary motor areas across a variety of sound categories, including speech, vocalizations, and music, and we discuss how our understanding of auditory processes in these regions relates to findings and hypotheses from the motor literature. We suggest that supplementary and pre-supplementary motor areas play a role in facilitating spontaneous motor responses to sound, and in supporting a flexible engagement of sensorimotor processes to enable imagery and to guide auditory perception.
- Hearing and imagining sounds, including speech, vocalizations, and music, can recruit SMA and pre-SMA, which are normally discussed in relation to their motor functions.
- Emerging research indicates that individual differences in the structure and function of SMA and pre-SMA can predict performance in auditory perception and auditory imagery tasks.
- Responses during auditory processing primarily peak in pre-SMA and in the boundary area between pre-SMA and SMA; this boundary area is crucially involved in the control of speech and vocal production, suggesting that sounds engage this region in an effector-specific manner.
- Activating sound-related motor representations in SMA and pre-SMA might facilitate behavioral responses to sounds, and might also support a flexible generation of sensory predictions based on previous experience to enable imagery and guide perception.
Affiliation(s)
- César F Lima
- Institute of Cognitive Neuroscience, University College London, London, UK
- Saloni Krishnan
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, London, UK.
43
Otten M, Mann L, van Berkum JJA, Jonas KJ. No laughing matter: How the presence of laughing witnesses changes the perception of insults. Soc Neurosci 2016; 12:182-193. [PMID: 26985787 DOI: 10.1080/17470919.2016.1162194] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
Insults always sting, but the context in which they are delivered can make the effects even worse. Here we test how the brain processes insults, and whether and how the neurocognitive processing of insults is changed by the presence of a laughing crowd. Event-related potentials showed that insults, compared to compliments, evoked an increase in N400 amplitude (indicating increased lexical-semantic processing) and LPP amplitude (indicating emotional processing) when presented in isolation. When insults were perceived in the presence of a laughing crowd, the difference in N400 amplitude disappeared, while the difference in LPP activation increased. These results show that even without laughter, verbal insults receive additional neural processing over compliments, both at the lexical-semantic and emotional level. The presence of a laughing crowd has a direct effect on the neurocognitive processing of insults, leading to stronger and more sustained emotional processing.
Affiliation(s)
- Marte Otten
- Department of Social Psychology, University of Amsterdam, Amsterdam, The Netherlands; Department of Informatics, University of Sussex, Brighton, United Kingdom
- Liesbeth Mann
- Department of Social Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Kai J Jonas
- Department of Social Psychology, University of Amsterdam, Amsterdam, The Netherlands