1
Pant U, Frishkopf M, Park T, Norris CM, Papathanassoglou E. A Neurobiological Framework for the Therapeutic Potential of Music and Sound Interventions for Post-Traumatic Stress Symptoms in Critical Illness Survivors. Int J Environ Res Public Health 2022; 19:3113. [PMID: 35270804; PMCID: PMC8910287; DOI: 10.3390/ijerph19053113]
Abstract
Overview: Post-traumatic stress disorder (PTSD) has emerged as a severely debilitating psychiatric disorder associated with critical illness. Little progress has been made in the treatment of post-intensive care unit (ICU) PTSD. Aim: To synthesize neurobiological evidence on the pathophysiology of PTSD and the brain areas involved, and to highlight the potential of music to treat post-ICU PTSD. Methods: Critical narrative review to elucidate an evidence-based neurobiological framework to inform the study of music interventions for post-ICU PTSD. Literature searches were performed in PubMed and CINAHL. The Scale for the Assessment of Narrative Review Articles (SANRA) guided reporting. Results: A dysfunctional hypothalamic-pituitary-adrenal (HPA) axis feedback loop, a heightened amygdala response, hippocampal atrophy, and a hypoactive prefrontal cortex contribute to PTSD symptoms. Playing or listening to music can stimulate neurogenesis and neuroplasticity, enhance brain recovery, and normalize the stress response. Additionally, evidence supports the effectiveness of music in improving coping and emotional regulation, decreasing dissociation symptoms, reducing depression and anxiety levels, and reducing the overall severity of PTSD symptoms. Conclusions: Despite the scarcity of music interventions studied in ICU survivors, music has the potential to help people suffering from PTSD by decreasing amygdala activity, improving hippocampal and prefrontal function, and rebalancing the HPA axis.
Affiliation(s)
- Usha Pant
- Faculty of Nursing, Edmonton Clinic Health Academy (ECHA), University of Alberta, 11405-87th Ave, Edmonton, AB T6G 1C9, Canada
- Michael Frishkopf
- Department of Music, Faculty of Arts, University of Alberta, 3-98 Fine Arts Building, Edmonton, AB T6G 2C9, Canada
- Faculty of Medicine and Dentistry, University of Alberta, Walter C. MacKenzie Health Sciences Centre, Edmonton, AB T6G 2R7, Canada
- Canadian Centre for Ethnomusicology (CCE), University of Alberta, 11204-89 Ave NW, Edmonton, AB T6G 2J4, Canada
- Tanya Park
- Faculty of Nursing, Edmonton Clinic Health Academy (ECHA), University of Alberta, 11405-87th Ave, Edmonton, AB T6G 1C9, Canada
- Colleen M. Norris
- Faculty of Nursing, Edmonton Clinic Health Academy (ECHA), University of Alberta, 11405-87th Ave, Edmonton, AB T6G 1C9, Canada
- Faculty of Medicine and Dentistry, University of Alberta, Walter C. MacKenzie Health Sciences Centre, Edmonton, AB T6G 2R7, Canada
- School of Public Health, University of Alberta, ECHA 4-081, 11405-87 Ave NW, Edmonton, AB T6G 1C9, Canada
- Cardiovascular Health and Stroke Strategic Clinical Network, Alberta Health Services, Seventh Street Plaza, 14th Floor North Tower, 10030-107 Street NW, Edmonton, AB T5J 3E4, Canada
| | - Elizabeth Papathanassoglou
- Faculty of Nursing, Edmonton Clinic Health Academy (ECHA), University of Alberta, 11405-87th Ave, Edmonton, AB T6G 1C9, Canada
- Neurosciences, Rehabilitation & Vision Strategic Clinical Network, Alberta Health Services, Seventh Street Plaza, 14th Floor North Tower, 10030-107 Street NW, Edmonton, AB T5J 3E4, Canada
2
Whitehead JC, Armony JL. Intra-individual Reliability of Voice- and Music-elicited Responses and their Modulation by Expertise. Neuroscience 2022; 487:184-197. [PMID: 35182696; DOI: 10.1016/j.neuroscience.2022.02.011]
Abstract
A growing number of functional neuroimaging studies have identified regions within the temporal lobe, particularly along the planum polare and planum temporale, that respond more strongly to music than to other types of acoustic stimuli, including voice. These "music-preferred" regions have been reported using a variety of stimulus sets, paradigms, and analysis approaches, and their consistency across studies has been confirmed through meta-analyses. However, the critical question of the intra-subject reliability of these responses has received less attention. Here, we directly assessed this important issue by contrasting brain responses to musical vs. vocal stimuli in the same subjects across three consecutive fMRI runs, each using different types of stimuli. Moreover, we investigated whether these music- and voice-preferred responses were reliably modulated by expertise. Results demonstrated that the music-preferred activity previously reported in temporal regions, and its modulation by expertise, exhibit high intra-subject reliability. However, we also found that activity in some extra-temporal regions, such as the precentral and middle frontal gyri, did depend on the particular stimuli employed, which may explain why these are less consistently reported in the literature. Taken together, our findings confirm and extend the notion that specific brain regions consistently respond more strongly to certain socially relevant stimulus categories, such as faces, voices, and music, but that some of these responses depend, at least to some extent, on the specific features of the paradigm employed.
Affiliation(s)
- Jocelyne C Whitehead
- Douglas Mental Health University Institute, Verdun, Canada; BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Canada; Integrated Program in Neuroscience, McGill University, Montreal, Canada.
- Jorge L Armony
- Douglas Mental Health University Institute, Verdun, Canada; BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Canada; Department of Psychiatry, McGill University, Montreal, Canada
3
Lee D, Koo KC, Chung BH, Lee KS. Pain relieving effect of music on patients during transrectal ultrasonography: A pilot study. Prostate Int 2021; 9:181-184. [PMID: 35059354; PMCID: PMC8740156; DOI: 10.1016/j.prnil.2021.04.004]
Abstract
Background: Patient discomfort is often inevitable during transrectal ultrasonography (TRUS), a widely used modality for evaluating benign prostatic hyperplasia/lower urinary tract symptoms. Music has been suggested as a method of pain relief during urologic procedures. In this study, we investigated the effect of music on pain relief during TRUS. Methods: In a pilot study conducted from March to June 2019, the pain scores of 316 patients who underwent TRUS with or without music were quantified using the visual analog scale (VAS). Patients with hemorrhoids of grade ≥ III were excluded (n = 4), and one-to-one propensity score matching was performed between the groups. Results: Among the 312 patients included in the study (VAS score = 3.3 ± 2.4), 177 listened to music during the procedure. There were significant differences in age, prostate-specific antigen, prostate volume, International Prostate Symptom Score symptom/life scores, and VAS score between the music (+) and music (−) groups. After adjusting for relevant variables, VAS scores were significantly lower in male patients aged ≥65 years who received the music intervention than in those who did not (1.5 ± 1.4 vs. 3.0 ± 1.4, p = 0.002). Conclusion: Age was negatively associated with pain during TRUS, and music relieved pain in patients aged ≥65 years. Our findings may help improve the quality of examinations in urologic outpatient clinics.
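The one-to-one propensity score matching used in this pilot can be sketched as greedy nearest-neighbour pairing on estimated scores. The logistic model and its coefficients below are illustrative placeholders, not the covariates or estimates from the study:

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def propensity(age, psa, intercept=-4.0, b_age=0.05, b_psa=0.1):
    # Hypothetical logistic model of P(music group | covariates);
    # coefficients are made up for illustration.
    return logistic(intercept + b_age * age + b_psa * psa)

def match_one_to_one(treated, controls):
    """Greedy nearest-neighbour 1:1 matching on propensity score, without replacement."""
    pairs, available = [], list(controls)
    for t in sorted(treated, key=lambda s: s["ps"], reverse=True):
        best = min(available, key=lambda c: abs(c["ps"] - t["ps"]))
        pairs.append((t, best))
        available.remove(best)
    return pairs
```

After matching, the groups can be compared on VAS scores with a paired test; greedy matching without replacement is only one of several possible matching schemes.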
Affiliation(s)
- Kwang S. Lee
- Corresponding author. Department of Urology, Yonsei University College of Medicine, 211 Eonjuro, Gangnam-gu, 135-720 Seoul, Republic of Korea.
4
Boebinger D, Norman-Haignere SV, McDermott JH, Kanwisher N. Music-selective neural populations arise without musical training. J Neurophysiol 2021; 125:2237-2263. [PMID: 33596723; PMCID: PMC8285655; DOI: 10.1152/jn.00588.2020]
Abstract
Recent work has shown that human auditory cortex contains neural populations, anterior and posterior to primary auditory cortex, that respond selectively to music. However, it is unknown how this selectivity for music arises. To test whether musical training is necessary, we measured fMRI responses to 192 natural sounds in 10 people with almost no musical training. When voxel responses were decomposed into underlying components, this group exhibited a music-selective component that was very similar in response profile and anatomical distribution to that previously seen in individuals with moderate musical training. We also found that musical genres that were less familiar to our participants (e.g., Balinese gamelan) produced strong responses within the music component, as did drum clips with rhythm but little melody, suggesting that these neural populations are broadly responsive to music as a whole. Our findings demonstrate that the signature properties of neural music selectivity do not require musical training to develop, showing that music-selective neural populations are a fundamental and widespread property of the human brain.

NEW & NOTEWORTHY: We show that music-selective neural populations are clearly present in people without musical training, demonstrating that they are a fundamental and widespread property of the human brain. Additionally, we show that these populations respond strongly to music from unfamiliar genres as well as to music with rhythm but little pitch information, suggesting that they are broadly responsive to music as a whole.
Affiliation(s)
- Dana Boebinger
- Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, Massachusetts
- Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Sam V Norman-Haignere
- Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, PSL Research University, CNRS, Paris, France
- Zuckerman Institute for Brain Research, Columbia University, New York, New York
- Josh H McDermott
- Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, Massachusetts
- Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Nancy Kanwisher
- Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, Massachusetts
5
Martin-Saavedra JS, Ruiz-Sternberg AM. The effects of music listening on the management of pain in primary dysmenorrhea: A randomized controlled clinical trial. Nord J Music Ther 2020. [DOI: 10.1080/08098131.2020.1761867]
Affiliation(s)
- Angela Maria Ruiz-Sternberg
- Clinical Research Group, Escuela de Medicina y Ciencias de la Salud-Universidad del Rosario, Bogotá, Colombia
6
Affective auditory stimulus database: An expanded version of the International Affective Digitized Sounds (IADS-E). Behav Res Methods 2019. [PMID: 29520632; DOI: 10.3758/s13428-018-1027-6]
Abstract
Using appropriate stimuli to evoke emotions is especially important for emotion research. Psychologists have provided several standardized affective stimulus databases, such as the International Affective Picture System (IAPS) and the Nencki Affective Picture System (NAPS) for visual stimuli, and the International Affective Digitized Sounds (IADS) and the Montreal Affective Voices for auditory stimuli. However, existing auditory stimulus databases have limitations, and research using auditory stimuli remains sparse compared with research using visual stimuli. First, the number of sample sounds is limited, making it difficult to equate stimuli across emotional conditions and semantic categories. Second, some artificially created materials (music or human voice) may fail to accurately drive the intended emotional processes. Our principal aim was therefore to expand the existing auditory affective database to cover natural sounds more fully. We asked 207 participants to rate 935 sounds (including the sounds from the IADS-2) using the Self-Assessment Manikin (SAM) and three basic-emotion rating scales. The results showed that emotions in sounds can be distinguished on the affective rating scales, and the stability of the evaluations revealed that we have successfully provided a larger corpus of natural, emotionally evocative auditory stimuli covering a wide range of semantic categories. Our expanded, standardized sound database may promote a wide range of research on auditory systems and their possible interactions with other sensory modalities, encouraging direct, reliable comparisons of outcomes from different researchers in the field of psychology.
7
Whitehead JC, Armony JL. Singing in the brain: Neural representation of music and voice as revealed by fMRI. Hum Brain Mapp 2018; 39:4913-4924. [PMID: 30120854; DOI: 10.1002/hbm.24333]
Abstract
The ubiquity of music across cultures as a means of emotional expression, and its proposed evolutionary relation to speech, has motivated researchers to characterize its neural representation. Several neuroimaging studies have reported that specific regions in the anterior temporal lobe respond more strongly to music than to other auditory stimuli, including spoken voice. Nonetheless, because most studies have employed instrumental music, which has important acoustic distinctions from the human voice, questions remain about the specificity of the observed "music-preferred" areas. Here, we sought to address this issue by testing 24 healthy young adults with fast, high-resolution fMRI to record neural responses to a large and varied set of musical stimuli, which, critically, included a cappella singing as well as purely instrumental excerpts. Our results confirmed that music, vocal or instrumental, preferentially engaged regions in the superior temporal gyrus, particularly the anterior planum polare, bilaterally. In contrast, the human voice, either spoken or sung, more strongly activated a large area along the superior temporal sulcus. Findings were consistent between univariate and multivariate analyses, as well as with the use of a "silent" sparse acquisition sequence that minimizes any potential influence of scanner noise on the resulting activations. Activity in music-preferred regions could not be accounted for by any basic acoustic parameter tested, suggesting that these areas integrate, likely in a nonlinear fashion, a combination of acoustic attributes that together result in the perceived musicality of the stimuli, consistent with the proposed hierarchical processing of complex auditory information within the temporal lobes.
Affiliation(s)
- Jocelyne C Whitehead
- Douglas Mental Health University Institute, Verdun, Canada; BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Canada; Integrated Program in Neuroscience, McGill University, Montreal, Canada
- Jorge L Armony
- Douglas Mental Health University Institute, Verdun, Canada; BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Canada; Department of Psychiatry, McGill University, Montreal, Canada
8
Music is an effective intervention for the management of pain: An umbrella review. Complement Ther Clin Pract 2018; 32:103-114. [DOI: 10.1016/j.ctcp.2018.06.003]
9
Martin-Saavedra JS, Vergara-Mendez LD, Pradilla I, Vélez-van-Meerbeke A, Talero-Gutiérrez C. Standardizing music characteristics for the management of pain: A systematic review and meta-analysis of clinical trials. Complement Ther Med 2018; 41:81-89. [PMID: 30477868; DOI: 10.1016/j.ctim.2018.07.008]
Abstract
PURPOSE: To evaluate whether music characteristics such as tempo, harmony, melody, instrumentation, volume, and pitch, as defined by music theory, are described in randomized clinical trials (RCTs) evaluating the effects of music listening on quantified pain perception in adults, and whether these characteristics influence music's overall therapeutic effect. METHODS: A systematic review and meta-analysis of RCTs evaluating music listening for pain management in adults was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. The databases PubMed, Scopus, SciELO, SpringerLink, Global Health Library, Cochrane, EMBASE, and LILACS were searched. Studies published between 2004 and 2017 with quantified measurements of pain were included. Quality was evaluated using the Scottish Intercollegiate Guidelines Network methodology checklist for RCTs, and effect sizes were reported as standardized mean differences (SMDs). RESULTS: A total of 85 studies were included in the qualitative analysis, but only 56.47% described at least one music characteristic. The overall meta-analysis found a significant effect of music on pain management, with high heterogeneity (SMD -0.59, I2 = 85%). Only instrumentation characteristics (lack of lyrics, of percussion, or of nature sounds) and a 60-80 bpm tempo were described sufficiently for analysis. All three instrumentation characteristics had significant effects, but only the lack of lyrics showed acceptable heterogeneity. CONCLUSIONS: Results show that music without lyrics is effective for the management of pain. Due to insufficient data, no ideal music characteristics for pain management were identified, suggesting that music, as an intervention, needs standardization through an objective language such as that of music theory.
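The pooled effect reported in this meta-analysis (SMD -0.59, I2 = 85%) combines two standard meta-analytic quantities: the standardized mean difference (Cohen's d with a pooled standard deviation) and Higgins' I2 heterogeneity statistic. A minimal sketch, using made-up trial numbers rather than data from the review:

```python
import math

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference: (m_t - m_c) / pooled SD (Cohen's d)."""
    pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                       / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled

def i_squared(q, df):
    """Higgins' I^2: percentage of between-study variability beyond chance,
    computed from Cochran's Q and its degrees of freedom (k - 1 studies)."""
    return max(0.0, (q - df) / q) * 100.0

# Hypothetical single trial: VAS pain in a music group vs. control.
# Negative SMD values favour the music intervention.
d = smd(3.1, 2.0, 50, 4.3, 2.1, 50)
```

A random-effects pooling step would then weight each trial's SMD by its inverse variance; that step is omitted here for brevity.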
Affiliation(s)
- Juan Sebastian Martin-Saavedra
- Clinical Research Group, Escuela de Medicina y Ciencias de la Salud - Universidad del Rosario, Carrera 24 # 63c-69, Bogotá D.C., Colombia.
- Laura Daniela Vergara-Mendez
- Neuroscience Research Group NeURos, Escuela de Medicina y Ciencias de la Salud - Universidad del Rosario, Bogotá D.C., Colombia
- Iván Pradilla
- Neuroscience Research Group NeURos, Escuela de Medicina y Ciencias de la Salud - Universidad del Rosario, Bogotá D.C., Colombia
- Alberto Vélez-van-Meerbeke
- Neuroscience Research Group NeURos, Escuela de Medicina y Ciencias de la Salud - Universidad del Rosario, Bogotá D.C., Colombia
- Claudia Talero-Gutiérrez
- Neuroscience Research Group NeURos, Escuela de Medicina y Ciencias de la Salud - Universidad del Rosario, Bogotá D.C., Colombia
10
Paquette S, Takerkart S, Saget S, Peretz I, Belin P. Cross-classification of musical and vocal emotions in the auditory cortex. Ann N Y Acad Sci 2018; 1423:329-337. [PMID: 29741242; DOI: 10.1111/nyas.13666]
Abstract
Whether emotions carried by voice and music are processed by the brain using similar mechanisms has long been investigated, yet neuroimaging studies do not provide a clear picture, mainly due to a lack of control over stimuli. Here, we report a functional magnetic resonance imaging (fMRI) study using comparable stimulus material in the voice and music domains (the Montreal Affective Voices and the Musical Emotional Bursts), which include short nonverbal bursts of happiness, fear, sadness, and neutral expressions. We use a multivariate emotion-classification fMRI analysis involving cross-timbre classification as a means of comparing the neural mechanisms involved in processing emotional information in the two domains. We find, for affective stimuli in the violin, clarinet, or voice timbres, that local fMRI patterns in the bilateral auditory cortex and upper premotor regions support above-chance emotion classification when training and testing are performed within the same timbre category. More importantly, classifier performance generalized well across timbres in cross-classification schemes, albeit with a slight accuracy drop when crossing the voice-music boundary, providing evidence for a shared neural code for processing musical and vocal emotions, with possibly a cost for the voice due to its evolutionary significance.
Affiliation(s)
- Sébastien Paquette
- Department of Psychology, International Laboratory for Brain Music and Sound Research, Université de Montréal, Montreal, Canada
- Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts
- Sylvain Takerkart
- Institut de Neurosciences de La Timone, CNRS & Aix-Marseille University, Marseille, France
- Shinji Saget
- Institut de Neurosciences de La Timone, CNRS & Aix-Marseille University, Marseille, France
- Isabelle Peretz
- Department of Psychology, International Laboratory for Brain Music and Sound Research, Université de Montréal, Montreal, Canada
- Pascal Belin
- Department of Psychology, International Laboratory for Brain Music and Sound Research, Université de Montréal, Montreal, Canada
- Institut de Neurosciences de La Timone, CNRS & Aix-Marseille University, Marseille, France
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
11
Mansouri FA, Acevedo N, Illipparampil R, Fehring DJ, Fitzgerald PB, Jaberzadeh S. Interactive effects of music and prefrontal cortex stimulation in modulating response inhibition. Sci Rep 2017; 7:18096. [PMID: 29273796; PMCID: PMC5741740; DOI: 10.1038/s41598-017-18119-x]
Abstract
Influential hypotheses propose that alterations in emotional state influence decision processes and the executive control of behavior. Both music and transcranial direct current stimulation (tDCS) of the prefrontal cortex affect emotional state; however, the interactive effects of music and tDCS on executive functions remain unknown. Learning to inhibit inappropriate responses is an important aspect of executive control that is guided by assessing decision outcomes such as errors. We found that high-tempo music, but not low-tempo music or low-level noise, significantly influenced the learning and implementation of inhibitory control. In addition, a brief period of tDCS over the prefrontal cortex specifically interacted with high-tempo music and altered its effects on executive functions. Measuring participants' event-related autonomic and arousal responses indicated that exposure to task demands and practice led to a decline in the arousal response to decision outcomes, and high-tempo music enhanced these practice-related processes. However, tDCS specifically moderated the effect of high-tempo music on the arousal response to errors and concomitantly restored learning and improvement in executive functions. Here, we show that tDCS and music interactively influence the learning and implementation of inhibitory control. Our findings indicate that alterations in the arousal-emotional response to decision outcomes might underlie these interactive effects.
Affiliation(s)
- Farshad Alizadeh Mansouri
- Department of Physiology, Cognitive Neuroscience Laboratory, Monash Biomedicine Discovery Institute, Monash University, Victoria 3800, Australia; ARC Centre of Excellence in Integrative Brain Function, Monash University, Victoria, Australia
- Nicola Acevedo
- Department of Physiology, Cognitive Neuroscience Laboratory, Monash Biomedicine Discovery Institute, Monash University, Victoria, 3800, Australia
- Rosin Illipparampil
- Department of Physiology, Cognitive Neuroscience Laboratory, Monash Biomedicine Discovery Institute, Monash University, Victoria, 3800, Australia
- Daniel J Fehring
- Department of Physiology, Cognitive Neuroscience Laboratory, Monash Biomedicine Discovery Institute, Monash University, Victoria 3800, Australia; ARC Centre of Excellence in Integrative Brain Function, Monash University, Victoria, Australia
- Paul B Fitzgerald
- Monash Alfred Psychiatry Research Centre, Central Clinical School, Monash University and the Alfred Hospital, Victoria, Australia
- Shapour Jaberzadeh
- Department of Physiotherapy, Non-invasive Brain Stimulation & Neuroplasticity Laboratory, Monash University, Victoria, 3199, Australia
12
Nolden S, Rigoulot S, Jolicoeur P, Armony JL. Effects of musical expertise on oscillatory brain activity in response to emotional sounds. Neuropsychologia 2017; 103:96-105. [DOI: 10.1016/j.neuropsychologia.2017.07.014]
13
Grossberg S. Towards solving the hard problem of consciousness: The varieties of brain resonances and the conscious experiences that they support. Neural Netw 2016; 87:38-95. [PMID: 28088645; DOI: 10.1016/j.neunet.2016.11.003]
Abstract
The hard problem of consciousness is the problem of explaining how we experience qualia or phenomenal experiences, such as seeing, hearing, and feeling, and knowing what they are. To solve this problem, a theory of consciousness needs to link brain to mind by modeling how emergent properties of several brain mechanisms interacting together embody detailed properties of individual conscious psychological experiences. This article summarizes evidence that Adaptive Resonance Theory, or ART, accomplishes this goal. ART is a cognitive and neural theory of how advanced brains autonomously learn to attend, recognize, and predict objects and events in a changing world. ART has predicted that "all conscious states are resonant states" as part of its specification of mechanistic links between processes of consciousness, learning, expectation, attention, resonance, and synchrony. It hereby provides functional and mechanistic explanations of data ranging from individual spikes and their synchronization to the dynamics of conscious perceptual, cognitive, and cognitive-emotional experiences. ART has reached sufficient maturity to begin classifying the brain resonances that support conscious experiences of seeing, hearing, feeling, and knowing. Psychological and neurobiological data in both normal individuals and clinical patients are clarified by this classification. This analysis also explains why not all resonances become conscious, and why not all brain dynamics are resonant. The global organization of the brain into computationally complementary cortical processing streams (complementary computing), and the organization of the cerebral cortex into characteristic layers of cells (laminar computing), figure prominently in these explanations of conscious and unconscious processes. Alternative models of consciousness are also discussed.
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Boston University, 677 Beacon Street, Boston, MA 02215, USA; Graduate Program in Cognitive and Neural Systems, Departments of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering Boston University, 677 Beacon Street, Boston, MA 02215, USA.
14
Rigoulot S, Armony JL. Early selectivity for vocal and musical sounds: electrophysiological evidence from an adaptation paradigm. Eur J Neurosci 2016; 44:2786-2794. [PMID: 27600697; DOI: 10.1111/ejn.13391]
Abstract
There is growing interest in characterizing the neural basis of music perception and, in particular, in assessing how similar, or not, it is to that of speech. To further explore this question, we employed an EEG adaptation paradigm in which we compared responses to short sounds belonging to the same category, either speech (pseudo-sentences) or music (piano or violin), depending on whether they were immediately preceded by a same- or different-category sound. We observed a larger reduction in N100 component magnitude in response to musical sounds when they were preceded by music (either the same or a different instrument) than by speech. In contrast, N100 amplitude was not affected by the preceding stimulus category in the case of speech. For the P200 component, we observed a diminished amplitude when speech sounds were preceded by speech, compared with music. No such decrease was found when we compared the responses to music sounds. These differences in the processing of speech and music are consistent with the proposal that some degree of category selectivity for these two classes of complex stimuli already occurs at early stages of auditory processing, possibly subserved by partly separate neuronal populations.
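The category-specific N100 reduction described in this adaptation paradigm is often summarized as an adaptation index: the proportional drop in amplitude when a sound is preceded by a same-category sound relative to a different-category one. A small sketch with illustrative amplitudes, not values from the study:

```python
def adaptation_index(amp_different, amp_same):
    """Proportional ERP amplitude reduction for same-category repetition.

    Inputs are absolute component magnitudes (e.g., |N100| in microvolts);
    a positive index means the response adapted (shrank) when the preceding
    sound came from the same category.
    """
    return (amp_different - amp_same) / amp_different

# Illustrative pattern: music adapts after music, speech barely changes.
music_adapt = adaptation_index(amp_different=5.0, amp_same=3.5)   # ≈ 0.30
speech_adapt = adaptation_index(amp_different=5.0, amp_same=4.9)  # ≈ 0.02
```

Comparing such indices across categories is one simple way to quantify the asymmetry the abstract reports between music and speech.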
Affiliation(s)
- Simon Rigoulot
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada; Department of Psychiatry, Faculty of Medicine, Douglas Mental Health University Institute, 6875 LaSalle Boulevard, Montreal, QC, H4H 1R3, Canada
- Jorge L Armony
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada; Department of Psychiatry, Faculty of Medicine, Douglas Mental Health University Institute, 6875 LaSalle Boulevard, Montreal, QC, H4H 1R3, Canada
15
Saliba J, Bortfeld H, Levitin DJ, Oghalai JS. Functional near-infrared spectroscopy for neuroimaging in cochlear implant recipients. Hear Res 2016; 338:64-75. [PMID: 26883143 DOI: 10.1016/j.heares.2016.02.005] [Citation(s) in RCA: 51] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/25/2015] [Revised: 12/18/2015] [Accepted: 02/12/2016] [Indexed: 10/22/2022]
Abstract
Functional neuroimaging can provide insight into the neurobiological factors that contribute to the variations in individual hearing outcomes following cochlear implantation. To date, measuring neural activity within the auditory cortex of cochlear implant (CI) recipients has been challenging, primarily because the use of traditional neuroimaging techniques is limited in people with CIs. Functional near-infrared spectroscopy (fNIRS) is an emerging technology that offers benefits in this population because it is non-invasive, compatible with CI devices, and not subject to electrical artifacts. However, there are important considerations to be made when using fNIRS to maximize the signal-to-noise ratio and to best identify meaningful cortical responses. This review considers these issues, the current data, and future directions for using fNIRS as a clinical application in individuals with CIs. This article is part of a Special Issue entitled "Annual Reviews 2016".
Affiliation(s)
- Joe Saliba
- Department of Otolaryngology - Head and Neck Surgery, Stanford University, Stanford, CA 94305, USA; Department of Otolaryngology - Head and Neck Surgery, McGill University, 1001 Boul. Decarie, Montreal, QC, Canada
- Heather Bortfeld
- Psychological Sciences, University of California-Merced, 5200 North Lake Road, Merced, CA 95343, USA
- Daniel J Levitin
- Department of Psychology, McGill University, 1205 Avenue Penfield, H3A 1B1, Montreal, QC, Canada
- John S Oghalai
- Department of Otolaryngology - Head and Neck Surgery, Stanford University, Stanford, CA 94305, USA.
16
Peretz I, Vuvan D, Lagrois MÉ, Armony JL. Neural overlap in processing music and speech. Philos Trans R Soc Lond B Biol Sci 2016; 370:20140090. [PMID: 25646513 DOI: 10.1098/rstb.2014.0090] [Citation(s) in RCA: 109] [Impact Index Per Article: 13.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Neural overlap in processing music and speech, as measured by the co-activation of brain regions in neuroimaging studies, may suggest that parts of the neural circuitries established for language may have been recycled during evolution for musicality, or vice versa that musicality served as a springboard for language emergence. Such a perspective has important implications for several topics of general interest besides evolutionary origins. For instance, neural overlap is an important premise for the possibility of music training to influence language acquisition and literacy. However, neural overlap in processing music and speech does not entail sharing neural circuitries. Neural separability between music and speech may occur in overlapping brain regions. In this paper, we review the evidence and outline the issues faced in interpreting such neural data, and argue that converging evidence from several methodologies is needed before neural overlap is taken as evidence of sharing.
Affiliation(s)
- Isabelle Peretz
- International Laboratory of Brain, Music and Sound Research (BRAMS), and Center for Research on Brain, Language and Music (CRBLM), University of Montreal, Montreal, Quebec, Canada; Department of Psychology, University of Montreal, Montreal, Quebec, Canada
- Dominique Vuvan
- International Laboratory of Brain, Music and Sound Research (BRAMS), and Center for Research on Brain, Language and Music (CRBLM), University of Montreal, Montreal, Quebec, Canada; Department of Psychology, University of Montreal, Montreal, Quebec, Canada
- Marie-Élaine Lagrois
- International Laboratory of Brain, Music and Sound Research (BRAMS), and Center for Research on Brain, Language and Music (CRBLM), University of Montreal, Montreal, Quebec, Canada; Department of Psychology, University of Montreal, Montreal, Quebec, Canada
- Jorge L Armony
- International Laboratory of Brain, Music and Sound Research (BRAMS), and Center for Research on Brain, Language and Music (CRBLM), University of Montreal, Montreal, Quebec, Canada; Department of Psychiatry, McGill University and Douglas Mental Health University Institute, Montreal, Quebec, Canada