1
Arya R, Ervin B, Greiner HM, Buroker J, Byars AW, Tenney JR, Arthur TM, Fong SL, Lin N, Frink C, Rozhkov L, Scholle C, Skoch J, Leach JL, Mangano FT, Glauser TA, Hickok G, Holland KD. Emotional facial expression and perioral motor functions of the human auditory cortex. Clin Neurophysiol 2024; 163:102-111. PMID: 38729074; PMCID: PMC11176009; DOI: 10.1016/j.clinph.2024.04.017.
Abstract
OBJECTIVE We investigated the role of the transverse temporal gyrus and adjacent cortex (TTG+) in facial expressions and perioral movements. METHODS In 31 patients undergoing stereo-electroencephalography monitoring, we describe behavioral responses elicited by electrical stimulation within the TTG+. Task-induced high-gamma modulation (HGM), auditory evoked responses, and resting-state connectivity were used to investigate the cortical sites showing different types of responses on electrical stimulation. RESULTS Changes in facial expressions and perioral movements were elicited on electrical stimulation within TTG+ in 9 (29%) and 10 (32%) patients, respectively, in addition to the more common language responses (naming interruptions, auditory hallucinations, paraphasic errors). All functional sites showed auditory task-induced HGM and evoked responses, validating their location within the auditory cortex; however, motor sites showed lower peak amplitudes and longer peak latencies compared to language sites. Significant first-degree connections for motor sites included precentral, anterior cingulate, parahippocampal, and anterior insular gyri, whereas those for language sites included posterior superior temporal, posterior middle temporal, inferior frontal, supramarginal, and angular gyri. CONCLUSIONS Multimodal data suggest that TTG+ may participate in auditory-motor integration. SIGNIFICANCE TTG+ likely participates in facial expressions in response to emotional cues during an auditory discourse.
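The task-induced high-gamma modulation (HGM) used here as a functional marker is conventionally extracted as the amplitude envelope of band-pass-filtered intracranial EEG. Below is a minimal, hypothetical sketch of that step with SciPy; the band edges, sampling rate, and simulated `ieeg` trace are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_envelope(x, fs, band=(50.0, 150.0), order=4):
    """Band-pass a signal in the high-gamma range and return its
    analytic-amplitude envelope, a common proxy for HGM."""
    nyq = fs / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, x, axis=-1)      # zero-phase band-pass
    return np.abs(hilbert(filtered, axis=-1))  # instantaneous amplitude

# Simulated iEEG trace (hypothetical): 2 s at 1000 Hz with an 80 Hz component
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
ieeg = np.random.randn(t.size) + 0.5 * np.sin(2 * np.pi * 80 * t)
print(high_gamma_envelope(ieeg, fs).shape)     # (2000,)
```

Zero-phase filtering (`filtfilt`) avoids introducing latency shifts, which matters when peak latencies are later compared across sites, as in this study.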
Affiliation(s)
- Ravindra Arya: Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA; Department of Electrical Engineering and Computer Science, University of Cincinnati, Cincinnati, OH, USA
- Brian Ervin: Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Electrical Engineering and Computer Science, University of Cincinnati, Cincinnati, OH, USA
- Hansel M Greiner: Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Jason Buroker: Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Anna W Byars: Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Jeffrey R Tenney: Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Todd M Arthur: Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Susan L Fong: Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Nan Lin: Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Clayton Frink: Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Leonid Rozhkov: Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Craig Scholle: Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Jesse Skoch: Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA; Division of Pediatric Neurosurgery, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- James L Leach: Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA; Division of Pediatric Neuroradiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Francesco T Mangano: Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA; Division of Pediatric Neurosurgery, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Tracy A Glauser: Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Gregory Hickok: Department of Cognitive Sciences, Department of Language Science, University of California, Irvine, CA, USA
- Katherine D Holland: Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
2
Benner J, Reinhardt J, Christiner M, Wengenroth M, Stippich C, Schneider P, Blatow M. Temporal hierarchy of cortical responses reflects core-belt-parabelt organization of auditory cortex in musicians. Cereb Cortex 2023:7030622. PMID: 36786655; DOI: 10.1093/cercor/bhad020.
Abstract
Human auditory cortex (AC) organization resembles the core-belt-parabelt organization in nonhuman primates. Previous studies assessed mostly spatial characteristics; temporal aspects have so far received little attention. We employed co-registration of functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) in musicians with and without absolute pitch (AP) to achieve spatial and temporal segregation of human auditory responses. First, individual fMRI activations induced by complex harmonic tones were consistently identified in four distinct regions of interest within AC, namely in medial Heschl's gyrus (HG), lateral HG, anterior superior temporal gyrus (STG), and planum temporale (PT). Second, we analyzed the temporal dynamics of individual MEG responses at the locations of the corresponding fMRI activations. In the AP group, the auditory evoked P2 onset occurred ~25 ms earlier in the right as compared with the left PT and ~15 ms earlier in the right as compared with the left anterior STG. This effect was consistent at the individual level and correlated with AP proficiency. Based on the combined application of MEG and fMRI measurements, we were able for the first time to demonstrate a characteristic temporal hierarchy ("chronotopy") of human auditory regions in relation to specific auditory abilities, reflecting the prediction of serial processing from nonhuman studies.
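The reported hemispheric latency differences come down to measuring response peaks per region. A toy sketch of extracting a P2 peak latency from an averaged response is shown below; the search window, sampling rate, and simulated waveforms are assumptions for illustration, not the study's analysis.

```python
import numpy as np

def peak_latency(evoked, times, window=(0.15, 0.30)):
    """Latency (s) of the maximum of an averaged response within a
    search window, e.g. for the auditory P2 (~150-300 ms)."""
    mask = (times >= window[0]) & (times <= window[1])
    return times[mask][np.argmax(evoked[mask])]

# Simulated averaged responses for left and right planum temporale
fs = 600.0
times = np.arange(-0.1, 0.5, 1.0 / fs)
left = np.exp(-((times - 0.225) ** 2) / 0.001)   # P2 peaking ~225 ms
right = np.exp(-((times - 0.200) ** 2) / 0.001)  # ~25 ms earlier on the right
print(peak_latency(right, times) - peak_latency(left, times))  # ~ -0.025 s
```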
Affiliation(s)
- Jan Benner: Department of Neuroradiology and Section of Biomagnetism, University of Heidelberg Hospital, Heidelberg, Germany
- Julia Reinhardt: Department of Cardiology and Cardiovascular Research Institute Basel (CRIB), University Hospital Basel, University of Basel, Basel, Switzerland; Department of Orthopedic Surgery and Traumatology, University Hospital Basel, University of Basel, Basel, Switzerland
- Markus Christiner: Centre for Systematic Musicology, University of Graz, Graz, Austria; Department of Musicology, Vitols Jazeps Latvian Academy of Music, Riga, Latvia
- Martina Wengenroth: Department of Neuroradiology, University Medical Center Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
- Christoph Stippich: Department of Neuroradiology and Radiology, Kliniken Schmieder, Allensbach, Germany
- Peter Schneider: Department of Neuroradiology and Section of Biomagnetism, University of Heidelberg Hospital, Heidelberg, Germany; Centre for Systematic Musicology, University of Graz, Graz, Austria; Department of Musicology, Vitols Jazeps Latvian Academy of Music, Riga, Latvia
- Maria Blatow: Section of Neuroradiology, Department of Radiology and Nuclear Medicine, Neurocenter, Cantonal Hospital Lucerne, University of Lucerne, Lucerne, Switzerland
3
Higgins NC, Scurry AN, Jiang F, Little DF, Alain C, Elhilali M, Snyder JS. Adaptation in the sensory cortex drives bistable switching during auditory stream segregation. Neurosci Conscious 2023; 2023:niac019. PMID: 36751309; PMCID: PMC9899071; DOI: 10.1093/nc/niac019.
Abstract
Current theories of perception emphasize the role of neural adaptation, inhibitory competition, and noise as key components that lead to switches in perception. Supporting evidence comes from neurophysiological findings of specific neural signatures in modality-specific and supramodal brain areas that appear to be critical to switches in perception. We used functional magnetic resonance imaging to study brain activity around the time of switches in perception while participants listened to a bistable auditory stream segregation stimulus, which can be heard as one integrated stream of tones or two segregated streams of tones. The auditory thalamus showed more activity around the time of a switch from segregated to integrated than during periods of stable integrated perception; in contrast, the rostral anterior cingulate cortex and the inferior parietal lobule showed more activity around the time of a switch from integrated to segregated than during periods of stable segregated perception, consistent with prior findings of asymmetries in brain activity depending on switch direction. In sound-responsive areas of the auditory cortex, neural activity increased in strength preceding switches in perception and declined in strength over time following switches in perception. Such dynamics in the auditory cortex are consistent with the role of adaptation proposed by computational models of visual and auditory bistable switching, whereby the strength of neural activity decreases following a switch in perception, eventually destabilizing the current percept enough to lead to a switch to an alternative percept.
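Analyses of activity "around the time" of switches typically average the signal in a window locked to each reported switch. A minimal sketch under assumed window sizes is shown below; the event model, units, and simulated data are illustrative, not the study's pipeline.

```python
import numpy as np

def switch_locked_average(ts, switch_idx, pre=4, post=8):
    """Average a region's time course in a window around perceptual
    switches (indices in volumes; window sizes are arbitrary here)."""
    epochs = [ts[i - pre:i + post] for i in switch_idx
              if i - pre >= 0 and i + post <= ts.size]
    return np.mean(epochs, axis=0)

# Simulated time course with brief activity increases at reported switches
rng = np.random.default_rng(5)
ts = rng.standard_normal(400)
switches = np.array([50, 120, 210, 300])
for i in switches:
    ts[i - 2:i + 3] += 1.5       # switch-related transient
print(switch_locked_average(ts, switches))
```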
Affiliation(s)
- Nathan C Higgins: Department of Communication Sciences and Disorders, University of South Florida, 4202 E. Fowler Avenue, PCD1017, Tampa, FL 33620, USA
- Alexandra N Scurry: Department of Psychology, University of Nevada, 1664 N. Virginia Street Mail Stop 0296, Reno, NV 89557, USA
- Fang Jiang: Department of Psychology, University of Nevada, 1664 N. Virginia Street Mail Stop 0296, Reno, NV 89557, USA
- David F Little: Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218, USA
- Claude Alain: Rotman Research Institute, Baycrest Health Sciences, 3560 Bathurst Street, Toronto, ON M6A 2E1, Canada
- Mounya Elhilali: Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218, USA
- Joel S Snyder: Department of Psychology, University of Nevada, 4505 Maryland Parkway Mail Stop 5030, Las Vegas, NV 89154, USA
4
Nicastri M, Giallini I, Inguscio BMS, Turchetta R, Guerzoni L, Cuda D, Portanova G, Ruoppolo G, Dincer D'Alessandro H, Mancini P. The influence of auditory selective attention on linguistic outcomes in deaf and hard of hearing children with cochlear implants. Eur Arch Otorhinolaryngol 2023; 280:115-124. PMID: 35831674; DOI: 10.1007/s00405-022-07463-y.
Abstract
PURPOSE Auditory selective attention (ASA) is crucial for focusing on significant auditory stimuli without being distracted by irrelevant auditory signals, and it plays an important role in language development. The present study aimed to investigate the unique contribution of ASA to the linguistic levels achieved by a group of children with cochlear implants (CI). METHODS Thirty-four CI children with a median age of 10.05 years were tested using both the "Batteria per la Valutazione dell'Attenzione Uditiva e della Memoria di Lavoro Fonologica nell'età evolutiva-VAUM-ELF" to assess their ASA skills and two Italian standardized tests to measure lexical and morphosyntactic skills. A regression analysis, including demographic and audiological variables, was conducted to assess the unique contribution of ASA to language skills. RESULTS The percentage of CI children with adequate ASA performance ranged from 29.4% to 50%. Bilateral CI children performed better than their monolateral peers. ASA skills contributed significantly to linguistic skills, alone accounting for 25% of the observed variance. CONCLUSIONS The present findings are clinically relevant as they highlight the importance of assessing ASA skills as early as possible, reflecting their important role in language development. Using simple clinical tools, ASA skills could be studied at early developmental stages. This may provide information beyond that from traditional auditory tests and may allow us to implement specific training programs that could positively contribute to the development of the neural mechanisms of ASA and, consequently, induce improvements in language skills.
Affiliation(s)
- Maria Nicastri: Department of Sense Organs, Sapienza University, Rome, Italy
- Ilaria Giallini: Department of Sense Organs, Sapienza University, Rome, Italy
- Letizia Guerzoni: Department of Otorhinolaryngology, "Guglielmo da Saliceto" Hospital, Piacenza, Italy
- Domenico Cuda: Department of Otorhinolaryngology, "Guglielmo da Saliceto" Hospital, Piacenza, Italy
- Giovanni Ruoppolo: I.R.C.C.S. San Raffaele Pisana, Via Nomentana, 401, 00162, Rome, Italy
5
Weichenberger M, Bug MU, Brühl R, Ittermann B, Koch C, Kühn S. Air-conducted ultrasound below the hearing threshold elicits functional changes in the cognitive control network. PLoS One 2022; 17:e0277727. PMID: 36512612; PMCID: PMC9747049; DOI: 10.1371/journal.pone.0277727.
Abstract
Air-conducted ultrasound (> 17.8 kHz; US) is produced by an increasing number of technical devices in our daily environment. While several studies indicate that exposure to US in public spaces can lead to subjective symptoms such as 'annoyance' or 'difficulties in concentration', the effects of US on brain activity are poorly understood. In the present study, individual hearing thresholds (HT) for sounds in the US frequency spectrum were assessed in 21 normal-hearing participants. The effects of US were then investigated by means of functional magnetic resonance imaging (fMRI). Fifteen of these participants underwent three resting-state acquisitions, two with a 21.5 kHz tone presented monaurally at 5 dB above (ATC) and 10 dB below (BTC) the HT and one without auditory stimulation (NTC), as well as three runs of an n-back working memory task involving similar stimulus conditions (n-ATC, n-BTC, n-NTC). Comparing data gathered during n-NTC vs. fixation, we found that task performance was associated with recruitment of regions within the cognitive control network, including prefrontal and parietal areas as well as the cerebellum. Direct contrasts of the two stimulus conditions (n-ATC & n-BTC) vs. n-NTC showed no significant differences in brain activity, irrespective of whether a whole-brain or a region-of-interest approach with primary auditory cortex as the seed was used. Likewise, no differences were found when the resting-state runs were compared. However, contrast analysis (n-BTC vs. n-ATC) revealed a strong activation in bilateral inferior frontal gyrus (IFG, triangular part) only when US was presented below the HT (p < 0.001, cluster > 30). In addition, IFG activation was also associated with faster reaction times during n-BTC (p = 0.033) as well as with verbal reports obtained after resting-state: the more unpleasant the sound was perceived during BTC vs. ATC, the higher the activation in bilateral IFG, and vice versa (p = 0.003). While this study provides no evidence for activation of primary auditory cortex in response to audible US (even though participants heard the sounds), it indicates that US can lead to changes in the cognitive control network and affect cognitive performance only when presented below the HT. Activation of bilateral IFG could reflect an increase in cognitive demand when focusing on task performance in the presence of slightly unpleasant and/or distracting US that may not be fully controllable by attentional mechanisms.
Affiliation(s)
- Markus Weichenberger: Max Planck Institute for Human Development, Lise Meitner Group for Environmental Neuroscience, Berlin, Germany
- Marion U. Bug: Physikalisch-Technische Bundesanstalt (PTB), Berlin, Germany
- Rüdiger Brühl: Physikalisch-Technische Bundesanstalt (PTB), Berlin, Germany
- Bernd Ittermann: Physikalisch-Technische Bundesanstalt (PTB), Berlin, Germany
- Christian Koch: Physikalisch-Technische Bundesanstalt (PTB), Berlin, Germany
- Simone Kühn: Max Planck Institute for Human Development, Lise Meitner Group for Environmental Neuroscience, Berlin, Germany; University Clinic Hamburg-Eppendorf, Clinic and Policlinic for Psychiatry and Psychotherapy, Hamburg, Germany
6
Torppa R, Kuuluvainen S, Lipsanen J. The development of cortical processing of speech differs between children with cochlear implants and normal hearing and changes with parental singing. Front Neurosci 2022; 16:976767. PMID: 36507354; PMCID: PMC9731313; DOI: 10.3389/fnins.2022.976767.
Abstract
Objective The aim of the present study was to investigate speech processing development in children with normal hearing (NH) and children with cochlear implants (CI) using a multifeature event-related potential (ERP) paradigm. Singing is associated with enhanced attention and speech perception; therefore, its connection to ERPs was investigated in the CI group. Methods The paradigm included five change types in a pseudoword: two easy to detect (duration, gap) and three difficult to detect (vowel, pitch, intensity) with CIs. The positive mismatch responses (pMMR), mismatch negativity (MMN), P3a, and late differentiating negativity (LDN) responses of preschoolers (below 6 years 9 months) and schoolchildren (above 6 years 9 months) with NH or CIs at two time points (T1, T2) were investigated with Linear Mixed Modeling (LMM). For the CI group, the association between singing at home and ERP development was modeled with LMM. Results Overall, responses elicited by the easy- and difficult-to-detect changes differed between the CI and NH groups. Compared to the NH group, the CI group had smaller MMNs to vowel duration changes and gaps, larger P3a responses to gaps, and larger pMMRs and smaller LDNs to vowel identity changes. Preschoolers had smaller P3a responses and larger LDNs to gaps, and larger pMMRs to vowel identity changes, than schoolchildren. In addition, the pMMRs to gaps increased from T1 to T2 in preschoolers. More parental singing in the CI group was associated with increasing pMMR amplitudes, and less parental singing with decreasing P3a amplitudes, from T1 to T2. Conclusion The multifeature paradigm is suitable for assessing cortical speech processing development in children. In children with CIs, cortical discrimination is often reflected in pMMR and P3a responses, whereas in children with NH it is reflected in MMN and LDN responses. Moreover, the cortical speech discrimination of children with CIs develops late, but over time and age their processing of speech sound changes comes to resemble that of children with NH. Importantly, multisensory activities such as parental singing can improve discrimination of, and attention shifting toward, speech changes in children with CIs. These novel results should be taken into account in future research and rehabilitation.
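The Linear Mixed Modeling is not specified in detail in the abstract. Below is a hypothetical statsmodels sketch of how ERP amplitudes might be modeled with a random intercept per child; the column names, factors, and simulated data are assumptions, not the authors' model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated long-format data: one row per child x time point
rng = np.random.default_rng(0)
n_children = 40
df = pd.DataFrame({
    "child": np.repeat(np.arange(n_children), 2),
    "group": np.repeat(rng.choice(["CI", "NH"], n_children), 2),
    "time": np.tile(["T1", "T2"], n_children),
})
df["amplitude"] = rng.normal(-1.0, 0.5, len(df))  # e.g., MMN in microvolts

# Random intercept per child accounts for the repeated measures
model = smf.mixedlm("amplitude ~ group * time", df, groups=df["child"])
print(model.fit().summary())
```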
Affiliation(s)
- Ritva Torppa: Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Centre of Excellence in Music, Mind, Body and Brain, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Soila Kuuluvainen: Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Department of Digital Humanities, Faculty of Arts, University of Helsinki, Helsinki, Finland
- Jari Lipsanen: Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
7
Bonomo ME, Brandt AK, Frazier JT, Karmonik C. Music to My Ears: Neural modularity and flexibility differ in response to real-world music stimuli. IBRO Neurosci Rep 2022; 12:98-107. PMID: 35106517; PMCID: PMC8784322; DOI: 10.1016/j.ibneur.2021.12.007.
Abstract
Music listening involves many simultaneous neural operations, including auditory processing, working memory, temporal sequencing, pitch tracking, anticipation, reward, and emotion; thus, a full investigation of music cognition would benefit from whole-brain analyses. Here, we quantify whole-brain activity while participants listen to a variety of music and speech auditory pieces using two network measures that are grounded in complex systems theory: modularity, which measures the degree to which brain regions interact in communities, and flexibility, which measures the rate at which brain regions switch the communities to which they belong. In a music and brain connectivity study that is part of a larger clinical investigation into music listening and stroke recovery at Houston Methodist Hospital's Center for Performing Arts Medicine, functional magnetic resonance imaging (fMRI) was performed on healthy participants while they listened to self-selected music to which they felt a positive emotional attachment, as well as culturally familiar music (J.S. Bach), culturally unfamiliar music (Gagaku court music of medieval Japan), and several excerpts of speech. There was a marked contrast among the whole-brain networks during the different types of auditory pieces, in particular for the unfamiliar music. During the self-selected and Bach tracks, participants' whole-brain networks exhibited modular organization that was significantly coordinated with the network flexibility. Meanwhile, when the Gagaku music was played, this relationship between brain network modularity and flexibility largely disappeared. In addition, while the auditory cortex's flexibility during the self-selected piece was equivalent to that during Bach, it was more flexible during Gagaku. The results suggest that the modularity and flexibility measures of whole-brain activity have the potential to lead to new insights into the complex neural function that occurs during music perception of real-world songs.
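Modularity and flexibility are computed over time-resolved functional-connectivity graphs. The toy sketch below illustrates both quantities on simulated data with networkx; the windowing, threshold, and community-detection method are illustrative choices, and true flexibility is usually computed via multilayer community detection rather than the crude label-change proxy used here.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

rng = np.random.default_rng(1)
n_regions, n_windows, win_len = 20, 10, 30
ts = rng.standard_normal((n_regions, n_windows * win_len))  # simulated BOLD

labels = np.zeros((n_windows, n_regions), dtype=int)
for w in range(n_windows):
    corr = np.corrcoef(ts[:, w * win_len:(w + 1) * win_len])
    np.fill_diagonal(corr, 0.0)                       # drop self-connections
    G = nx.from_numpy_array(np.abs(corr) * (np.abs(corr) > 0.3))  # threshold
    comms = greedy_modularity_communities(G)
    print("window", w, "modularity:", modularity(G, comms))
    for c, nodes in enumerate(comms):
        labels[w, list(nodes)] = c

# Crude flexibility proxy: fraction of window-to-window label changes per node
flexibility = (labels[1:] != labels[:-1]).mean(axis=0)
print("mean flexibility:", flexibility.mean())
```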
Affiliation(s)
- Melia E. Bonomo: Department of Physics and Astronomy, Rice University, Houston, TX, USA; Center for Theoretical Biological Physics, Rice University, Houston, TX, USA
- J. Todd Frazier: Center for Performing Arts Medicine, Houston Methodist Hospital, Houston, TX, USA
- Christof Karmonik: Center for Performing Arts Medicine, Houston Methodist Hospital, Houston, TX, USA; MRI Core, Houston Methodist Research Institute, Houston, TX, USA; Department of Radiology, Weill Cornell Medical College, New York, NY, USA
8
Bálint A, Wimmer W, Caversaccio M, Weder S. Neural Activity during Audiovisual Speech Processing: Protocol for a Functional Neuroimaging Study. JMIR Res Protoc 2022; 11:e38407. PMID: 35727624; PMCID: PMC9239541; DOI: 10.2196/38407.
Abstract
Background Functional near-infrared spectroscopy (fNIRS) studies have demonstrated associations between hearing outcomes after cochlear implantation and plastic brain changes. However, inconsistent results make it difficult to draw conclusions. A major problem is that many variables need to be controlled. To gain further understanding, careful preparation and planning of such a functional neuroimaging task is key. Objective Using fNIRS, our main objective is to develop a well-controlled audiovisual speech comprehension task to study brain activation in individuals with normal hearing and hearing impairment (including cochlear implant users). The task should be derived from clinically established tests, induce maximal cortical activation, provide optimal coverage of relevant brain regions, and be reproducible by other research groups. Methods The protocol will consist of a 5-minute resting state and 2 stimulation periods of 12 minutes each. During the stimulation periods, 13-second video recordings of the clinically established Oldenburg Sentence Test (OLSA) will be presented. Stimuli will be presented in 4 different modalities: (1) speech in quiet, (2) speech in noise, (3) visual only (ie, lipreading), and (4) audiovisual speech. Each stimulus type will be repeated 10 times in a counterbalanced block design. Interactive question windows will monitor speech comprehension during the task. After the measurement, we will perform a 3D scan to digitize optode positions and verify the covered anatomical locations. Results This paper reports the study protocol. Enrollment for the study started in August 2021. We expect to publish our first results by the end of 2022. Conclusions The proposed audiovisual speech comprehension task will help elucidate the neural correlates of speech understanding. The comprehensive study has the potential to provide information beyond conventional clinical standards about the underlying plastic brain changes in a person with hearing impairment. It will facilitate more precise indication criteria for cochlear implantation and better planning of rehabilitation. International Registered Report Identifier (IRRID) DERR1-10.2196/38407
Affiliation(s)
- András Bálint: Department of Otorhinolaryngology, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Hearing Research Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Wilhelm Wimmer: Department of Otorhinolaryngology, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Hearing Research Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Marco Caversaccio: Department of Otorhinolaryngology, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Hearing Research Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Stefan Weder: Department of Otorhinolaryngology, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Hearing Research Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
9
Heald SLM, Van Hedger SC, Veillette J, Reis K, Snyder JS, Nusbaum HC. Going Beyond Rote Auditory Learning: Neural Patterns of Generalized Auditory Learning. J Cogn Neurosci 2022; 34:425-444. PMID: 34942645; PMCID: PMC8832160; DOI: 10.1162/jocn_a_01805.
Abstract
The ability to generalize across specific experiences is vital for the recognition of new patterns, especially in speech perception, given acoustic-phonetic pattern variability. Indeed, behavioral research has demonstrated that listeners are able, via a process of generalized learning, to leverage their experience of past words said by a difficult-to-understand talker to improve their understanding of new words said by that talker. Here, we examine differences in neural responses to generalized versus rote learning in auditory cortical processing by training listeners to understand a novel synthetic talker. Using a pretest-posttest design with EEG, participants were trained using either (1) a large inventory of words in which no words were repeated across the experiment (generalized learning) or (2) a small inventory of words in which words were repeated (rote learning). Analysis of long-latency auditory evoked potentials at pretest and posttest revealed that rote and generalized learning both produced rapid changes in auditory processing, yet the nature of these changes differed. Generalized learning was marked by an amplitude reduction in the N1-P2 complex and by the presence of a late negativity wave in the auditory evoked potential following training; rote learning was marked only by temporally later scalp topography differences. The early N1-P2 change, found only for generalized learning, is consistent with an active processing account of speech perception, which proposes that the ability to rapidly adjust to the specific vocal characteristics of a new talker (for which rote learning is rare) relies on attentional mechanisms to selectively modify early auditory processing sensitivity.
10
Zachlod D, Kedo O, Amunts K. Anatomy of the temporal lobe: From macro to micro. Handb Clin Neurol 2022; 187:17-51. PMID: 35964970; DOI: 10.1016/b978-0-12-823493-8.00009-2.
Abstract
The temporal cortex encompasses a large number of different areas, ranging from the six-layered isocortex to the allocortex. The areas support auditory, visual, and language processing, as well as emotions and memory. The primary auditory cortex is found at the Heschl gyri, which develop early in ontogeny together with the Sylvian fissure, a deep and characteristic fissure that separates the temporal lobe from the parietal and frontal lobes. Gyri and sulci as well as brain areas vary between brains and between hemispheres, partly linked to the functional organization of language and lateralization. Interindividual variability in anatomy often makes direct comparison between different brains challenging in structure-function analyses, but this can be addressed by applying the cytoarchitectonic probability maps of the Julich-Brain atlas. We review the macroanatomy of the temporal lobe, its variability and asymmetry at the macro- and microlevel, discuss the relationship to brain areas and their microstructure, and emphasize the advantage of a multimodal approach to addressing temporal lobe organization. We review recent data from combined cytoarchitectonic and molecular architectonic studies of temporal areas and provide links to their function.
Affiliation(s)
- Daniel Zachlod: Institute of Neuroscience and Medicine, INM-1, Research Centre Juelich, Juelich, Germany
- Olga Kedo: Institute of Neuroscience and Medicine, INM-1, Research Centre Juelich, Juelich, Germany
- Katrin Amunts: Institute of Neuroscience and Medicine, INM-1, Research Centre Juelich, Juelich, Germany; C&O Vogt Institute for Brain Research, University Hospital Düsseldorf, Medical Faculty, Heinrich-Heine University, Düsseldorf, Germany
11
An WW, Nelson CA, Wilkinson CL. Neural response to repeated auditory stimuli and its association with early language ability in male children with Fragile X syndrome. Front Integr Neurosci 2022; 16:987184. PMID: 36452884; PMCID: PMC9702328; DOI: 10.3389/fnint.2022.987184.
Abstract
Background Fragile X syndrome (FXS) is the most prevalent form of inherited intellectual disability and is commonly associated with autism. Previous studies have linked the structural and functional alterations in FXS with impaired sensory processing and sensory hypersensitivity, which may hinder the early development of cognitive functions such as language comprehension. In this study, we compared the P1 response of the auditory evoked potential and its habituation to repeated auditory stimuli in male children (2-7 years old) with and without FXS, and examined their association with clinical measures in these two groups. Methods We collected high-density electroencephalography (EEG) data in an auditory oddball paradigm from 12 male children with FXS and 11 age- and sex-matched typically developing (TD) children. After standardized EEG pre-processing, we conducted a spatial principal component (PC) analysis and identified two major PCs: a frontal PC and a temporal PC. Within each PC, we compared the P1 amplitude and inter-trial phase coherence (ITPC) between the two groups, and performed a series of linear regression analyses to study the association between these EEG measures and several clinical measures, including assessment scores for language abilities, non-verbal skills, and sensory hypersensitivity. Results At the temporal PC, both early and late standard stimuli evoked a larger P1 response in FXS compared to TD participants. For temporal ITPC, the TD group showed greater habituation than the FXS group. However, neither group showed significant habituation of the frontal or temporal P1 response. Despite this lack of habituation, exploratory analysis of brain-behavior associations showed that, within the FXS group, a reduced frontal P1 response to late standard stimuli and increased frontal P1 habituation were both associated with better language scores. Conclusion We identified P1 amplitude and ITPC in the temporal region as a contrasting EEG phenotype between the FXS and TD groups. However, only the frontal P1 response and habituation were associated with language measures. Larger longitudinal studies are required to determine whether these EEG measures could be used as biomarkers for language development in patients with FXS.
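Inter-trial phase coherence (ITPC) is the length of the mean resultant phase vector across trials. A minimal NumPy/SciPy sketch on simulated data follows; in practice the signal is first narrow-band filtered, and this is not the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import hilbert

def itpc(trials):
    """Inter-trial phase coherence per time point: 1 = perfectly
    phase-locked across trials, 0 = random phase. trials: trials x time."""
    phases = np.angle(hilbert(trials, axis=-1))
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

# Simulated: 50 trials of a 10 Hz response with small phase jitter plus noise
rng = np.random.default_rng(2)
t = np.arange(0, 1.0, 1 / 250.0)
jitter = rng.uniform(0, 0.5, size=(50, 1))             # radians
trials = np.sin(2 * np.pi * 10 * t + jitter) + rng.standard_normal((50, t.size))
print(itpc(trials).mean())   # higher than for pure noise trials
```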
Affiliation(s)
- Winko W An: Division of Developmental Medicine, Boston Children's Hospital, Boston, MA, United States; Translational Neuroscience Center, Boston Children's Hospital, Boston, MA, United States; Harvard Medical School, Boston, MA, United States
- Charles A Nelson: Division of Developmental Medicine, Boston Children's Hospital, Boston, MA, United States; Harvard Medical School, Boston, MA, United States; Harvard Graduate School of Education, Cambridge, MA, United States
- Carol L Wilkinson: Division of Developmental Medicine, Boston Children's Hospital, Boston, MA, United States; Harvard Medical School, Boston, MA, United States
12
A distributed network supports spatiotemporal cerebral dynamics of visual naming. Clin Neurophysiol 2021; 132:2948-2958. DOI: 10.1016/j.clinph.2021.09.003.
Abstract
OBJECTIVE Cerebral spatiotemporal dynamics of visual naming were investigated in epilepsy patients undergoing stereo-electroencephalography (SEEG) monitoring. METHODS Brain networks were defined by the Parcel-Activation-Resection-Symptom matching (PARS) approach, by matching high-gamma (50-150 Hz) modulations (HGM) in neuroanatomic parcels during visual naming with neuropsychological outcomes after resection/ablation of those parcels. Brain parcels in which >50% of electrode contacts simultaneously showed significant HGM were aligned to delineate the spatiotemporal course of naming-related HGM. RESULTS In 41 epilepsy patients, neuroanatomic parcels showed a sequential yet temporally overlapping HGM course during visual naming. From the bilateral occipital lobes, HGM became increasingly left-lateralized, coursing through the limbic system. Bilateral superior temporal HGM was noted around response time, and right frontal HGM thereafter. Correlations between resected/ablated parcels and post-surgical neuropsychological outcomes showed specific regional groupings. CONCLUSIONS Convergence of data from the spatiotemporal course of HGM during visual naming, and the functional role of specific parcels inferred from neuropsychological deficits after resection/ablation of those parcels, supports a model with six cognitive subcomponents of visual naming having overlapping temporal profiles. SIGNIFICANCE The cerebral substrates supporting visual naming are bilaterally distributed, with relative hemispheric contribution dependent on cognitive demands at a specific time. The PARS approach can be extended to study other cognitive and functional brain networks.
13
Kiremitçi I, Yilmaz Ö, Çelik E, Shahdloo M, Huth AG, Çukur T. Attentional Modulation of Hierarchical Speech Representations in a Multitalker Environment. Cereb Cortex 2021; 31:4986-5005. PMID: 34115102; PMCID: PMC8491717; DOI: 10.1093/cercor/bhab136.
Abstract
Humans are remarkably adept at listening to a desired speaker in a crowded environment while filtering out nontarget speakers in the background. Attention is key to solving this difficult cocktail-party task, yet a detailed characterization of attentional effects on speech representations is lacking. It remains unclear at what levels of speech features, and to what extent, attentional modulation occurs in each brain area during the cocktail-party task. To address these questions, we recorded whole-brain blood-oxygen-level-dependent (BOLD) responses while subjects either passively listened to single-speaker stories or selectively attended to a male or a female speaker in temporally overlaid stories in separate experiments. Spectral, articulatory, and semantic models of the natural stories were constructed. Intrinsic selectivity profiles were identified via voxelwise models fit to passive-listening responses. Attentional modulations were then quantified based on model predictions for attended and unattended stories in the cocktail-party task. We find that attention causes broad modulations at multiple levels of speech representations, growing stronger toward later stages of processing, and that unattended speech is represented up to the semantic level in parabelt auditory cortex. These results provide insights into the attentional mechanisms that underlie the ability to selectively listen to a desired speaker in noisy multispeaker environments.
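Voxelwise encoding models of this kind are commonly fit with regularized linear regression. The sketch below illustrates the general logic on simulated data: fit models on passive listening, then compare prediction accuracy for attended versus unattended stimulus features; the feature spaces, regularization, and attention index here are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n_time, n_feat, n_vox = 500, 20, 100
X_passive = rng.standard_normal((n_time, n_feat))          # e.g., semantic features
Y_passive = X_passive @ rng.standard_normal((n_feat, n_vox)) \
            + rng.standard_normal((n_time, n_vox))

# Fit voxelwise models on passive listening (intrinsic selectivity step)
model = Ridge(alpha=10.0).fit(X_passive, Y_passive)

def pred_corr(model, X, Y):
    """Per-voxel correlation between predicted and measured responses."""
    P = model.predict(X)
    P, Y = P - P.mean(0), Y - Y.mean(0)
    return (P * Y).sum(0) / (np.linalg.norm(P, axis=0) * np.linalg.norm(Y, axis=0))

# Hypothetical cocktail-party data: responses track the attended features
X_att, X_unatt = rng.standard_normal((2, n_time, n_feat))
Y_task = X_att @ model.coef_.T + rng.standard_normal((n_time, n_vox))
modulation = pred_corr(model, X_att, Y_task) - pred_corr(model, X_unatt, Y_task)
print(modulation.mean())     # > 0: representation biased toward attended speech
```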
Affiliation(s)
- Ibrahim Kiremitçi: Neuroscience Program, Sabuncu Brain Research Center, Bilkent University, Ankara TR-06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara TR-06800, Turkey
- Özgür Yilmaz: National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara TR-06800, Turkey; Department of Electrical and Electronics Engineering, Bilkent University, Ankara TR-06800, Turkey
- Emin Çelik: Neuroscience Program, Sabuncu Brain Research Center, Bilkent University, Ankara TR-06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara TR-06800, Turkey
- Mo Shahdloo: National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara TR-06800, Turkey; Department of Experimental Psychology, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford OX3 9DU, UK
- Alexander G Huth: Department of Neuroscience, The University of Texas at Austin, Austin, TX 78712, USA; Department of Computer Science, The University of Texas at Austin, Austin, TX 78712, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94702, USA
- Tolga Çukur: Neuroscience Program, Sabuncu Brain Research Center, Bilkent University, Ankara TR-06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara TR-06800, Turkey; Department of Electrical and Electronics Engineering, Bilkent University, Ankara TR-06800, Turkey; Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94702, USA
14
Khalighinejad B, Patel P, Herrero JL, Bickel S, Mehta AD, Mesgarani N. Functional characterization of human Heschl's gyrus in response to natural speech. Neuroimage 2021; 235:118003. PMID: 33789135; PMCID: PMC8608271; DOI: 10.1016/j.neuroimage.2021.118003.
Abstract
Heschl's gyrus (HG) is a brain area that includes the primary auditory cortex in humans. Due to the limitations in obtaining direct neural measurements from this region during naturalistic speech listening, the functional organization and the role of HG in speech perception remain uncertain. Here, we used intracranial EEG to directly record neural activity in HG in eight neurosurgical patients as they listened to continuous speech stories. We studied the spatial distribution of acoustic tuning and the organization of linguistic feature encoding. We found a main gradient of change from posteromedial to anterolateral parts of HG: along this gradient, we observed a decrease in frequency and temporal modulation tuning and an increase in phonemic representation, speaker normalization, speech sensitivity, and response latency. We did not observe a difference between the two brain hemispheres. These findings reveal a functional role for HG in processing and transforming simple to complex acoustic features and inform neurophysiological models of speech processing in the human auditory cortex.
Affiliation(s)
- Bahar Khalighinejad: Mortimer B. Zuckerman Brain Behavior Institute, Columbia University, New York, NY, United States; Department of Electrical Engineering, Columbia University, New York, NY, United States
- Prachi Patel: Mortimer B. Zuckerman Brain Behavior Institute, Columbia University, New York, NY, United States; Department of Electrical Engineering, Columbia University, New York, NY, United States
- Jose L. Herrero: Hofstra Northwell School of Medicine, Manhasset, NY, United States; The Feinstein Institutes for Medical Research, Manhasset, NY, United States
- Stephan Bickel: Hofstra Northwell School of Medicine, Manhasset, NY, United States; The Feinstein Institutes for Medical Research, Manhasset, NY, United States
- Ashesh D. Mehta: Hofstra Northwell School of Medicine, Manhasset, NY, United States; The Feinstein Institutes for Medical Research, Manhasset, NY, United States
- Nima Mesgarani (corresponding author): Mortimer B. Zuckerman Brain Behavior Institute, Columbia University, New York, NY, United States; Department of Electrical Engineering, Columbia University, New York, NY, United States
15
Abstract
OBJECTIVES Functional near-infrared spectroscopy (fNIRS) is a brain imaging technique particularly suitable for hearing studies. However, the nature of fNIRS responses to auditory stimuli presented at different stimulus intensities is not well understood. In this study, we investigated whether fNIRS response amplitude was better predicted by stimulus properties (intensity) or individually perceived attributes (loudness). DESIGN Twenty-two young adults were included in this experimental study. Four different stimulus intensities of a broadband noise were used as stimuli. First, loudness estimates for each stimulus intensity were measured for each participant. Then, the 4 stimulation intensities were presented in counterbalanced order while recording hemoglobin saturation changes from cortical auditory brain areas. The fNIRS response was analyzed in a general linear model design, using 3 different regressors: a non-modulated, an intensity-modulated, and a loudness-modulated regressor. RESULTS Higher-intensity stimuli resulted in higher-amplitude fNIRS responses. The relationship between stimulus intensity and fNIRS response amplitude was better explained by a regressor based on individually estimated loudness than by a regressor modulated by stimulus intensity alone. CONCLUSIONS Brain activation in response to different stimulus intensities is more reliant upon individual loudness sensation than on physical stimulus properties. Therefore, in measurements using different auditory stimulus intensities or subjective hearing parameters, loudness estimates should be examined when interpreting results.
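The modulated regressors differ only in how the stimulus boxcar is scaled before convolution with a hemodynamic response function. Below is a hypothetical sketch of building intensity- and loudness-modulated regressors; the HRF, block timings, intensities, and sone values are assumed for illustration, not taken from the study.

```python
import numpy as np
from scipy.stats import gamma

def hrf(t):
    """Canonical double-gamma hemodynamic response function (a common choice)."""
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

fs, dur = 10.0, 200.0                      # sampling rate (Hz), run length (s)
t = np.arange(0, dur, 1 / fs)
onsets, block = [20, 70, 120, 170], 10.0   # block onsets and duration (s)
intensity = [55, 65, 75, 85]               # dB SPL (assumed values)
loudness = [8, 15, 30, 55]                 # individual sone estimates (assumed)

def modulated_regressor(weights):
    """Boxcar scaled per block, convolved with the HRF, then z-scored."""
    boxcar = np.zeros_like(t)
    for onset, w in zip(onsets, weights):
        boxcar[(t >= onset) & (t < onset + block)] = w
    reg = np.convolve(boxcar, hrf(np.arange(0, 30, 1 / fs)))[: t.size]
    return (reg - reg.mean()) / reg.std()

# Two competing regressors for the GLM; a real design matrix would also
# include a constant term and nuisance regressors.
X = np.column_stack([modulated_regressor(intensity), modulated_regressor(loudness)])
print(X.shape)
```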
16
Regev M, Halpern AR, Owen AM, Patel AD, Zatorre RJ. Mapping Specific Mental Content during Musical Imagery. Cereb Cortex 2021; 31:3622-3640. PMID: 33749742; DOI: 10.1093/cercor/bhab036.
Abstract
Humans can mentally represent auditory information without an external stimulus, but the specificity of these internal representations remains unclear. Here, we asked how similar the temporally unfolding neural representations of imagined music are to those formed during the original perceived experience. We also tested whether rhythmic motion can influence the neural representation of music during imagery, as it does during perception. Participants first memorized six 1-min-long instrumental musical pieces with high accuracy. Functional MRI data were collected during: 1) silent imagery of melodies to the beat of a visual metronome; 2) the same task while also tapping to the beat; and 3) passive listening. During imagery, inter-subject correlation analysis showed that melody-specific temporal response patterns were reinstated in right associative auditory cortices. When tapping accompanied imagery, the melody-specific neural patterns were reinstated in more extensive temporal-lobe regions bilaterally. These results indicate that the specific contents of conscious experience are encoded similarly during imagery and perception in the dynamic activity of auditory cortices. Furthermore, rhythmic motion can enhance the reinstatement of neural patterns associated with the experience of complex sounds, in keeping with models of motor-to-sensory influences in auditory processing.
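Inter-subject correlation (ISC) quantifies how similarly a time course unfolds across listeners, commonly in a leave-one-out scheme. A minimal sketch on simulated data follows; it illustrates the general idea, not the authors' exact implementation.

```python
import numpy as np

def inter_subject_correlation(data):
    """Leave-one-out ISC: correlate each subject's regional time course
    with the mean of all other subjects'. data: subjects x time."""
    iscs = []
    for s in range(data.shape[0]):
        others = np.delete(data, s, axis=0).mean(axis=0)
        iscs.append(np.corrcoef(data[s], others)[0, 1])
    return np.array(iscs)

# Simulated: a shared melody-specific response plus subject-specific noise
rng = np.random.default_rng(4)
shared = rng.standard_normal(300)
data = shared + 0.8 * rng.standard_normal((12, 300))
print(inter_subject_correlation(data).mean())
```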
Affiliation(s)
- Mor Regev: Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada; International Laboratory for Brain, Music and Sound Research, Montreal, QC H2V 2J2, Canada; Centre for Research in Language, Brain, and Music, Montreal, QC H3A 1E3, Canada
- Andrea R Halpern: Department of Psychology, Bucknell University, Lewisburg, PA 17837, USA
- Adrian M Owen: Brain and Mind Institute, Department of Psychology and Department of Physiology and Pharmacology, Western University, London, ON N6A 5B7, Canada; Canadian Institute for Advanced Research, Brain, Mind, and Consciousness Program
- Aniruddh D Patel: Canadian Institute for Advanced Research, Brain, Mind, and Consciousness Program; Department of Psychology, Tufts University, Medford, MA 02155, USA
- Robert J Zatorre: Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada; International Laboratory for Brain, Music and Sound Research, Montreal, QC H2V 2J2, Canada; Centre for Research in Language, Brain, and Music, Montreal, QC H3A 1E3, Canada; Canadian Institute for Advanced Research, Brain, Mind, and Consciousness Program
17
Bourke JD, Todd J. Acoustics versus linguistics? Context is part and parcel to lateralized processing of the parts and parcels of speech. Laterality 2021; 26:725-765. PMID: 33726624; DOI: 10.1080/1357650x.2021.1898415.
Abstract
The purpose of this review is to provide an accessible exploration of key considerations of lateralization in speech and non-speech perception, using clear and defined language. From these considerations, the primary arguments for each side of the linguistics-versus-acoustics debate are outlined and explored in the context of emerging integrative theories. This theoretical approach entails the perspective that linguistic and acoustic features differentially contribute to leftward bias, depending on the given context. Such contextual factors include stimulus parameters, variables of stimulus presentation (e.g., noise/silence and monaural/binaural), and variance among individuals (sex, handedness, age, and behavioural ability). Discussion of these factors and their interactions also outlines the variables that require consideration when developing and reviewing the methodology of acoustic and linguistic laterality studies. Thus, there are three primary aims in the present paper: (1) to provide the reader with key theoretical perspectives from the acoustics/linguistics debate and a synthesis of the two viewpoints, (2) to highlight key caveats for generalizing findings regarding predominant models of speech laterality, and (3) to provide a practical guide for methodological control using predominant behavioural measures (i.e., gap detection and dichotic listening tasks) and/or neurophysiological measures (i.e., mismatch negativity) of speech laterality.
Affiliation(s)
- Jesse D Bourke: School of Psychology, University Drive, Callaghan, NSW 2308, Australia
- Juanita Todd: School of Psychology, University Drive, Callaghan, NSW 2308, Australia
18
Behler O, Uppenkamp S. Contextual effects on loudness judgments for sounds with continuous changes of intensity are reflected in nonauditory areas. Hum Brain Mapp 2021; 42:1742-1757. PMID: 33544429; PMCID: PMC7978131; DOI: 10.1002/hbm.25325.
Abstract
Psychoacoustic research suggests that judgments of perceived loudness change differ significantly between sounds with continuous increases and decreases of acoustic intensity, often referred to as "up-ramps" and "down-ramps." The magnitude and direction of this difference, in turn, appears to depend on focused attention and the specific task performed by the listeners. This has led to the suspicion that cognitive processes play an important role in the development of the observed context effects. The present study addressed this issue by exploring neural correlates of context-dependent loudness judgments. Normal-hearing listeners continuously judged the loudness of complex-tone sequences which slowly changed in level over time while auditory fMRI was performed. Regression models that included information either about presented sound levels or about individual loudness judgments were used to predict activation throughout the brain. Our psychoacoustical data confirmed robust effects of the direction of intensity change on loudness judgments. Specifically, stimuli were judged softer when following a down-ramp, and louder in the context of an up-ramp. Levels and loudness estimates significantly predicted activation in several brain areas, including auditory cortex. However, only activation in nonauditory regions was more accurately predicted by context-dependent loudness estimates as compared with sound levels, particularly in the orbitofrontal cortex and medial temporal areas. These findings support the idea that cognitive aspects contribute to the generation of context effects with respect to continuous loudness judgments.
Affiliation(s)
- Oliver Behler: Medical Physics and Cluster of Excellence "Hearing4all", Department of Medical Physics and Acoustics, Faculty VI Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Stefan Uppenkamp: Medical Physics and Cluster of Excellence "Hearing4all", Department of Medical Physics and Acoustics, Faculty VI Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
19
Talker familiarity and the accommodation of talker variability. Atten Percept Psychophys 2021; 83:1842-1860. PMID: 33398658; DOI: 10.3758/s13414-020-02203-y.
Abstract
A fundamental problem in speech perception is how (or whether) listeners accommodate variability in the way talkers produce speech. One view of the way listeners cope with this variability is that talker differences are normalized: a mapping between talker-specific characteristics and phonetic categories is computed such that speech is recognized in the context of the talker's vocal characteristics. Consistent with this view, listeners process speech more slowly when the talker changes randomly than when the talker remains constant. An alternative view is that speech perception is based on talker-specific auditory exemplars in memory, clustered around linguistic categories, that allow talker-independent perception. Consistent with this view, listeners become more efficient at talker-specific phonetic processing after voice identification training. We asked whether phonetic efficiency would increase with talker familiarity by testing listeners with extremely familiar talkers (family members), newly familiar talkers (based on laboratory training), and unfamiliar talkers. We also asked whether familiarity would reduce the need for normalization. As predicted, phonetic efficiency (word recognition in noise) increased with familiarity (unfamiliar < trained-on < family). However, we observed a constant processing cost for talker changes, even for pairs of family members. We discuss how normalization and exemplar theories might account for these results, and the constraints the results impose on theoretical accounts of phonetic constancy.
20
Kraft JN, O'Shea A, Albizu A, Evangelista ND, Hausman HK, Boutzoukas E, Nissim NR, Van Etten EJ, Bharadwaj PK, Song H, Smith SG, Porges E, DeKosky S, Hishaw GA, Wu S, Marsiske M, Cohen R, Alexander GE, Woods AJ. Structural Neural Correlates of Double Decision Performance in Older Adults. Front Aging Neurosci 2020; 12:278. [PMID: 33117145 PMCID: PMC7493680 DOI: 10.3389/fnagi.2020.00278] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2019] [Accepted: 08/11/2020] [Indexed: 11/13/2022] Open
Abstract
Speed of processing is a cognitive domain that encompasses the speed at which an individual can perceive a given stimulus, interpret the information, and produce a correct response. Speed of processing has been shown to decline more rapidly than other cognitive domains in aging populations, suggesting that this domain is particularly vulnerable to cognitive aging (Chee et al., 2009). However, given the heterogeneity of neuropsychological measures used to assess the domains underpinning speed of processing, a diffuse pattern of brain regions has been implicated. The current study investigates the structural neural correlates of speed of processing by assessing cortical volume and speed of processing scores on the POSIT Double Decision task in a healthy older adult population (N = 186; mean age = 71.70 ± 5.32 years). T1-weighted structural images were collected on a 3T Siemens scanner. Reduced cortical thickness in right temporal, posterior frontal, parietal, and occipital lobe structures was significantly associated with poorer Double Decision scores. Notably, these regions include the lateral orbitofrontal gyrus, precentral gyrus, superior, transverse, and inferior temporal gyri, temporal pole, insula, parahippocampal gyrus, fusiform gyrus, lingual gyrus, superior and inferior parietal gyri, and lateral occipital gyrus. These findings suggest that speed of processing performance is associated with a wide array of cortical regions, each providing unique contributions to performance on the Double Decision task.
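A minimal sketch of the kind of structure-behavior association implied here, regressing a processing-speed score on regional cortical thickness with age as a covariate (all data are simulated; the sample parameters follow the abstract, but the coefficients and noise are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 186  # sample size reported in the abstract

# Simulated stand-ins: regional cortical thickness (mm), age (years), and a
# Double Decision-style score where higher values mean poorer performance.
thickness = rng.normal(2.5, 0.2, n)
age = rng.normal(71.7, 5.3, n)
score = 3.0 - 0.8 * thickness + 0.02 * age + rng.normal(0, 0.3, n)

# OLS with age as a covariate: does thinner cortex predict poorer scores?
X = np.column_stack([np.ones(n), thickness, age])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
resid = score - X @ beta
dof = n - X.shape[1]
se = np.sqrt(np.diag((resid @ resid / dof) * np.linalg.inv(X.T @ X)))
print(f"thickness beta = {beta[1]:.3f}, t({dof}) = {beta[1] / se[1]:.2f}")
```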
Affiliation(s)
- Jessica N Kraft
- Center for Cognitive Aging and Memory Clinical Translational Research, McKnight Brain Institute, University of Florida, Gainesville, FL, United States; Department of Neuroscience, College of Medicine, University of Florida, Gainesville, FL, United States
- Andrew O'Shea
- Center for Cognitive Aging and Memory Clinical Translational Research, McKnight Brain Institute, University of Florida, Gainesville, FL, United States; Department of Clinical and Health Psychology, College of Public Health and Health Professions, University of Florida, Gainesville, FL, United States
- Alejandro Albizu
- Center for Cognitive Aging and Memory Clinical Translational Research, McKnight Brain Institute, University of Florida, Gainesville, FL, United States; Department of Neuroscience, College of Medicine, University of Florida, Gainesville, FL, United States
- Nicole D Evangelista
- Center for Cognitive Aging and Memory Clinical Translational Research, McKnight Brain Institute, University of Florida, Gainesville, FL, United States; Department of Clinical and Health Psychology, College of Public Health and Health Professions, University of Florida, Gainesville, FL, United States
- Hanna K Hausman
- Center for Cognitive Aging and Memory Clinical Translational Research, McKnight Brain Institute, University of Florida, Gainesville, FL, United States; Department of Clinical and Health Psychology, College of Public Health and Health Professions, University of Florida, Gainesville, FL, United States
- Emanuel Boutzoukas
- Center for Cognitive Aging and Memory Clinical Translational Research, McKnight Brain Institute, University of Florida, Gainesville, FL, United States
- Nicole R Nissim
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Emily J Van Etten
- Brain Imaging, Behavior and Aging Laboratory, Department of Psychology and Evelyn F. McKnight Brain Institute, University of Arizona, Tucson, AZ, United States
- Pradyumna K Bharadwaj
- Brain Imaging, Behavior and Aging Laboratory, Department of Psychology and Evelyn F. McKnight Brain Institute, University of Arizona, Tucson, AZ, United States
- Hyun Song
- Brain Imaging, Behavior and Aging Laboratory, Department of Psychology and Evelyn F. McKnight Brain Institute, University of Arizona, Tucson, AZ, United States
- Samantha G Smith
- Brain Imaging, Behavior and Aging Laboratory, Department of Psychology and Evelyn F. McKnight Brain Institute, University of Arizona, Tucson, AZ, United States
- Eric Porges
- Center for Cognitive Aging and Memory Clinical Translational Research, McKnight Brain Institute, University of Florida, Gainesville, FL, United States; Department of Clinical and Health Psychology, College of Public Health and Health Professions, University of Florida, Gainesville, FL, United States
- Steven DeKosky
- Center for Cognitive Aging and Memory Clinical Translational Research, McKnight Brain Institute, University of Florida, Gainesville, FL, United States; Department of Neurology, College of Medicine, University of Florida, Gainesville, FL, United States
- Georg A Hishaw
- Department of Psychiatry, Neuroscience and Physiological Sciences Graduate Interdisciplinary Programs, and BIO5 Institute, University of Arizona and Arizona Alzheimer's Consortium, Tucson, AZ, United States
- Samuel Wu
- Department of Biostatistics, College of Public Health and Health Professions, University of Florida, Gainesville, FL, United States
- Michael Marsiske
- Center for Cognitive Aging and Memory Clinical Translational Research, McKnight Brain Institute, University of Florida, Gainesville, FL, United States; Department of Clinical and Health Psychology, College of Public Health and Health Professions, University of Florida, Gainesville, FL, United States
- Ronald Cohen
- Center for Cognitive Aging and Memory Clinical Translational Research, McKnight Brain Institute, University of Florida, Gainesville, FL, United States; Department of Clinical and Health Psychology, College of Public Health and Health Professions, University of Florida, Gainesville, FL, United States
- Gene E Alexander
- Brain Imaging, Behavior and Aging Laboratory, Department of Psychology and Evelyn F. McKnight Brain Institute, University of Arizona, Tucson, AZ, United States; Department of Psychiatry, Neuroscience and Physiological Sciences Graduate Interdisciplinary Programs, and BIO5 Institute, University of Arizona and Arizona Alzheimer's Consortium, Tucson, AZ, United States
- Adam J Woods
- Center for Cognitive Aging and Memory Clinical Translational Research, McKnight Brain Institute, University of Florida, Gainesville, FL, United States; Department of Neuroscience, College of Medicine, University of Florida, Gainesville, FL, United States; Department of Clinical and Health Psychology, College of Public Health and Health Professions, University of Florida, Gainesville, FL, United States
21
Besle J, Mougin O, Sánchez-Panchuelo RM, Lanting C, Gowland P, Bowtell R, Francis S, Krumbholz K. Is Human Auditory Cortex Organization Compatible With the Monkey Model? Contrary Evidence From Ultra-High-Field Functional and Structural MRI. Cereb Cortex 2020; 29:410-428. [PMID: 30357410 PMCID: PMC6294415 DOI: 10.1093/cercor/bhy267] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2017] [Indexed: 11/14/2022] Open
Abstract
It is commonly assumed that the human auditory cortex is organized similarly to that of macaque monkeys, where the primary region, or "core," is elongated parallel to the tonotopic axis (main direction of tonotopic gradients), and subdivided across this axis into up to 3 distinct areas (A1, R, and RT), with separate, mirror-symmetric tonotopic gradients. This assumption, however, has not been tested until now. Here, we used high-resolution ultra-high-field (7 T) magnetic resonance imaging (MRI) to delineate the human core and map tonotopy in 24 individual hemispheres. In each hemisphere, we assessed tonotopic gradients using principled, quantitative analysis methods, and delineated the core using 2 independent (functional and structural) MRI criteria. Our results indicate that, contrary to macaques, the human core is elongated perpendicular rather than parallel to the main tonotopic axis, and that this axis contains no more than 2 mirror-reversed gradients within the core region. Previously suggested homologies between these gradients and areas A1 and R in macaques were not supported. Our findings suggest fundamental differences in auditory cortex organization between humans and macaques.
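A toy illustration of gradient-based tonotopic analysis, estimating the local direction of a best-frequency map so that a mirror reversal shows up as a flip in gradient angle (the map is synthetic and the analysis schematic; this is not the authors' quantitative method):

```python
import numpy as np

# Synthetic best-frequency (BF) map on a flattened cortical patch: one
# low-to-high gradient followed by its mirror reversal, as in adjacent fields.
ny, nx = 32, 64
x = np.linspace(0, 2, nx)
bf_log = np.tile(1.0 - np.abs(x - 1.0), (ny, 1)) * 3 + 2   # log2(kHz), V-shaped

# Local gradient direction of log-BF at every vertex; a mirror reversal
# appears as a ~180 degree flip in gradient angle across the field border.
gy, gx = np.gradient(bf_log)
angle = np.degrees(np.arctan2(gy, gx))
print("mean angle, field 1:", round(float(angle[:, : nx // 2].mean()), 1))
print("mean angle, field 2:", round(float(angle[:, nx // 2 :].mean()), 1))
```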
Affiliation(s)
- Julien Besle
- Medical Research Council Institute of Hearing Research, School of Medicine, University of Nottingham, University Park, Nottingham, UK; Department of Psychology, American University of Beirut, Riad El-Solh, Beirut, Lebanon
- Olivier Mougin
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, University Park, Nottingham, UK
- Rosa-María Sánchez-Panchuelo
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, University Park, Nottingham, UK
- Cornelis Lanting
- Medical Research Council Institute of Hearing Research, School of Medicine, University of Nottingham, University Park, Nottingham, UK; Department of Otorhinolaryngology, Radboud University Medical Center, University of Nijmegen, Nijmegen, Netherlands
- Penny Gowland
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, University Park, Nottingham, UK
- Richard Bowtell
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, University Park, Nottingham, UK
- Susan Francis
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, University Park, Nottingham, UK
- Katrin Krumbholz
- Medical Research Council Institute of Hearing Research, School of Medicine, University of Nottingham, University Park, Nottingham, UK
22
Behler O, Uppenkamp S. Activation in human auditory cortex in relation to the loudness and unpleasantness of low-frequency and infrasound stimuli. PLoS One 2020; 15:e0229088. [PMID: 32084171 PMCID: PMC7034801 DOI: 10.1371/journal.pone.0229088] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2019] [Accepted: 01/29/2020] [Indexed: 11/18/2022] Open
Abstract
Low-frequency sound (LFS) and infrasound (IS) are controversially discussed as potential causes of annoyance and distress experienced by many people. However, the perception mechanisms for IS in the human auditory system are not yet completely understood. In the present study, sinusoids at 32 Hz (at the lower limit of melodic pitch for tonal stimulation) and at 8 Hz (IS range) were presented to a group of 20 normal-hearing subjects, using monaural stimulation via a loudspeaker sound source coupled to the ear canal by a long silicone rubber tube. Each participant attended two experimental sessions. In the first session, participants performed a categorical loudness scaling procedure as well as an unpleasantness rating task in a sound booth. In the second session, the loudness scaling procedure was repeated while brain activation was measured using functional magnetic resonance imaging (fMRI). Subsequently, activation data were collected for the respective stimuli presented at fixed levels adjusted to the individual loudness judgments. Silent trials were included as a baseline condition. Our results indicate that the brain regions involved in processing LFS and IS are similar to those for sounds in the typical audio frequency range, i.e., mainly primary and secondary auditory cortex (AC). In spite of large variation across listeners in judgments of loudness and unpleasantness, neural correlates of these interindividual differences could not yet be identified. Still, for individual listeners, fMRI activation in the AC was more closely related to individual perception than to the physical stimulus level.
Affiliation(s)
- Oliver Behler
- Medizinische Physik, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Stefan Uppenkamp
- Medizinische Physik, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4All, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
23
Motomura E, Inui K, Kawano Y, Nishihara M, Okada M. Effects of Sound-Pressure Change on the 40 Hz Auditory Steady-State Response and Change-Related Cerebral Response. Brain Sci 2019; 9:brainsci9080203. [PMID: 31426410 PMCID: PMC6721352 DOI: 10.3390/brainsci9080203] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2019] [Revised: 08/07/2019] [Accepted: 08/13/2019] [Indexed: 12/19/2022] Open
Abstract
The auditory steady-state response (ASSR) elicited by a periodic sound stimulus is a neural oscillation, recorded here by magnetoencephalography (MEG), that is phase-locked to the repeated sound stimuli. The ASSR phase deviates after an abrupt change in a feature of the periodic sound stimulus and then returns to its steady-state value. An abrupt change also elicits an MEG component peaking at approximately 100-180 ms (called "Change-N1m"). We investigated whether both the ASSR phase deviation and the Change-N1m were affected by the magnitude of a change in sound pressure. The ASSR and Change-N1m to 40 Hz click-trains (1000 ms duration, 70 dB), with and without an abrupt change (±5, ±10, or ±15 dB), were recorded in ten healthy subjects. We used the source strength waveforms obtained with a two-dipole model to measure the ASSR phase deviation and Change-N1m values (peak amplitude and latency). As the magnitude of change increased, Change-N1m increased in amplitude and decreased in latency. Similarly, the ASSR phase deviation depended on the magnitude of the sound-pressure change. Thus, we suspect that both Change-N1m and the ASSR phase deviation reflect the sensitivity of the brain's neural change-detection system.
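A minimal sketch of how a phase deviation at the steady-state frequency can be tracked, assuming a simulated 40 Hz source waveform with a transient phase shift after a stimulus change (window length, noise level, and the shape of the deviation are arbitrary choices, not the authors' parameters):

```python
import numpy as np

rng = np.random.default_rng(2)
fs, f0 = 1000, 40                        # sampling rate and ASSR frequency (Hz)
t = np.arange(0, 1.0, 1 / fs)

# Toy source waveform: steady 40 Hz oscillation whose phase transiently
# deviates after an abrupt stimulus change at t = 0.5 s, then recovers.
dphi = 0.6 * np.exp(-((t - 0.7) ** 2) / 0.005) * (t > 0.5)
sig = np.cos(2 * np.pi * f0 * t + dphi) + 0.3 * rng.normal(size=t.size)

# Sliding-window phase estimate at exactly 40 Hz (one DFT bin per window).
win = int(0.1 * fs)                      # 100 ms = 4 full cycles of 40 Hz
phases = []
for start in range(0, t.size - win, win // 2):
    seg = sig[start : start + win]
    coef = np.sum(seg * np.exp(-2j * np.pi * f0 * np.arange(win) / fs))
    phases.append(np.angle(coef))
print(np.round(np.unwrap(phases), 2))    # bump around t = 0.7 s, then recovery
```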
Affiliation(s)
- Eishi Motomura
- Department of Neuropsychiatry, Mie University Graduate School of Medicine, Tsu 514-8507, Japan
- Koji Inui
- Department of Functioning and Disability, Institute for Developmental Research, Aichi Human Service Center, Kasugai 480-0392, Japan
- Yasuhiro Kawano
- Department of Neuropsychiatry, Mie University Graduate School of Medicine, Tsu 514-8507, Japan
- Makoto Nishihara
- Multidisciplinary Pain Center, Aichi Medical University, Nagakute 480-1195, Japan
- Motohiro Okada
- Department of Neuropsychiatry, Mie University Graduate School of Medicine, Tsu 514-8507, Japan
24
Wikman P, Rinne T, Petkov CI. Reward cues readily direct monkeys' auditory performance resulting in broad auditory cortex modulation and interaction with sites along cholinergic and dopaminergic pathways. Sci Rep 2019; 9:3055. [PMID: 30816142 PMCID: PMC6395775 DOI: 10.1038/s41598-019-38833-y] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2018] [Accepted: 12/28/2018] [Indexed: 11/18/2022] Open
Abstract
In natural settings, the prospect of reward often influences the focus of our attention, but how cognitive and motivational systems influence sensory cortex is not well understood. Moreover, challenges in training nonhuman animals on cognitive tasks complicate cross-species comparisons and the interpretation of results on the neurobiological bases of cognition. Incentivized attention tasks could expedite training and help evaluate the impact of attention on sensory cortex. Here we develop an Incentivized Attention Paradigm (IAP) and use it to show that macaque monkeys readily learn to use auditory or visual reward cues, which drastically influence their performance within a simple auditory task. Next, this paradigm was used with functional neuroimaging to measure activation modulation in the monkey auditory cortex. The results show modulation of extensive auditory cortical regions throughout primary and non-primary areas which, although a hallmark of attentional modulation in human auditory cortex, has not been studied or observed as broadly in prior data from nonhuman animals. Psycho-physiological interactions were identified between the observed auditory cortex effects and regions including basal forebrain sites along acetylcholinergic and dopaminergic pathways. The findings reveal the impact of, and the regional interactions engaged by, an incentivized, attention-engaging auditory task in the primate brain.
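A schematic of the psycho-physiological interaction (PPI) regression mentioned above: the interaction regressor is the product of a centered task regressor and a seed-region time course (all data simulated; real PPI analyses form the interaction on the deconvolved neural signal and reconvolve it with the HRF, a step omitted here):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
seed_ts = rng.normal(size=n)                       # seed (auditory cortex) signal
task = ((np.arange(n) // 30) % 2).astype(float)    # reward vs. baseline blocks
task_c = task - task.mean()

# The PPI term is the elementwise product of the centered task regressor and
# the seed time course; its beta indexes task-dependent coupling.
ppi = task_c * seed_ts

# Toy target region that couples with the seed only during reward blocks.
target = 0.8 * ppi + 0.3 * seed_ts + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), task_c, seed_ts, ppi])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
print(f"PPI beta = {beta[3]:.2f} (task-dependent coupling with the seed)")
```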
Affiliation(s)
- Patrik Wikman
- Department of Psychology and Logopedics, University of Helsinki, 00014, Helsinki, Finland
- Teemu Rinne
- Turku Brain and Mind Center, Department of Clinical Medicine, University of Turku, 20014, Turku, Finland
- Christopher I Petkov
- Institute of Neuroscience, Newcastle University, NE1 7RU, Newcastle upon Tyne, United Kingdom; Centre for Behaviour and Evolution, Newcastle University, NE1 7RU, Newcastle upon Tyne, United Kingdom
25
Whitehead JC, Armony JL. Singing in the brain: Neural representation of music and voice as revealed by fMRI. Hum Brain Mapp 2018; 39:4913-4924. [PMID: 30120854 DOI: 10.1002/hbm.24333] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2018] [Revised: 05/25/2018] [Accepted: 07/15/2018] [Indexed: 12/13/2022] Open
Abstract
The ubiquity of music across cultures as a means of emotional expression, and its proposed evolutionary relation to speech, have motivated researchers to attempt a characterization of its neural representation. Several neuroimaging studies have reported that specific regions in the anterior temporal lobe respond more strongly to music than to other auditory stimuli, including spoken voice. Nonetheless, because most studies have employed instrumental music, which has important acoustic distinctions from the human voice, questions remain as to the specificity of the observed "music-preferred" areas. Here, we sought to address this issue by testing 24 healthy young adults with fast, high-resolution fMRI, recording neural responses to a large and varied set of musical stimuli which, critically, included a cappella singing as well as purely instrumental excerpts. Our results confirmed that music, vocal or instrumental, preferentially engaged regions in the superior temporal gyrus (STG), particularly the anterior planum polare, bilaterally. In contrast, human voice, whether spoken or sung, more strongly activated a large area along the superior temporal sulcus. Findings were consistent between univariate and multivariate analyses, as well as with the use of a "silent" sparse acquisition sequence that minimizes any potential influence of scanner noise on the resulting activations. Activity in music-preferred regions could not be accounted for by any basic acoustic parameter tested, suggesting that these areas integrate, likely in a nonlinear fashion, a combination of acoustic attributes that together result in the perceived musicality of the stimuli, consistent with proposed hierarchical processing of complex auditory information within the temporal lobes.
Affiliation(s)
- Jocelyne C Whitehead
- Douglas Mental Health University Institute, Verdun, Canada; BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Canada; Integrated Program in Neuroscience, McGill University, Montreal, Canada
- Jorge L Armony
- Douglas Mental Health University Institute, Verdun, Canada; BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Canada; Department of Psychiatry, McGill University, Montreal, Canada
26
Riecke L, Peters JC, Valente G, Poser BA, Kemper VG, Formisano E, Sorger B. Frequency-specific attentional modulation in human primary auditory cortex and midbrain. Neuroimage 2018; 174:274-287. [DOI: 10.1016/j.neuroimage.2018.03.038] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2017] [Revised: 03/15/2018] [Accepted: 03/17/2018] [Indexed: 12/24/2022] Open
27
Fisher JM, Dick FK, Levy DF, Wilson SM. Neural representation of vowel formants in tonotopic auditory cortex. Neuroimage 2018; 178:574-582. [PMID: 29860083 DOI: 10.1016/j.neuroimage.2018.05.072] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2018] [Revised: 05/29/2018] [Accepted: 05/30/2018] [Indexed: 11/25/2022] Open
Abstract
Speech sounds are encoded by distributed patterns of activity in bilateral superior temporal cortex. However, it is unclear whether speech sounds are topographically represented in cortex, or which acoustic or phonetic dimensions might be spatially mapped. Here, using functional MRI, we investigated the potential spatial representation of vowels, which are largely distinguished from one another by the frequencies of their first and second formants, i.e. peaks in their frequency spectra. This allowed us to generate clear hypotheses about the representation of specific vowels in tonotopic regions of auditory cortex. We scanned participants as they listened to multiple natural tokens of the vowels [ɑ] and [i], which we selected because their first and second formants overlap minimally. Formant-based regions of interest were defined for each vowel based on spectral analysis of the vowel stimuli and independently acquired tonotopic maps for each participant. We found that perception of [ɑ] and [i] yielded differential activation of tonotopic regions corresponding to formants of [ɑ] and [i], such that each vowel was associated with increased signal in tonotopic regions corresponding to its own formants. This pattern was observed in Heschl's gyrus and the superior temporal gyrus, in both hemispheres, and for both the first and second formants. Using linear discriminant analysis of mean signal change in formant-based regions of interest, the identity of untrained vowels was predicted with ∼73% accuracy. Our findings show that cortical encoding of vowels is scaffolded on tonotopy, a fundamental organizing principle of auditory cortex that is not language-specific.
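A minimal sketch of the classification analysis described, applying linear discriminant analysis with leave-one-out cross-validation to per-trial signal change in two hypothetical formant-based ROIs (the data, class means, and noise level are simulated and arbitrary, not the authors' values):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(4)

# Simulated mean signal change per trial in two formant-based ROIs:
# [a] trials drive the [a]-formant ROI harder, [i] trials the converse.
n_per_class = 40
a_trials = rng.normal([0.6, 0.2], 0.25, (n_per_class, 2))
i_trials = rng.normal([0.2, 0.6], 0.25, (n_per_class, 2))
X = np.vstack([a_trials, i_trials])
y = np.array(["a"] * n_per_class + ["i"] * n_per_class)

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {acc.mean():.2f}")
```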
Affiliation(s)
- Julia M Fisher
- Department of Linguistics, University of Arizona, Tucson, AZ, USA; Statistics Consulting Laboratory, BIO5 Institute, University of Arizona, Tucson, AZ, USA
- Frederic K Dick
- Department of Psychological Sciences, Birkbeck College, University of London, UK; Birkbeck-UCL Center for Neuroimaging, London, UK; Department of Experimental Psychology, University College London, UK
- Deborah F Levy
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Stephen M Wilson
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
28
Weder S, Zhou X, Shoushtarian M, Innes-Brown H, McKay C. Cortical Processing Related to Intensity of a Modulated Noise Stimulus – a Functional Near-Infrared Study. J Assoc Res Otolaryngol 2018; 19:273-286. [PMID: 29633049 PMCID: PMC5962476 DOI: 10.1007/s10162-018-0661-0] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2017] [Accepted: 02/19/2018] [Indexed: 12/30/2022] Open
Abstract
Sound intensity is a key feature of auditory signals. A profound understanding of cortical processing of this feature is therefore highly desirable. This study investigates whether cortical functional near-infrared spectroscopy (fNIRS) signals reflect sound intensity changes and where on the brain cortex maximal intensity-dependent activations are located. The fNIRS technique is particularly suitable for this kind of hearing study, as it runs silently. Twenty-three normal hearing subjects were included and actively participated in a counterbalanced block design task. Four intensity levels of a modulated noise stimulus with long-term spectrum and modulation characteristics similar to speech were applied, evenly spaced from 15 to 90 dB SPL. Signals from auditory processing cortical fields were derived from a montage of 16 optodes on each side of the head. Results showed that fNIRS responses originating from auditory processing areas are highly dependent on sound intensity level: higher stimulation levels led to higher concentration changes. Caudal and rostral channels showed different waveform morphologies, reflecting specific cortical signal processing of the stimulus. Channels overlying the supramarginal and caudal superior temporal gyrus evoked a phasic response, whereas channels over Broca's area showed a broad tonic pattern. This data set can serve as a foundation for future auditory fNIRS research to develop the technique as a hearing assessment tool in the normal hearing and hearing-impaired populations.
Affiliation(s)
- Stefan Weder
- The Bionics Institute, East Melbourne, Australia; Department of ENT, Head and Neck Surgery, Inselspital, Bern University Hospital, Bern, Switzerland
- Xin Zhou
- The Bionics Institute, East Melbourne, Australia; Department of Medical Bionics, The University of Melbourne, Melbourne, Australia
- Hamish Innes-Brown
- The Bionics Institute, East Melbourne, Australia; Department of Medical Bionics, The University of Melbourne, Melbourne, Australia
- Colette McKay
- The Bionics Institute, East Melbourne, Australia; Department of Medical Bionics, The University of Melbourne, Melbourne, Australia
29
Oya H, Gander PE, Petkov CI, Adolphs R, Nourski KV, Kawasaki H, Howard MA, Griffiths TD. Neural phase locking predicts BOLD response in human auditory cortex. Neuroimage 2018; 169:286-301. [PMID: 29274745 PMCID: PMC6139034 DOI: 10.1016/j.neuroimage.2017.12.051] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2017] [Revised: 11/22/2017] [Accepted: 12/16/2017] [Indexed: 11/16/2022] Open
Abstract
Natural environments elicit both phase-locked and non-phase-locked neural responses to the stimulus in the brain. To date, the interpretation of the BOLD signal has been based on its association with the non-phase-locked power of high-frequency local field potentials (LFPs), or with the related spiking activity of single neurons or groups of neurons. Previous studies have not examined whether phase-locked responses predict the BOLD signal. We examined the relationship between the BOLD response and LFPs recorded from multiple corresponding points in the auditory cortex of the same nine human subjects, using amplitude-modulated pure-tone stimuli of a duration sufficient to analyze phase locking during the sustained period without contamination from the onset response. The results demonstrate that both phase locking at the modulation frequency and its harmonics, and the oscillatory power in the gamma/high-gamma bands, are required to predict the BOLD response. Biophysical models of BOLD signal generation in auditory cortex therefore require revision, incorporating both phase locking to rhythmic sensory stimuli and power changes in ensemble neural activity.
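A minimal sketch of quantifying phase locking at the modulation frequency across trials, using the phase-locking value (one common definition; the simulated LFP, trial count, and noise level are illustrative assumptions, not the authors' analysis):

```python
import numpy as np

rng = np.random.default_rng(5)
fs, fm, n_trials = 1000, 40, 60       # sampling rate, modulation freq, trials
t = np.arange(0, 0.5, 1 / fs)         # 500 ms sustained period per trial

# Toy LFP: a 40 Hz response with a consistent phase across trials, in noise.
trials = np.cos(2 * np.pi * fm * t + 1.0) + 2.0 * rng.normal(size=(n_trials, t.size))

# Phase-locking value (PLV) at fm: length of the mean unit phasor formed from
# each trial's Fourier phase at the modulation frequency.
n = t.size
coefs = trials @ np.exp(-2j * np.pi * fm * np.arange(n) / fs)  # DFT at fm
plv = np.abs(np.mean(coefs / np.abs(coefs)))
print(f"PLV at {fm} Hz across {n_trials} trials: {plv:.2f}")
```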
Affiliation(s)
- Hiroyuki Oya
- Department of Neurosurgery, Human Brain Research Laboratory, University of Iowa, Iowa City, IA 52252, USA
- Phillip E Gander
- Department of Neurosurgery, Human Brain Research Laboratory, University of Iowa, Iowa City, IA 52252, USA
- Ralph Adolphs
- Division of the Humanities and Social Sciences, California Institute of Technology, Pasadena, CA 91125, USA
- Kirill V Nourski
- Department of Neurosurgery, Human Brain Research Laboratory, University of Iowa, Iowa City, IA 52252, USA
- Hiroto Kawasaki
- Department of Neurosurgery, Human Brain Research Laboratory, University of Iowa, Iowa City, IA 52252, USA
- Matthew A Howard
- Department of Neurosurgery, Human Brain Research Laboratory, University of Iowa, Iowa City, IA 52252, USA
- Timothy D Griffiths
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, UK
30
Rinne T, Muers RS, Salo E, Slater H, Petkov CI. Functional Imaging of Audio-Visual Selective Attention in Monkeys and Humans: How do Lapses in Monkey Performance Affect Cross-Species Correspondences? Cereb Cortex 2018; 27:3471-3484. [PMID: 28419201 PMCID: PMC5654311 DOI: 10.1093/cercor/bhx092] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2016] [Indexed: 11/22/2022] Open
Abstract
The cross-species correspondences and differences in how attention modulates brain responses in humans and animal models are poorly understood. We trained 2 monkeys to perform an audio–visual selective attention task during functional magnetic resonance imaging (fMRI), rewarding them to attend to stimuli in one modality while ignoring those in the other. Monkey fMRI identified regions strongly modulated by auditory or visual attention. Surprisingly, auditory attention-related modulations were much more restricted in monkeys than humans performing the same tasks during fMRI. Further analyses ruled out trivial explanations, suggesting that labile selective-attention performance was associated with inhomogeneous modulations in wide cortical regions in the monkeys. The findings provide initial insights into how audio–visual selective attention modulates the primate brain, identify sources for “lost” attention effects in monkeys, and carry implications for modeling the neurobiology of human cognition with nonhuman animals.
Affiliation(s)
- Teemu Rinne
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto University School of Science, Espoo, Finland
- Ross S Muers
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK; Centre for Behaviour and Evolution, Newcastle University, Newcastle upon Tyne, UK
- Emma Salo
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Heather Slater
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK; Centre for Behaviour and Evolution, Newcastle University, Newcastle upon Tyne, UK
- Christopher I Petkov
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK; Centre for Behaviour and Evolution, Newcastle University, Newcastle upon Tyne, UK
31
The Encoding of Sound Source Elevation in the Human Auditory Cortex. J Neurosci 2018; 38:3252-3264. [PMID: 29507148 DOI: 10.1523/jneurosci.2530-17.2018] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2017] [Revised: 02/11/2018] [Accepted: 02/14/2018] [Indexed: 11/21/2022] Open
Abstract
Spatial hearing is a crucial capacity of the auditory system. While the encoding of horizontal sound direction has been extensively studied, very little is known about the representation of vertical sound direction in the auditory cortex. Using high-resolution fMRI, we measured voxelwise sound elevation tuning curves in human auditory cortex and show that sound elevation is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. We changed the ear shape of participants (male and female) with silicone molds for several days. This manipulation reduced or abolished the ability to discriminate sound elevation and flattened cortical tuning curves. Tuning curves recovered their original shape as participants adapted to the modified ears and regained elevation perception over time. These findings suggest that the elevation tuning observed in low-level auditory cortex did not arise from the physical features of the stimuli but is contingent on experience with spectral cues and covaries with the change in perception. One explanation for this observation may be that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation.SIGNIFICANCE STATEMENT This study addresses two fundamental questions about the brain representation of sensory stimuli: how the vertical spatial axis of auditory space is represented in the auditory cortex and whether low-level sensory cortex represents physical stimulus features or subjective perceptual attributes. Using high-resolution fMRI, we show that vertical sound direction is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. In addition, we demonstrate that the shape of these tuning functions is contingent on experience with spectral cues and covaries with the change in perception, which may indicate that the tuning functions in low-level auditory cortex underlie the perceived elevation of a sound source.
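A minimal sketch of fitting the kind of voxelwise elevation tuning function described (a broad component preferring lower elevations plus a narrow Gaussian tuned to one direction), using simulated responses and scipy's curve_fit; the model form, parameter values, and noise are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(6)
elevs = np.linspace(-45, 45, 13)            # tested elevations (degrees)

# Assumed tuning model: a broad component that decreases with elevation
# (preferring lower elevations) plus a narrow Gaussian at one direction.
def tuning(e, a, slope, b, mu, sigma):
    return a - slope * e + b * np.exp(-((e - mu) ** 2) / (2 * sigma ** 2))

true_resp = tuning(elevs, 1.0, 0.01, 0.8, 30.0, 8.0)
resp = true_resp + rng.normal(0, 0.05, elevs.size)  # one voxel's mean responses

p0 = [1.0, 0.0, 0.5, 20.0, 10.0]                    # rough initial guesses
popt, _ = curve_fit(tuning, elevs, resp, p0=p0, maxfev=10000)
print(f"narrow peak at {popt[3]:.1f} deg (sigma = {popt[4]:.1f} deg)")
```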
32
Riecke L, Peters JC, Valente G, Kemper VG, Formisano E, Sorger B. Frequency-Selective Attention in Auditory Scenes Recruits Frequency Representations Throughout Human Superior Temporal Cortex. Cereb Cortex 2018; 27:3002-3014. [PMID: 27230215 DOI: 10.1093/cercor/bhw160] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023] Open
Abstract
A sound of interest may be tracked amid other salient sounds by focusing attention on its characteristic features including its frequency. Functional magnetic resonance imaging findings have indicated that frequency representations in human primary auditory cortex (AC) contribute to this feat. However, attentional modulations were examined at relatively low spatial and spectral resolutions, and frequency-selective contributions outside the primary AC could not be established. To address these issues, we compared blood oxygenation level-dependent (BOLD) responses in the superior temporal cortex of human listeners while they identified single frequencies versus listened selectively for various frequencies within a multifrequency scene. Using best-frequency mapping, we observed that the detailed spatial layout of attention-induced BOLD response enhancements in primary AC follows the tonotopy of stimulus-driven frequency representations-analogous to the "spotlight" of attention enhancing visuospatial representations in retinotopic visual cortex. Moreover, using an algorithm trained to discriminate stimulus-driven frequency representations, we could successfully decode the focus of frequency-selective attention from listeners' BOLD response patterns in nonprimary AC. Our results indicate that the human brain facilitates selective listening to a frequency of interest in a scene by reinforcing the fine-grained activity pattern throughout the entire superior temporal cortex that would be evoked if that frequency was present alone.
Affiliation(s)
- Lars Riecke
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Judith C Peters
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands; Netherlands Institute for Neuroscience, Institute of the Royal Netherlands Academy of Arts and Sciences (KNAW), 1105 BA Amsterdam, The Netherlands
- Giancarlo Valente
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Valentin G Kemper
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Bettina Sorger
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
33
Chang KH, Thomas JM, Boynton GM, Fine I. Reconstructing Tone Sequences from Functional Magnetic Resonance Imaging Blood-Oxygen Level Dependent Responses within Human Primary Auditory Cortex. Front Psychol 2017; 8:1983. [PMID: 29184522 PMCID: PMC5694557 DOI: 10.3389/fpsyg.2017.01983] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2017] [Accepted: 10/30/2017] [Indexed: 01/12/2023] Open
Abstract
Here we show that, using functional magnetic resonance imaging (fMRI) blood-oxygen level dependent (BOLD) responses in human primary auditory cortex, it is possible to reconstruct the sequence of tones that a person has been listening to over time. First, we characterized the tonotopic organization of each subject’s auditory cortex by measuring auditory responses to randomized pure tone stimuli and modeling the frequency tuning of each fMRI voxel as a Gaussian in log frequency space. Then, we tested our model by examining its ability to work in reverse. Auditory responses were re-collected in the same subjects, except this time they listened to sequences of frequencies taken from simple songs (e.g., “Somewhere Over the Rainbow”). By finding the frequency that minimized the difference between the model’s prediction of BOLD responses and actual BOLD responses, we were able to reconstruct tone sequences, with mean frequency estimation errors of half an octave or less, and little evidence of systematic biases.
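A minimal sketch of the reconstruction logic: assume a Gaussian log-frequency tuning model per voxel, then invert it by searching for the candidate frequency whose predicted pattern best matches an observed pattern (all data are simulated; tuning centers, widths, and noise are arbitrary, and this is not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(7)
n_vox = 120
freqs = np.logspace(np.log10(100), np.log10(8000), 400)   # candidate tones (Hz)
log_f = np.log2(freqs)

# Forward model: each voxel responds as a Gaussian in log-frequency space.
centers = rng.uniform(log_f.min(), log_f.max(), n_vox)    # preferred log2(freq)
widths = rng.uniform(0.5, 1.5, n_vox)

def predict(lf):
    return np.exp(-((lf - centers) ** 2) / (2 * widths ** 2))

# Observe a noisy voxel pattern for a 440 Hz tone, then reconstruct the tone
# as the candidate frequency whose predicted pattern best matches it.
observed = predict(np.log2(440)) + rng.normal(0, 0.3, n_vox)
errs = [np.sum((predict(lf) - observed) ** 2) for lf in log_f]
best = freqs[int(np.argmin(errs))]
print(f"reconstructed tone: {best:.0f} Hz (true: 440 Hz)")
```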
Affiliation(s)
- Kelly H Chang
- Department of Psychology, University of Washington, Seattle, WA, United States
- Jessica M Thomas
- Department of Psychology, University of Washington, Seattle, WA, United States
- Geoffrey M Boynton
- Department of Psychology, University of Washington, Seattle, WA, United States
- Ione Fine
- Department of Psychology, University of Washington, Seattle, WA, United States
34
Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture. J Neurosci 2017; 37:12187-12201. [PMID: 29109238 PMCID: PMC5729191 DOI: 10.1523/jneurosci.1436-17.2017] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2017] [Revised: 10/04/2017] [Accepted: 10/06/2017] [Indexed: 11/21/2022] Open
Abstract
Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation—acoustic frequency—might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions.
35
Tonotopic organisation of the auditory cortex in sloping sensorineural hearing loss. Hear Res 2017; 355:81-96. [DOI: 10.1016/j.heares.2017.09.012] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/14/2017] [Revised: 07/28/2017] [Accepted: 09/23/2017] [Indexed: 01/09/2023]
36
Evidence for cue-independent spatial representation in the human auditory cortex during active listening. Proc Natl Acad Sci U S A 2017; 114:E7602-E7611. [PMID: 28827357 DOI: 10.1073/pnas.1707522114] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues, particularly interaural time and level differences (ITD and ILD), that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks, with targets presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and, critically, for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.
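A minimal sketch of the cross-cue classification logic: train a linear classifier on patterns evoked by one cue (ITD) and test it on patterns evoked by the other (ILD). The voxel patterns are simulated under the assumption of a shared spatial code; nothing here reflects the study's actual data or classifier settings:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(8)
n_vox, n_trials = 50, 80

# Toy voxel patterns for left- vs. right-lateralized sounds. An integrated
# spatial code implies a shared pattern axis whichever cue (ITD or ILD)
# produced the lateralization.
shared_axis = rng.normal(size=n_vox)

def make_patterns():
    side = np.repeat([1, -1], n_trials // 2)         # +1 = left, -1 = right
    X = np.outer(side, shared_axis) + 2.0 * rng.normal(size=(n_trials, n_vox))
    return X, side

X_itd, y_itd = make_patterns()
X_ild, y_ild = make_patterns()

# Cross-cue decoding: train on ITD trials, test on ILD trials.
clf = LinearSVC(C=1.0, max_iter=10000).fit(X_itd, y_itd)
print(f"cross-cue accuracy: {clf.score(X_ild, y_ild):.2f}")
```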
37
Nourski KV, Banks MI, Steinschneider M, Rhone AE, Kawasaki H, Mueller RN, Todd MM, Howard MA. Electrocorticographic delineation of human auditory cortical fields based on effects of propofol anesthesia. Neuroimage 2017; 152:78-93. [PMID: 28254512 PMCID: PMC5432407 DOI: 10.1016/j.neuroimage.2017.02.061] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2016] [Revised: 02/13/2017] [Accepted: 02/21/2017] [Indexed: 12/20/2022] Open
Abstract
The functional organization of human auditory cortex remains incompletely characterized. While the posteromedial two thirds of Heschl's gyrus (HG) is generally considered to be part of core auditory cortex, additional subdivisions of HG remain speculative. To further delineate the hierarchical organization of human auditory cortex, we investigated regional heterogeneity in the modulation of auditory cortical responses under varying depths of anesthesia induced by propofol. Non-invasive studies have shown that propofol differentially affects auditory cortical activity, with a greater impact on non-core areas. Subjects were neurosurgical patients undergoing removal of intracranial electrodes placed to identify epileptic foci. Stimuli were 50 Hz click trains, presented continuously during an awake baseline period and, subsequently, while propofol infusion was incrementally titrated to induce general anesthesia. Electrocorticographic recordings were made with depth electrodes implanted in HG and subdural grid electrodes implanted over superior temporal gyrus (STG). Depth of anesthesia was monitored using spectral entropy. Averaged evoked potentials (AEPs), frequency-following responses (FFRs), and high gamma (70-150 Hz) event-related band power were used to characterize auditory cortical activity. Based on the changes in AEPs and FFRs during the induction of anesthesia, posteromedial HG could be divided into two subdivisions. In the most posteromedial aspect of the gyrus, the earliest AEP deflections were preserved and FFRs increased during induction. In contrast, the remainder of the posteromedial HG exhibited attenuation of both the AEP and the FFR. The anterolateral HG exhibited weaker activation, characterized by broad, low-voltage AEPs and the absence of FFRs. Lateral STG exhibited limited activation by click trains, and FFRs there diminished during induction. Sustained high gamma activity was attenuated in the most posteromedial portion of HG, and was absent in all other regions. These differential patterns of auditory cortical activity during the induction of anesthesia may serve as useful physiological markers for field delineation. In this study, the posteromedial HG could be parcellated into at least two subdivisions. Preservation of the earliest AEP deflections and FFRs in the posteromedial HG likely reflects the persistence of feedforward synaptic activity generated by inputs from subcortical auditory pathways, including the medial geniculate nucleus.
Affiliation(s)
- Kirill V Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
- Matthew I Banks
- Department of Anesthesiology, University of Wisconsin-Madison, Madison, WI, USA
- Mitchell Steinschneider
- Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Ariane E Rhone
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
- Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
- Rashmi N Mueller
- Department of Anesthesia, The University of Iowa, Iowa City, IA, USA
- Michael M Todd
- Department of Anesthesia, The University of Iowa, Iowa City, IA, USA; Department of Anesthesiology, University of Minnesota, Minneapolis, MN, USA
- Matthew A Howard
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA
38
Active auditory experience in infancy promotes brain plasticity in Theta and Gamma oscillations. Dev Cogn Neurosci 2017; 26:9-19. [PMID: 28436834 PMCID: PMC6987829 DOI: 10.1016/j.dcn.2017.04.004] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2016] [Revised: 04/04/2017] [Accepted: 04/11/2017] [Indexed: 11/22/2022] Open
Abstract
Highlights
- Active acoustic experience (AEx) in infancy impacts cortical oscillations.
- AEx infants show left Theta- and Gamma-band activity to complex tone pairs.
- Passive and naïve infants yield less distinct, more bilateral responses.
Language acquisition in infants is driven by on-going neural plasticity that is acutely sensitive to environmental acoustic cues. Recent studies showed that attention-based experience with non-linguistic, temporally-modulated auditory stimuli sharpens cortical responses. A previous ERP study from this laboratory showed that interactive auditory experience via behavior-based feedback (AEx), over a 6-week period from 4- to 7-months-of-age, confers a processing advantage, compared to passive auditory exposure (PEx) or maturation alone (Naïve Control, NC). Here, we provide a follow-up investigation of the underlying neural oscillatory patterns in these three groups. In AEx infants, Standard stimuli with invariant frequency (STD) elicited greater Theta-band (4–6 Hz) activity in Right Auditory Cortex (RAC), as compared to NC infants, and Deviant stimuli with rapid frequency change (DEV) elicited larger responses in Left Auditory Cortex (LAC). PEx and NC counterparts showed less-mature bilateral patterns. AEx infants also displayed stronger Gamma (33–37 Hz) activity in the LAC during DEV discrimination, compared to NCs, while NC and PEx groups demonstrated bilateral activity in this band, if at all. This suggests that interactive acoustic experience with non-linguistic stimuli can promote a distinct, robust and precise cortical pattern during rapid auditory processing, perhaps reflecting mechanisms that support fine-tuning of early acoustic mapping.
39
Perrone-Capano C, Volpicelli F, di Porzio U. Biological bases of human musicality. Rev Neurosci 2017; 28:235-245. [PMID: 28107174 DOI: 10.1515/revneuro-2016-0046] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2016] [Accepted: 11/04/2016] [Indexed: 11/15/2022]
Abstract
Music is a universal language, present in all human societies. It pervades the lives of most human beings: it can recall memories and feelings of the past, exert positive effects on our mood, be strongly evocative and ignite intense emotions, and establish or strengthen social bonds. In this review, we summarize research and recent progress on the origins and neural substrates of human musicality, as well as the changes in brain plasticity elicited by listening to or performing music. Indeed, music improves performance in a number of cognitive tasks and may have beneficial effects on diseased brains. The emerging picture begins to unravel how and why particular brain circuits are affected by music. Numerous studies show that music affects emotions and mood, as it is strongly associated with the brain's reward system. We can therefore assume that an in-depth study of the relationship between music and the brain may help to shed light on how the mind works and how emotions arise, and may improve methods of music-based rehabilitation for people with neurological disorders. However, many facets of the mind-music connection remain to be explored and elucidated.
40
Wolak T, Cieśla K, Rusiniak M, Piłka A, Lewandowska M, Pluta A, Skarżyński H, Skarżyński PH. Influence of Acoustic Overstimulation on the Central Auditory System: A Functional Magnetic Resonance Imaging (fMRI) Study. Med Sci Monit 2016; 22:4623-4635. [PMID: 27893698 PMCID: PMC5132427 DOI: 10.12659/msm.897929] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
Background The goal of the fMRI experiment was to explore the involvement of central auditory structures in the pathomechanisms of a behaviorally manifested auditory temporary threshold shift (TTS) in humans. Material/Methods The material included 18 healthy volunteers with normal hearing. Subjects in the exposure group were presented with 15 min of binaural acoustic overstimulation with narrowband noise (3 kHz central frequency) at 95 dB(A). The control group was not exposed to noise but instead relaxed in silence. Auditory fMRI was performed in 1 session before and 3 sessions after acoustic overstimulation and involved 3.5–4.5 kHz sweeps. Results The outcomes of the study indicate a possible effect of acoustic overstimulation on central processing, with decreased brain responses to auditory stimulation up to 20 min after exposure to noise. The effect is already apparent in the primary auditory cortex. Decreased BOLD signal change can be due to increased excitation thresholds and/or increased spontaneous activity of auditory neurons throughout the auditory system. Conclusions The trial shows that fMRI can be a valuable tool in acoustic overstimulation studies but has to be used with caution and considered complementary to audiological measures. Further methodological improvements are needed to distinguish the effects of TTS and neuronal habituation to repetitive stimulation.
Affiliation(s)
- Tomasz Wolak
- Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw/Kajetany, Poland
- Katarzyna Cieśla
- Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw/Kajetany, Poland
- Mateusz Rusiniak
- Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw/Kajetany, Poland
- Adam Piłka
- Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw/Kajetany, Poland
- Monika Lewandowska
- Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw/Kajetany, Poland
- Agnieszka Pluta
- Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw/Kajetany, Poland
- Henryk Skarżyński
- Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw/Kajetany, Poland
- Piotr H Skarżyński
- Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw/Kajetany, Poland; Department of Heart Failure and Cardiac Rehabilitation, Medical University of Warsaw, Warsaw, Poland
41
Tuning to Binaural Cues in Human Auditory Cortex. J Assoc Res Otolaryngol 2016; 17:37-53. [PMID: 26466943 DOI: 10.1007/s10162-015-0546-4] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2015] [Accepted: 09/25/2015] [Indexed: 10/22/2022] Open
Abstract
Interaural level and time differences (ILD and ITD), the primary binaural cues for sound localization in azimuth, are known to modulate the tuned responses of neurons in mammalian auditory cortex (AC). The majority of these neurons respond best to cue values that favor the contralateral ear, such that contralateral bias is evident in the overall population response and thereby expected in population-level functional imaging data. Human neuroimaging studies, however, have not consistently found contralaterally biased binaural response patterns. Here, we used functional magnetic resonance imaging (fMRI) to parametrically measure ILD and ITD tuning in human AC. For ILD, contralateral tuning was observed, using both univariate and multivoxel analyses, in posterior superior temporal gyrus (pSTG) in both hemispheres. Response-ILD functions were U-shaped, revealing responsiveness to both contralateral and—to a lesser degree—ipsilateral ILD values, consistent with rate coding by unequal populations of contralaterally and ipsilaterally tuned neurons. In contrast, for ITD, univariate analyses showed modest contralateral tuning only in left pSTG, characterized by a monotonic response-ITD function. A multivoxel classifier, however, revealed ITD coding in both hemispheres. Although sensitivity to ILD and ITD was distributed in similar AC regions, the differently shaped response functions and different response patterns across hemispheres suggest that basic ILD and ITD processes are not fully integrated in human AC. The results support opponent-channel theories of ILD but not necessarily ITD coding, the latter of which may involve multiple types of representation that differ across hemispheres.
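For readers unfamiliar with the two cues, the sketch below imposes an ILD and an ITD on a diotic tone. The carrier frequency, the sign convention, and the whole-sample delay are simplifying assumptions for illustration, not details taken from the study.

```python
import numpy as np

fs = 48000
t = np.arange(int(0.5 * fs)) / fs
tone = np.sin(2 * np.pi * 500 * t)       # 500 Hz carrier (assumed)

def apply_ild_itd(sig, ild_db=0.0, itd_us=0.0, fs=48000):
    """Impose an ILD (dB) and ITD (microseconds) on a diotic signal.

    Positive values favour the right ear (an assumed convention); the
    whole-sample delay is a simplification, since sub-sample ITDs would
    need interpolation.
    """
    g = 10 ** (ild_db / 40)              # split the level difference across ears
    left, right = sig / g, sig * g
    shift = int(round(abs(itd_us) * 1e-6 * fs))
    if itd_us > 0:                       # right ear leads, so delay the left
        left = np.concatenate([np.zeros(shift), left[:left.size - shift]])
    elif itd_us < 0:
        right = np.concatenate([np.zeros(shift), right[:right.size - shift]])
    return np.stack([left, right], axis=1)

stereo = apply_ild_itd(tone, ild_db=10.0, itd_us=500.0)  # +10 dB, +500 us
```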
42
Intracortical depth analyses of frequency-sensitive regions of human auditory cortex using 7T fMRI. Neuroimage 2016; 143:116-127. [PMID: 27608603 DOI: 10.1016/j.neuroimage.2016.09.010] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2016] [Revised: 08/15/2016] [Accepted: 09/04/2016] [Indexed: 11/23/2022] Open
Abstract
Despite recent advances in auditory neuroscience, the exact functional organization of human auditory cortex (AC) has been difficult to investigate. Here, using reversals of tonotopic gradients as the test case, we examined whether human ACs can be more precisely mapped by avoiding signals caused by large draining vessels near the pial surface, which bias blood-oxygen-level-dependent (BOLD) signals away from the actual sites of neuronal activity. Using ultra-high field (7T) fMRI and cortical depth analysis techniques previously applied in visual cortices, we sampled 1 mm isotropic voxels from different depths of AC during narrow-band sound stimulation with biologically relevant temporal patterns. At the group level, analyses that considered voxels from all cortical depths, but excluded those intersecting the pial surface, showed (a) the greatest statistical sensitivity in contrasts between activations to high vs. low frequency sounds and (b) the highest inter-subject consistency of phase-encoded continuous tonotopy mapping. Analyses based solely on voxels intersecting the pial surface produced the least consistent group results, even when compared to analyses based solely on voxels intersecting the white-matter surface where both signal strength and within-subject statistical power are weakest. However, no evidence was found for reduced within-subject reliability in analyses considering the pial voxels only. Our group results could, thus, reflect improved inter-subject correspondence of high and low frequency gradients after the signals from voxels near the pial surface are excluded. Using tonotopy analyses as the test case, our results demonstrate that when the major physiological and anatomical biases imparted by the vasculature are controlled, functional mapping of human ACs becomes more consistent from subject to subject than previously thought.
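Phase-encoded tonotopy mapping of the kind referred to here has a compact core computation: each voxel's preferred frequency is read from the phase of the Fourier component at the sweep repetition frequency. The sketch below illustrates this on synthetic time series; the acquisition parameters and the log-spaced 0.2-8 kHz sweep are assumptions, not values from the paper.

```python
import numpy as np

n_vox, n_vols = 500, 240                   # voxels, volumes per run (assumed)
n_cycles = 8                               # sweep repetitions per run (assumed)
rng = np.random.default_rng(1)
ts = rng.standard_normal((n_vox, n_vols))  # stand-in for voxel time series

# Fourier component at the sweep repetition frequency: its phase encodes
# where in the sweep each voxel responded most strongly.
spectrum = np.fft.rfft(ts - ts.mean(axis=1, keepdims=True), axis=1)
phase = np.angle(spectrum[:, n_cycles])

# Map phase onto position within the sweep and then onto preferred
# frequency, assuming a log-spaced 0.2-8 kHz sweep (illustrative choice).
pos = (phase % (2 * np.pi)) / (2 * np.pi)
pref_freq_hz = 200 * (8000 / 200) ** pos
```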
43
Behler O, Uppenkamp S. The representation of level and loudness in the central auditory system for unilateral stimulation. Neuroimage 2016; 139:176-188. [PMID: 27318216 DOI: 10.1016/j.neuroimage.2016.06.025] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2016] [Revised: 05/24/2016] [Accepted: 06/14/2016] [Indexed: 10/21/2022] Open
Abstract
Loudness is the perceptual correlate of the physical intensity of a sound. However, loudness judgments depend on a variety of other variables and can vary considerably between individual listeners. While functional magnetic resonance imaging (fMRI) has been extensively used to characterize the neural representation of physical sound intensity in the human auditory system, only a few studies have also investigated brain activity in relation to individual loudness. The physiological correlate of loudness perception is not yet fully understood. The present study systematically explored the interrelation of sound pressure level, ear of entry, individual loudness judgments, and fMRI activation along different stages of the central auditory system and across hemispheres for a group of normal hearing listeners. 4-kHz bandpass-filtered noise stimuli were presented monaurally to each ear at levels from 37 to 97 dB SPL. One diotic condition and a silence condition were included as control conditions. The participants completed a categorical loudness scaling procedure with similar stimuli before auditory fMRI was performed. The relationships between brain activity, as inferred from blood oxygenation level dependent (BOLD) contrasts, and both sound level and loudness estimates were analyzed by means of functional activation maps and linear mixed effects models for various anatomically defined regions of interest in the ascending auditory pathway and in the cortex. Our findings are overall in line with the notion that fMRI activation in several regions within auditory cortex, as well as in certain stages of the ascending auditory pathway, might be a more direct linear reflection of perceived loudness than of sound pressure level. The results indicate distinct functional differences between midbrain and cortical areas as well as between specific regions within auditory cortex, suggesting a systematic hierarchy in terms of lateralization and the representation of level and loudness.
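The linear mixed-effects comparison described above can be sketched with statsmodels on synthetic data: a random intercept per listener, with level-based and loudness-based fixed effects fitted separately and compared by maximum likelihood. All numbers below are invented for illustration and do not reproduce the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
levels = np.tile(np.arange(37, 98, 10), 12).astype(float)  # dB SPL
subject = np.repeat(np.arange(12), 7)                      # 12 listeners (assumed)
loudness = 0.5 * levels + rng.normal(0, 3, levels.size)    # synthetic loudness (CU)
bold = 0.04 * loudness + rng.normal(0, 0.5, levels.size)   # synthetic ROI response

df = pd.DataFrame(dict(bold=bold, level=levels,
                       loudness=loudness, subject=subject))

# Random intercept per listener; ML (not REML) fits so the two models'
# log-likelihoods are comparable despite different fixed effects.
m_level = smf.mixedlm("bold ~ level", df, groups=df["subject"]).fit(reml=False)
m_loud = smf.mixedlm("bold ~ loudness", df, groups=df["subject"]).fit(reml=False)
print(m_level.llf, m_loud.llf)   # higher log-likelihood indicates the better fit
```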
Affiliation(s)
- Oliver Behler, Medizinische Physik, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany
- Stefan Uppenkamp, Medizinische Physik, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany; Cluster of Excellence Hearing4All, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany
44
Wikman PA, Vainio L, Rinne T. The effect of precision and power grips on activations in human auditory cortex. Front Neurosci 2015; 9:378. [PMID: 26528121 PMCID: PMC4606019 DOI: 10.3389/fnins.2015.00378] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2015] [Accepted: 09/28/2015] [Indexed: 11/23/2022] Open
Abstract
The neuroanatomical pathways interconnecting auditory and motor cortices play a key role in current models of human auditory cortex (AC). Evidently, auditory-motor interaction is important in speech and music production, but the significance of these cortical pathways in other auditory processing is not well known. We investigated the general effects of motor responding on AC activations to sounds during auditory and visual tasks (motor regions were not imaged). During all task blocks, subjects detected targets in the designated modality, reported the relative number of targets at the end of the block, and ignored the stimuli presented in the opposite modality. In each block, they were also instructed to respond to targets either using a precision grip, using a power grip, or giving no overt target responses. We found that motor responding strongly modulated AC activations. First, during both visual and auditory tasks, activations in widespread regions of AC decreased when subjects made precision and power grip responses to targets. Second, activations in AC were modulated by grip type during the auditory but not during the visual task. Further, these motor effects were distinct from the strong attention-related modulations in AC observed in the present study. These results are consistent with the idea that operations in AC are shaped by its connections with motor cortical regions.
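A compressed picture of the block-design analysis implied here: boxcar regressors for the precision-grip, power-grip, and no-response blocks, an ordinary least-squares fit to a voxel time course, and a grip-type contrast. The timings and amplitudes are assumptions, and HRF convolution is deliberately omitted for brevity; this is a sketch of the analysis logic, not the authors' pipeline.

```python
import numpy as np

n_vols, block = 180, 15                  # volumes and block length (assumed)
design = np.zeros((n_vols, 3))           # precision / power / no-response
order = np.tile(np.arange(3), n_vols // (3 * block))
for i, cond in enumerate(order):         # interleave the three block types
    design[i * block:(i + 1) * block, cond] = 1.0

rng = np.random.default_rng(3)
true_amp = np.array([1.0, 0.6, 0.2])     # invented condition amplitudes
y = design @ true_amp + rng.normal(0, 1, n_vols)   # synthetic AC time course

X = np.column_stack([design, np.ones(n_vols)])     # add an intercept column
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
grip_contrast = beta[0] - beta[1]        # precision minus power grip
```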
Affiliation(s)
- Patrik A Wikman, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Lari Vainio, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Teemu Rinne, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto University School of Science, Espoo, Finland
45
Schönwiesner M, Dechent P, Voit D, Petkov CI, Krumbholz K. Parcellation of Human and Monkey Core Auditory Cortex with fMRI Pattern Classification and Objective Detection of Tonotopic Gradient Reversals. Cereb Cortex 2015; 25:3278-89. [PMID: 24904067 PMCID: PMC4585487 DOI: 10.1093/cercor/bhu124] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
Auditory cortex (AC) contains several primary-like, or "core," fields, which receive thalamic input and project to non-primary "belt" fields. In humans, the organization and layout of core and belt auditory fields are still poorly understood, and most auditory neuroimaging studies rely on macroanatomical criteria, rather than functional localization of distinct fields. A myeloarchitectonic method has been suggested recently for distinguishing between core and belt fields in humans (Dick F, Tierney AT, Lutti A, Josephs O, Sereno MI, Weiskopf N. 2012. In vivo functional and myeloarchitectonic mapping of human primary auditory areas. J Neurosci. 32:16095-16105). We propose a marker for core AC based directly on functional magnetic resonance imaging (fMRI) data and pattern classification. We show that a portion of AC in Heschl's gyrus classifies sound frequency more accurately than other regions in AC. Using fMRI data from macaques, we validate that the region where frequency classification performance is significantly above chance overlaps core auditory fields, predominantly A1. Within this region, we measure tonotopic gradients and estimate the locations of the human homologues of the core auditory subfields A1 and R. Our results provide a functional rather than anatomical localizer for core AC. We posit that inter-individual variability in the layout of core AC might explain disagreements between results from previous neuroimaging and cytological studies.
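The proposed functional marker reduces to cross-validated pattern classification of sound frequency from local voxel patterns, with above-chance accuracy taken to indicate core AC. The sketch below shows that core computation with scikit-learn on synthetic patterns; trial counts, voxel counts, and effect size are invented.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_vox = 80, 60
freq_label = np.repeat([0, 1], n_trials // 2)      # low vs high frequency

# Synthetic patterns: a weak frequency-dependent component plus noise.
signal = np.outer(freq_label, rng.normal(0, 0.5, n_vox))
patterns = signal + rng.standard_normal((n_trials, n_vox))

acc = cross_val_score(SVC(kernel="linear"), patterns, freq_label, cv=5)
print("mean accuracy:", acc.mean())      # compare against chance (0.5)
```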
Affiliation(s)
- Marc Schönwiesner, Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada; Department of Psychology, University of Montreal, Montreal, Canada; Montreal Neurological Institute, McGill University, Montreal, Canada
- Peter Dechent, Department of Cognitive Neurology, MR-Research in Neurology and Psychiatry, University Medicine Göttingen, Göttingen, Germany
- Dirk Voit, Biomedical NMR Research GmbH, Max-Planck-Institute for Biophysical Chemistry, Göttingen, Germany
- Christopher I. Petkov, Institute of Neuroscience, Newcastle University Medical School, Newcastle upon Tyne, UK
46
Choudhury NA, Parascando JA, Benasich AA. Effects of Presentation Rate and Attention on Auditory Discrimination: A Comparison of Long-Latency Auditory Evoked Potentials in School-Aged Children and Adults. PLoS One 2015; 10:e0138160. [PMID: 26368126 PMCID: PMC4569142 DOI: 10.1371/journal.pone.0138160] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2015] [Accepted: 08/25/2015] [Indexed: 01/08/2023] Open
Abstract
Decoding human speech requires both perception and integration of brief, successive auditory stimuli that enter the central nervous system, as well as the allocation of attention to language-relevant signals. This study assesses the role of attention in processing rapid transient stimuli in adults and children. Cortical responses (EEG/ERPs), specifically mismatch negativity (MMN) responses, to paired tones (standard 100-100 Hz; deviant 100-300 Hz) separated by a 300, 70 or 10 ms silent gap (ISI) were recorded under Ignore and Attend conditions in 21 adults and 23 children (6-11 years old). In adults, an attention-related enhancement was found for all rate conditions and laterality effects (L>R) were observed. In children, 2 auditory discrimination-related peaks were identified from the difference wave (deviant-standard): an early peak (eMMN) at about 100-300 ms indexing sensory processing, and a later peak (LDN), at about 400-600 ms, thought to reflect reorientation to the deviant stimuli or "second-look" processing. Results revealed differing patterns of activation and attention modulation for the eMMN in children as compared to the MMN in adults: the eMMN had a more frontal topography than in adults, and attention played a significantly greater role in children's rate processing. The pattern of findings for the LDN was consistent with hypothesized mechanisms related to further processing of complex stimuli. The differences between eMMN and LDN observed here support the premise that separate cognitive processes and mechanisms underlie these ERP peaks. These findings are the first to show that the eMMN and LDN differ under different temporal and attentional conditions, and that a more complete understanding of children's responses to rapid successive auditory stimulation requires an examination of both peaks.
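The difference-wave analysis described above is straightforward to express in code: average the deviant and standard epochs, subtract, and search the eMMN and LDN windows for negative peaks. The epochs below are synthetic placeholders; the sampling rate is assumed, while the window edges follow the latencies quoted in the abstract.

```python
import numpy as np

fs = 500                                       # Hz, assumed sampling rate
t = np.arange(-0.1, 0.8, 1 / fs)               # epoch time axis in seconds
rng = np.random.default_rng(5)
std_epochs = rng.normal(0, 2, (200, t.size))   # trials x samples (synthetic)
dev_epochs = rng.normal(0, 2, (60, t.size))

diff = dev_epochs.mean(axis=0) - std_epochs.mean(axis=0)

def neg_peak(wave, t, lo, hi):
    """Latency and amplitude of the most negative point in [lo, hi] seconds."""
    win = (t >= lo) & (t <= hi)
    i = np.argmin(wave[win])
    return t[win][i], wave[win][i]

emmn_lat, emmn_amp = neg_peak(diff, t, 0.10, 0.30)   # early MMN window
ldn_lat, ldn_amp = neg_peak(diff, t, 0.40, 0.60)     # late LDN window
```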
Affiliation(s)
- Naseem A. Choudhury, Psychology, SSHS, Ramapo College of New Jersey, Mahwah, New Jersey, United States of America; Center for Molecular & Behavioral Neuroscience, Rutgers University, Newark, New Jersey, United States of America
- Jessica A. Parascando, Center for Molecular & Behavioral Neuroscience, Rutgers University, Newark, New Jersey, United States of America
- April A. Benasich, Center for Molecular & Behavioral Neuroscience, Rutgers University, Newark, New Jersey, United States of America
47
Cardin V, Smittenaar RC, Orfanidou E, Rönnberg J, Capek CM, Rudner M, Woll B. Differential activity in Heschl's gyrus between deaf and hearing individuals is due to auditory deprivation rather than language modality. Neuroimage 2015; 124:96-106. [PMID: 26348556 DOI: 10.1016/j.neuroimage.2015.08.073] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2015] [Revised: 08/23/2015] [Accepted: 08/24/2015] [Indexed: 10/23/2022] Open
Abstract
Sensory cortices undergo crossmodal reorganisation as a consequence of sensory deprivation. Congenital deafness in humans represents a particular case with respect to other types of sensory deprivation, because cortical reorganisation is not only a consequence of auditory deprivation, but also of language-driven mechanisms. Visual crossmodal plasticity has been found in secondary auditory cortices of deaf individuals, but it is still unclear if reorganisation also takes place in primary auditory areas, and how this relates to language modality and auditory deprivation. Here, we dissociated the effects of language modality and auditory deprivation on crossmodal plasticity in Heschl's gyrus as a whole, and in cytoarchitectonic region Te1.0 (likely to contain the core auditory cortex). Using fMRI, we measured the BOLD response to viewing sign language in congenitally or early deaf individuals with and without sign language knowledge, and in hearing controls. Results show that differences between hearing and deaf individuals are due to a reduction in activation caused by visual stimulation in the hearing group, which is more significant in Te1.0 than in Heschl's gyrus as a whole. Furthermore, differences between deaf and hearing groups are due to auditory deprivation, and there is no evidence that the modality of language used by deaf individuals contributes to crossmodal plasticity in Heschl's gyrus.
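The dissociation logic of this study can be mimicked with a small regression: if auditory deprivation rather than language modality drives the group difference, a "deaf" regressor should carry the effect on Te1.0 activation while a "signer" regressor should not. Group sizes and effect sizes in the sketch are invented to match the reported pattern by construction, purely for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
deaf = np.array([1.0] * 30 + [0.0] * 15)     # 30 deaf, 15 hearing (invented)
signer = np.array([1.0] * 15 + [0.0] * 30)   # 15 of the deaf group sign

# Synthetic Te1.0 activation: driven by deafness, not by sign knowledge.
bold = 0.8 * deaf + 0.0 * signer + rng.normal(0, 0.4, deaf.size)

df = pd.DataFrame(dict(bold=bold, deaf=deaf, signer=signer))
print(smf.ols("bold ~ deaf + signer", df).fit().summary())
```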
Affiliation(s)
- Velia Cardin, Deafness, Cognition and Language Research Centre, 49 Gordon Square, University College London, London WC1H 0BT, UK; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Rebecca C Smittenaar, Experimental Psychology, 26 Bedford Way, University College London, London WC1H 0AP, UK
- Eleni Orfanidou, Deafness, Cognition and Language Research Centre, 49 Gordon Square, University College London, London WC1H 0BT, UK; School of Psychology, University of Crete, Greece
- Jerker Rönnberg, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Cheryl M Capek, School of Psychological Sciences, University of Manchester, Manchester M13 9PL, UK
- Mary Rudner, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Bencie Woll, Deafness, Cognition and Language Research Centre, 49 Gordon Square, University College London, London WC1H 0BT, UK
48
Monaural and binaural contributions to interaural-level-difference sensitivity in human auditory cortex. Neuroimage 2015; 120:456-66. [PMID: 26163805 PMCID: PMC4589528 DOI: 10.1016/j.neuroimage.2015.07.007] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2015] [Revised: 06/08/2015] [Accepted: 07/03/2015] [Indexed: 11/20/2022] Open
Abstract
Whole-brain functional magnetic resonance imaging was used to measure blood-oxygenation-level-dependent (BOLD) responses in human auditory cortex (AC) to sounds with intensity varying independently in the left and right ears. Echoplanar images were acquired at 3 Tesla with sparse image acquisition once per 12-second block of sound stimulation. Combinations of binaural intensity and stimulus presentation rate were varied between blocks, and selected to allow measurement of response-intensity functions in three configurations: monaural 55–85 dB SPL, binaural 55–85 dB SPL with intensity equal in both ears, and binaural with average binaural level of 70 dB SPL and interaural level differences (ILD) ranging ±30 dB (i.e., favoring the left or right ear). Comparison of response functions equated for contralateral intensity revealed that BOLD-response magnitudes (1) generally increased with contralateral intensity, consistent with positive drive of the BOLD response by the contralateral ear, (2) were larger for contralateral monaural stimulation than for binaural stimulation, consistent with negative effects (e.g., inhibition) of ipsilateral input, which were strongest in the left hemisphere, and (3) also increased with ipsilateral intensity when contralateral input was weak, consistent with additional, positive, effects of ipsilateral stimulation. Hemispheric asymmetries in the spatial extent and overall magnitude of BOLD responses were generally consistent with previous studies demonstrating greater bilaterality of responses in the right hemisphere and stricter contralaterality in the left hemisphere. Finally, comparison of responses to fast (40/s) and slow (5/s) stimulus presentation rates revealed significant rate-dependent adaptation of the BOLD response that varied across ILD values.
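A minimal sketch of the kind of model these comparisons support: hemisphere response magnitude as a positive function of contralateral level plus a weaker, negative function of ipsilateral level, fitted by least squares. All values below are illustrative and not the study's data; the monaural conditions are crudely coded as 0 dB SPL in the ipsilateral column.

```python
import numpy as np

rng = np.random.default_rng(7)
contra = np.repeat([55.0, 65.0, 70.0, 75.0, 85.0], 8)            # dB SPL (invented)
ipsi = np.tile([0.0, 55.0, 65.0, 70.0, 75.0, 80.0, 85.0, 0.0], 5)

# Synthetic magnitudes: positive contralateral drive plus a weaker
# negative ipsilateral effect, the pattern the abstract describes.
bold = 0.03 * contra - 0.01 * ipsi + rng.normal(0, 0.2, contra.size)

X = np.column_stack([contra, ipsi, np.ones_like(contra)])
(w_contra, w_ipsi, intercept), *_ = np.linalg.lstsq(X, bold, rcond=None)
print(f"contralateral weight {w_contra:.3f}, ipsilateral weight {w_ipsi:.3f}")
```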
49
High-field fMRI reveals tonotopically-organized and core auditory cortex in the cat. Hear Res 2015; 325:1-11. [DOI: 10.1016/j.heares.2015.03.003] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/17/2014] [Revised: 01/26/2015] [Accepted: 03/05/2015] [Indexed: 01/12/2023]
50
Trapeau R, Schönwiesner M. Adaptation to shifted interaural time differences changes encoding of sound location in human auditory cortex. Neuroimage 2015; 118:26-38. [PMID: 26054873 DOI: 10.1016/j.neuroimage.2015.06.006] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2015] [Revised: 05/06/2015] [Accepted: 06/02/2015] [Indexed: 11/29/2022] Open
Abstract
The auditory system infers the location of sound sources from the processing of different acoustic cues. These cues change during development and when assistive hearing devices are worn. Previous studies have found behavioral recalibration to modified localization cues in human adults, but very little is known about the neural correlates and mechanisms of this plasticity. We equipped participants with digital devices worn in the ear canal, which allowed us to delay the sound input to one ear and thus modify interaural time differences, a major cue for horizontal sound localization. Participants wore the digital earplugs continuously for nine days while engaged in day-to-day activities. Daily psychoacoustical testing showed rapid recalibration to the manipulation and confirmed that adults can adapt to shifted interaural time differences in their daily multisensory environment. High-resolution functional MRI scans performed before and after recalibration showed that recalibration was accompanied by changes in hemispheric lateralization of auditory cortex activity. These changes corresponded to a shift in spatial coding of sound direction comparable to the observed behavioral recalibration. Fitting the imaging results with a model of auditory spatial processing also revealed small shifts in voxel-wise spatial tuning within each hemisphere.
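The lateralization change implied above can be sketched as a simple index, (R - L) / (R + L), compared before and after adaptation. The ROI estimates below are synthetic stand-ins, and the index itself is one common choice rather than necessarily the measure used in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
pre_left = rng.normal(1.0, 0.1, 20)      # synthetic ROI responses, 20 scans
pre_right = rng.normal(1.4, 0.1, 20)
post_left = rng.normal(1.2, 0.1, 20)
post_right = rng.normal(1.2, 0.1, 20)

def lat_index(left, right):
    """(R - L) / (R + L): positive values mean right-lateralized activity."""
    return (right - left) / (right + left)

shift = (lat_index(post_left, post_right).mean()
         - lat_index(pre_left, pre_right).mean())
print(f"change in lateralization index: {shift:+.3f}")
```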
Affiliation(s)
- Régis Trapeau, International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, Université de Montréal, Montreal, QC, Canada; Centre for Research on Brain, Language and Music (CRBLM), McGill University, Montreal, QC, Canada
- Marc Schönwiesner, International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, Université de Montréal, Montreal, QC, Canada; Centre for Research on Brain, Language and Music (CRBLM), McGill University, Montreal, QC, Canada; Department of Neurology and Neurosurgery, Faculty of Medicine, McGill University, Montreal, QC, Canada