1. Nourski KV, Steinschneider M, Rhone AE, Dappen ER, Kawasaki H, Howard MA. Processing of auditory novelty in human cortex during a semantic categorization task. Hear Res 2024;444:108972. PMID: 38359485; PMCID: PMC10984345; DOI: 10.1016/j.heares.2024.108972
Abstract
Auditory semantic novelty - a new meaningful sound in the context of a predictable acoustical environment - can probe neural circuits involved in language processing. Aberrant novelty detection is a feature of many neuropsychiatric disorders. This large-scale human intracranial electrophysiology study examined the spatial distribution of gamma and alpha power and auditory evoked potentials (AEP) associated with responses to unexpected words during performance of semantic categorization tasks. Participants were neurosurgical patients undergoing monitoring for medically intractable epilepsy. Each task included repeatedly presented monosyllabic words from different talkers ("common") and ten words presented only once ("novel"). Targets were words belonging to a specific semantic category. Novelty effects were defined as differences between neural responses to novel and common words. Novelty increased task difficulty and was associated with augmented gamma, suppressed alpha power, and AEP differences broadly distributed across the cortex. Gamma novelty effect had the highest prevalence in planum temporale, posterior superior temporal gyrus (STG) and pars triangularis of the inferior frontal gyrus; alpha in anterolateral Heschl's gyrus (HG), anterior STG and middle anterior cingulate cortex; AEP in posteromedial HG, lower bank of the superior temporal sulcus, and planum polare. Gamma novelty effect had a higher prevalence in dorsal than ventral auditory-related areas. Novelty effects were more pronounced in the left hemisphere. Better novel target detection was associated with reduced gamma novelty effect within auditory cortex and enhanced gamma effect within prefrontal and sensorimotor cortex. Alpha and AEP novelty effects were generally more prevalent in better performing participants. Multiple areas, including auditory cortex on the superior temporal plane, featured AEP novelty effect within the time frame of P3a and N400 scalp-recorded novelty-related potentials. 
This work provides a detailed account of auditory novelty in a paradigm that directly examined brain regions associated with semantic processing. Future studies may aid in the development of objective measures to assess the integrity of semantic novelty processing in clinical populations.
Affiliation(s)
- Kirill V Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, United States; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA 52242, United States
- Mitchell Steinschneider
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, United States; Departments of Neurology, Neuroscience, and Pediatrics, Albert Einstein College of Medicine, Bronx, NY 10461, United States
- Ariane E Rhone
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, United States
- Emily R Dappen
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, United States; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA 52242, United States
- Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, United States
- Matthew A Howard
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, United States; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA 52242, United States; Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA 52242, United States
2. Wagner M, Rusiniak M, Higby E, Nourski KV. Sensory processing of native and non-native phonotactic patterns in the alpha and beta frequency bands. Neuropsychologia 2023;189:108659. PMID: 37579990; PMCID: PMC10602391; DOI: 10.1016/j.neuropsychologia.2023.108659
Abstract
The phonotactic patterns of one's native language are established within cortical network processing during development. Sensory processing of native language phonotactic patterns established in memory may be modulated by top-down signals within the alpha and beta frequency bands. To explore sensory processing of phonotactic patterns in the alpha and beta frequency bands, electroencephalograms (EEGs) were recorded from native Polish- and native English-speaking adults as they listened to spoken nonwords within same and different nonword pairs. The nonwords contained three phonological sequence onsets that occur in both Polish and English (/pət/, /st/, /sət/) and one onset sequence, /pt/, which occurs in Polish onsets but not in English. Source localization modeling was used to transform 64-channel EEGs into brain source-level channels. Spectral power values in the low frequencies (2-29 Hz) were analyzed in response to the first nonword in nonword pairs within the context of counterbalanced listening-task conditions, which were presented on separate testing days. For the with-task listening condition, participants performed a behavioral task to the second nonword in the pairs. For the without-task condition, participants were only instructed to listen to the stimuli. Thus, in the with-task condition, the first nonword served as a cue for the second nonword, the target stimulus. The results revealed decreased spectral power in the beta frequency band for the with-task condition compared to the without-task condition in response to native language phonotactic patterns. In contrast, the task-related suppression effects in response to the non-native phonotactic pattern /pt/ for the English listeners extended into the alpha frequency band. These effects were localized to source channels in the left auditory cortex, the left anterior temporal cortex, and the occipital pole.
This exploratory study revealed a pattern of results that, if replicated, suggests that native language speech perception is supported by modulations in the alpha and beta frequency bands.
Affiliation(s)
- Monica Wagner
- St. John's University, 8000 Utopia Parkway, Queens, NY, 11439, USA
- Eve Higby
- California State University, East Bay, 25800 Carlos Bee Blvd, Hayward, CA, 94542, USA
- Kirill V Nourski
- The University of Iowa, 200 Hawkins Dr., Iowa City, IA, 52242, USA
3. Banks MI, Krause BM, Berger DG, Campbell DI, Boes AD, Bruss JE, Kovach CK, Kawasaki H, Steinschneider M, Nourski KV. Functional geometry of auditory cortical resting state networks derived from intracranial electrophysiology. PLoS Biol 2023;21:e3002239. PMID: 37651504; PMCID: PMC10499207; DOI: 10.1371/journal.pbio.3002239
Abstract
Understanding central auditory processing critically depends on defining underlying auditory cortical networks and their relationship to the rest of the brain. We addressed these questions using resting state functional connectivity derived from human intracranial electroencephalography. Mapping recording sites into a low-dimensional space where proximity represents functional similarity revealed a hierarchical organization. At a fine scale, a group of auditory cortical regions excluded several higher-order auditory areas and segregated maximally from the prefrontal cortex. On a mesoscale, the proximity of limbic structures to the auditory cortex suggested a limbic stream that parallels the classically described ventral and dorsal auditory processing streams. Identities of global hubs in anterior temporal and cingulate cortex depended on frequency band, consistent with diverse roles in semantic and cognitive processing. On a macroscale, observed hemispheric asymmetries were not specific for speech and language networks. This approach can be applied to multivariate brain data with respect to development, behavior, and disorders.
Affiliation(s)
- Matthew I. Banks
- Department of Anesthesiology, University of Wisconsin, Madison, Wisconsin, United States of America
- Department of Neuroscience, University of Wisconsin, Madison, Wisconsin, United States of America
- Bryan M. Krause
- Department of Anesthesiology, University of Wisconsin, Madison, Wisconsin, United States of America
- D. Graham Berger
- Department of Anesthesiology, University of Wisconsin, Madison, Wisconsin, United States of America
- Declan I. Campbell
- Department of Anesthesiology, University of Wisconsin, Madison, Wisconsin, United States of America
- Aaron D. Boes
- Department of Neurology, The University of Iowa, Iowa City, Iowa, United States of America
- Joel E. Bruss
- Department of Neurology, The University of Iowa, Iowa City, Iowa, United States of America
- Christopher K. Kovach
- Department of Neurosurgery, The University of Iowa, Iowa City, Iowa, United States of America
- Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, Iowa, United States of America
- Mitchell Steinschneider
- Department of Neurology, Albert Einstein College of Medicine, New York, New York, United States of America
- Department of Neuroscience, Albert Einstein College of Medicine, New York, New York, United States of America
- Kirill V. Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, Iowa, United States of America
- Iowa Neuroscience Institute, The University of Iowa, Iowa City, Iowa, United States of America
4. Keshishian M, Akkol S, Herrero J, Bickel S, Mehta AD, Mesgarani N. Joint, distributed and hierarchically organized encoding of linguistic features in the human auditory cortex. Nat Hum Behav 2023;7:740-753. PMID: 36864134; PMCID: PMC10417567; DOI: 10.1038/s41562-023-01520-0
Abstract
The precise role of the human auditory cortex in representing speech sounds and transforming them to meaning is not yet fully understood. Here we used intracranial recordings from the auditory cortex of neurosurgical patients as they listened to natural speech. We found an explicit, temporally ordered and anatomically distributed neural encoding of multiple linguistic features, including phonetic features, prelexical phonotactics, word frequency, and lexical-phonological and lexical-semantic information. Grouping neural sites on the basis of their encoded linguistic features revealed a hierarchical pattern, with distinct representations of prelexical and postlexical features distributed across various auditory areas. While sites with longer response latencies and greater distance from the primary auditory cortex encoded higher-level linguistic features, the encoding of lower-level features was preserved and not discarded. Our study reveals a cumulative mapping of sound to meaning and provides empirical evidence for validating neurolinguistic and psycholinguistic models of spoken word recognition that preserve the acoustic variations in speech.
Affiliation(s)
- Menoua Keshishian
- Department of Electrical Engineering, Columbia University, New York, NY, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Serdar Akkol
- Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY, USA
- Jose Herrero
- Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY, USA
- Department of Neurosurgery, Hofstra-Northwell School of Medicine, Manhasset, NY, USA
- Stephan Bickel
- Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY, USA
- Department of Neurosurgery, Hofstra-Northwell School of Medicine, Manhasset, NY, USA
- Ashesh D Mehta
- Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY, USA
- Department of Neurosurgery, Hofstra-Northwell School of Medicine, Manhasset, NY, USA
- Nima Mesgarani
- Department of Electrical Engineering, Columbia University, New York, NY, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
5. Nourski KV, Steinschneider M, Rhone AE, Kovach CK, Kawasaki H, Howard MA. Gamma Activation and Alpha Suppression within Human Auditory Cortex during a Speech Classification Task. J Neurosci 2022;42:5034-5046. PMID: 35534226; PMCID: PMC9233444; DOI: 10.1523/jneurosci.2187-21.2022
Abstract
The dynamics of information flow within the auditory cortical hierarchy associated with speech processing and the emergence of hemispheric specialization remain incompletely understood. To study these questions with high spatiotemporal resolution, intracranial recordings were obtained in 29 human neurosurgical patients of both sexes while they performed a semantic classification task. Neural activity was recorded from the posteromedial (HGPM) and anterolateral (HGAL) portions of Heschl's gyrus, planum temporale (PT), planum polare, insula, and superior temporal gyrus (STG). Responses to monosyllabic words exhibited early gamma power increases and a later suppression of alpha power, envisioned to represent feedforward activity and decreased feedback signaling, respectively. Gamma activation and alpha suppression had distinct magnitude and latency profiles. HGPM and PT had the strongest gamma responses with shortest onset latencies, indicating that they are the earliest auditory cortical processing stages. The origin of attenuated top-down influences in auditory cortex, as indexed by alpha suppression, was in STG and HGAL. Gamma responses and alpha suppression were typically larger to nontarget words than tones. Alpha suppression was uniformly greater to target versus nontarget stimuli. Hemispheric bias for words versus tones and for target versus nontarget words, when present, was left lateralized. Better task performance was associated with increased gamma activity in the left PT and greater alpha suppression in HGPM and HGAL bilaterally. The prominence of alpha suppression during semantic classification and its accessibility for noninvasive electrophysiologic studies suggests that this measure is a promising index of auditory cortical speech processing.
SIGNIFICANCE STATEMENT: Understanding the dynamics of cortical speech processing requires the use of active tasks. This is the first comprehensive intracranial electroencephalography study to examine cortical activity within the superior temporal plane, lateral superior temporal gyrus, and the insula during a semantic classification task. Distinct gamma activation and alpha suppression profiles clarify the functional organization of feedforward and feedback processing within the auditory cortical hierarchy. Asymmetries in cortical speech processing emerge at early processing stages. Relationships between cortical activity and task performance are interpreted in the context of current models of speech processing. Results lay the groundwork for iEEG studies using connectivity measures of the bidirectional information flow within the auditory processing hierarchy.
Affiliation(s)
- Kirill V Nourski
- Department of Neurosurgery, University of Iowa, Iowa City, Iowa 52242
- Iowa Neuroscience Institute, University of Iowa, Iowa City, Iowa 52242
- Mitchell Steinschneider
- Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, New York 10461
- Ariane E Rhone
- Department of Neurosurgery, University of Iowa, Iowa City, Iowa 52242
- Hiroto Kawasaki
- Department of Neurosurgery, University of Iowa, Iowa City, Iowa 52242
- Matthew A Howard
- Department of Neurosurgery, University of Iowa, Iowa City, Iowa 52242
- Iowa Neuroscience Institute, University of Iowa, Iowa City, Iowa 52242
- Pappajohn Biomedical Institute, University of Iowa, Iowa City, Iowa 52242
6. Lowe MX, Mohsenzadeh Y, Lahner B, Charest I, Oliva A, Teng S. Cochlea to categories: The spatiotemporal dynamics of semantic auditory representations. Cogn Neuropsychol 2021;38:468-489. PMID: 35729704; PMCID: PMC10589059; DOI: 10.1080/02643294.2022.2085085
Abstract
How does the auditory system categorize natural sounds? Here we apply multimodal neuroimaging to illustrate the progression from acoustic to semantically dominated representations. Combining magnetoencephalographic (MEG) and functional magnetic resonance imaging (fMRI) scans of observers listening to naturalistic sounds, we found superior temporal responses beginning ∼55 ms post-stimulus onset, spreading to extratemporal cortices by ∼100 ms. Early regions were distinguished less by onset/peak latency than by functional properties and overall temporal response profiles. Early acoustically-dominated representations trended systematically toward category dominance over time (after ∼200 ms) and space (beyond primary cortex). Semantic category representation was spatially specific: Vocalizations were preferentially distinguished in frontotemporal voice-selective regions and the fusiform; scenes and objects were distinguished in parahippocampal and medial place areas. Our results are consistent with real-world events coded via an extended auditory processing hierarchy, in which acoustic representations rapidly enter multiple streams specialized by category, including areas typically considered visual cortex.
Affiliation(s)
- Matthew X. Lowe
- Computer Science and Artificial Intelligence Lab (CSAIL), MIT, Cambridge, MA
- Unlimited Sciences, Colorado Springs, CO
- Yalda Mohsenzadeh
- Computer Science and Artificial Intelligence Lab (CSAIL), MIT, Cambridge, MA
- The Brain and Mind Institute, The University of Western Ontario, London, ON, Canada
- Department of Computer Science, The University of Western Ontario, London, ON, Canada
- Benjamin Lahner
- Computer Science and Artificial Intelligence Lab (CSAIL), MIT, Cambridge, MA
- Ian Charest
- Département de Psychologie, Université de Montréal, Montréal, Québec, Canada
- Center for Human Brain Health, University of Birmingham, UK
- Aude Oliva
- Computer Science and Artificial Intelligence Lab (CSAIL), MIT, Cambridge, MA
- Santani Teng
- Computer Science and Artificial Intelligence Lab (CSAIL), MIT, Cambridge, MA
- Smith-Kettlewell Eye Research Institute (SKERI), San Francisco, CA
7. Hamilton LS, Oganian Y, Hall J, Chang EF. Parallel and distributed encoding of speech across human auditory cortex. Cell 2021;184:4626-4639.e13. PMID: 34411517; DOI: 10.1016/j.cell.2021.07.019
Abstract
Speech perception is thought to rely on a cortical feedforward serial transformation of acoustic into linguistic representations. Using intracranial recordings across the entire human auditory cortex, electrocortical stimulation, and surgical ablation, we show that cortical processing across areas is not consistent with a serial hierarchical organization. Instead, response latency and receptive field analyses demonstrate parallel and distinct information processing in the primary and nonprimary auditory cortices. This functional dissociation was also observed with electrocortical stimulation: stimulating the primary auditory cortex evoked auditory hallucinations but did not distort or interfere with speech perception, whereas stimulation of nonprimary cortex in the superior temporal gyrus had the opposite effects. Ablation of the primary auditory cortex did not affect speech perception. These results establish a distributed functional organization of parallel information processing throughout the human auditory cortex and demonstrate an essential independent role for nonprimary auditory cortex in speech processing.
Affiliation(s)
- Liberty S Hamilton
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
- Yulia Oganian
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
- Jeffery Hall
- Department of Neurology and Neurosurgery, McGill University, Montreal Neurological Institute, Montreal, QC, H3A 2B4, Canada
- Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
8. Nourski KV, Steinschneider M, Rhone AE, Krause BM, Mueller RN, Kawasaki H, Banks MI. Cortical Responses to Vowel Sequences in Awake and Anesthetized States: A Human Intracranial Electrophysiology Study. Cereb Cortex 2021;31:5435-5448. PMID: 34117741; PMCID: PMC8568007; DOI: 10.1093/cercor/bhab168
Abstract
Elucidating neural signatures of sensory processing across consciousness states is a major focus in neuroscience. Noninvasive human studies using the general anesthetic propofol reveal differential effects on auditory cortical activity, with a greater impact on nonprimary and auditory-related areas than on primary auditory cortex. This study used intracranial electroencephalography to examine cortical responses to vowel sequences during induction of general anesthesia with propofol. Subjects were adult neurosurgical patients with intracranial electrodes placed to identify epileptic foci. Data were collected before electrode removal surgery. Stimuli were vowel sequences presented in a target detection task during awake, sedated, and unresponsive states. Averaged evoked potentials (AEPs) and high gamma (70-150 Hz) power were measured in auditory, auditory-related, and prefrontal cortex. In the awake state, AEPs were found throughout the studied brain areas; high gamma activity was limited to canonical auditory cortex. Sedation led to a decrease in AEP magnitude. Upon loss of consciousness (LOC), responses decreased in the superior temporal gyrus and adjacent auditory-related cortex, AEP magnitude decreased further in core auditory cortex, and responses showed changes in temporal structure and increased trial-to-trial variability. The findings identify putative biomarkers of LOC and serve as a foundation for future investigations of altered sensory processing.
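The AEP measure in this abstract rests on trial averaging: components phase-locked to the stimulus survive the average, while activity uncorrelated across trials shrinks roughly as 1/√N. A stdlib-only toy demonstration (the evoked waveform, noise level, and trial count here are invented for illustration, not taken from the study):

```python
import math
import random

random.seed(0)
fs, n_trials = 500, 100
t = [i / fs for i in range(fs // 2)]                          # 500-ms epochs
evoked = [0.5 * math.sin(2 * math.pi * 8 * ti) for ti in t]   # phase-locked component

def trial():
    """One simulated epoch: evoked response plus unit-variance Gaussian noise."""
    return [e + random.gauss(0, 1.0) for e in evoked]

trials = [trial() for _ in range(n_trials)]
avg = [sum(vals) / n_trials for vals in zip(*trials)]         # the "AEP"

# Residual noise in the average, relative to the true evoked waveform
residual = (sum((a - e) ** 2 for a, e in zip(avg, evoked)) / len(avg)) ** 0.5
print(residual < 0.15)  # noise SD 1.0 shrinks to about 1/sqrt(100) = 0.1
```

The same logic explains why increased trial-to-trial variability, as reported upon loss of consciousness, degrades the averaged response even when single-trial activity persists.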
Affiliation(s)
- Kirill V Nourski
- Address correspondence to Kirill V. Nourski, MD, PhD, Department of Neurosurgery, The University of Iowa, 200 Hawkins Dr. 1815 JCP, Iowa City, IA 52242, USA.
- Mitchell Steinschneider
- Department of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY 10461, USA
- Ariane E Rhone
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
- Bryan M Krause
- Department of Anesthesiology, University of Wisconsin School of Medicine and Public Health, Madison, WI 53705, USA
- Rashmi N Mueller
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA; Department of Anesthesia, The University of Iowa, Iowa City, IA 52242, USA
- Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
- Matthew I Banks
- Department of Anesthesiology, University of Wisconsin School of Medicine and Public Health, Madison, WI 53705, USA; Department of Neuroscience, University of Wisconsin School of Medicine and Public Health, Madison, WI 53705, USA
9. Rocchi F, Oya H, Balezeau F, Billig AJ, Kocsis Z, Jenison RL, Nourski KV, Kovach CK, Steinschneider M, Kikuchi Y, Rhone AE, Dlouhy BJ, Kawasaki H, Adolphs R, Greenlee JDW, Griffiths TD, Howard MA, Petkov CI. Common fronto-temporal effective connectivity in humans and monkeys. Neuron 2021;109:852-868.e8. PMID: 33482086; PMCID: PMC7927917; DOI: 10.1016/j.neuron.2020.12.026
Abstract
Human brain pathways supporting language and declarative memory are thought to have differentiated substantially during evolution. However, cross-species comparisons are missing on site-specific effective connectivity between regions important for cognition. We harnessed functional imaging to visualize the effects of direct electrical brain stimulation in macaque monkeys and human neurosurgery patients. We discovered comparable effective connectivity between caudal auditory cortex and both ventro-lateral prefrontal cortex (VLPFC, including area 44) and parahippocampal cortex in both species. Human-specific differences were clearest in the form of stronger hemispheric lateralization effects. In humans, electrical tractography revealed remarkably rapid evoked potentials in VLPFC following auditory cortex stimulation, and speech sounds drove VLPFC, consistent with prior evidence in monkeys of direct auditory cortex projections to homologous vocalization-responsive regions. The results identify a common effective connectivity signature in human and nonhuman primates, which from auditory cortex appears equally direct to VLPFC and indirect to the hippocampus.
Affiliation(s)
- Francesca Rocchi
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- Hiroyuki Oya
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA
- Fabien Balezeau
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- Zsuzsanna Kocsis
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK; Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
- Rick L Jenison
- Department of Neuroscience, University of Wisconsin - Madison, Madison, WI, USA
- Kirill V Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA
- Mitchell Steinschneider
- Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Yukiko Kikuchi
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- Ariane E Rhone
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
- Brian J Dlouhy
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA
- Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
- Ralph Adolphs
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Jeremy D W Greenlee
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA
- Timothy D Griffiths
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK; Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Matthew A Howard
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA; Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA, USA
- Christopher I Petkov
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
10. Nourski KV, Steinschneider M, Rhone AE, Kovach CK, Banks MI, Krause BM, Kawasaki H, Howard MA. Electrophysiology of the Human Superior Temporal Sulcus during Speech Processing. Cereb Cortex 2020;31:1131-1148. PMID: 33063098; DOI: 10.1093/cercor/bhaa281
Abstract
The superior temporal sulcus (STS) is a crucial hub for speech perception and can be studied with high spatiotemporal resolution using electrodes targeting mesial temporal structures in epilepsy patients. Goals of the current study were to clarify functional distinctions between the upper (STSU) and the lower (STSL) bank, hemispheric asymmetries, and activity during self-initiated speech. Electrophysiologic properties were characterized using semantic categorization and dialog-based tasks. Gamma-band activity and alpha-band suppression were used as complementary measures of STS activation. Gamma responses to auditory stimuli were weaker in STSL compared with STSU and had longer onset latencies. Activity in anterior STS was larger during speaking than listening; the opposite pattern was observed more posteriorly. Opposite hemispheric asymmetries were found for alpha suppression in STSU and STSL. Alpha suppression in the STS emerged earlier than in core auditory cortex, suggesting feedback signaling within the auditory cortical hierarchy. STSL was the only region where gamma responses to words presented in the semantic categorization tasks were larger in subjects with superior task performance. More pronounced alpha suppression was associated with better task performance in Heschl's gyrus, superior temporal gyrus, and STS. Functional differences between STSU and STSL warrant their separate assessment in future studies.
Affiliation(s)
- Kirill V Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA 52242, USA
- Mitchell Steinschneider
- Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY 10461, USA
- Ariane E Rhone
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
- Matthew I Banks
- Department of Anesthesiology, University of Wisconsin-Madison, Madison, WI 53705, USA; Department of Neuroscience, University of Wisconsin-Madison, Madison, WI 53705, USA
- Bryan M Krause
- Department of Anesthesiology, University of Wisconsin-Madison, Madison, WI 53705, USA
- Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
- Matthew A Howard
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA 52242, USA; Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA 52242, USA
| |
Collapse
11
Niesen M, Vander Ghinst M, Bourguignon M, Wens V, Bertels J, Goldman S, Choufani G, Hassid S, De Tiège X. Tracking the Effects of Top-Down Attention on Word Discrimination Using Frequency-tagged Neuromagnetic Responses. J Cogn Neurosci 2020; 32:877-888. [PMID: 31933439] [DOI: 10.1162/jocn_a_01522]
Abstract
Discrimination of words from nonspeech sounds is essential in communication. Still, how selective attention can influence this early step of speech processing remains elusive. To answer that question, brain activity was recorded with magnetoencephalography in 12 healthy adults while they listened to two sequences of auditory stimuli presented at 2.17 Hz, consisting of successions of one randomized word (tagging frequency = 0.54 Hz) and three acoustically matched nonverbal stimuli. Participants were instructed to focus their attention on the occurrence of a predefined word in the verbal attention condition and on a nonverbal stimulus in the nonverbal attention condition. Steady-state neuromagnetic responses were identified with spectral analysis at sensor and source levels. Significant sensor responses peaked at 0.54 and 2.17 Hz in both conditions. Sources at 0.54 Hz were reconstructed in supratemporal auditory cortex, left superior temporal gyrus (STG), left middle temporal gyrus, and left inferior frontal gyrus. Sources at 2.17 Hz were reconstructed in supratemporal auditory cortex and STG. Crucially, source strength in the left STG at 0.54 Hz was significantly higher in verbal attention than in nonverbal attention condition. This study demonstrates speech-sensitive responses at primary auditory and speech-related neocortical areas. Critically, it highlights that, during word discrimination, top-down attention modulates activity within the left STG. This area therefore appears to play a crucial role in selective verbal attentional processes for this early step of speech processing.
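The frequency-tagging logic this abstract describes (steady-state responses standing out as narrow spectral peaks at the word rate and the stimulus rate) can be sketched on simulated data. The sampling rate, duration, and noise level below are invented for illustration; this is not the authors' MEG pipeline.

```python
import numpy as np

# Toy frequency-tagging sketch: stimuli at 2.17 Hz, every 4th one a word,
# so the word response is tagged at 2.17/4 Hz (~0.54 Hz, as in the abstract).
fs = 27.125                      # Hz; hypothetical, chosen so both tags fall on exact FFT bins
t = np.arange(0, 400, 1 / fs)    # 400 s of simulated sensor data
f_stim, f_word = 2.17, 2.17 / 4

rng = np.random.default_rng(0)
signal = (1.0 * np.sin(2 * np.pi * f_stim * t)    # stimulus-rate steady-state component
          + 0.5 * np.sin(2 * np.pi * f_word * t)  # word-rate component
          + rng.normal(0, 2.0, t.size))           # sensor noise

spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

def peak_snr(f0, n_nb=10):
    """Amplitude at the tagged bin relative to the mean of neighboring bins."""
    i = int(np.argmin(np.abs(freqs - f0)))
    nb = np.r_[spectrum[i - n_nb:i - 1], spectrum[i + 2:i + n_nb + 1]]
    return spectrum[i] / nb.mean()

print(peak_snr(f_word), peak_snr(f_stim))  # both tags stand out as narrow peaks
```

With exact-bin tag frequencies, the tagged components rise well above neighboring bins, which is how responses at 0.54 and 2.17 Hz are identified at the sensor level.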
12
Billig AJ, Herrmann B, Rhone AE, Gander PE, Nourski KV, Snoad BF, Kovach CK, Kawasaki H, Howard MA, Johnsrude IS. A Sound-Sensitive Source of Alpha Oscillations in Human Non-Primary Auditory Cortex. J Neurosci 2019; 39:8679-8689. [PMID: 31533976] [PMCID: PMC6820204] [DOI: 10.1523/jneurosci.0696-19.2019]
Abstract
The functional organization of human auditory cortex can be probed by characterizing responses to various classes of sound at different anatomical locations. Along with histological studies this approach has revealed a primary field in posteromedial Heschl's gyrus (HG) with pronounced induced high-frequency (70-150 Hz) activity and short-latency responses that phase-lock to rapid transient sounds. Low-frequency neural oscillations are also relevant to stimulus processing and information flow, however, their distribution within auditory cortex has not been established. Alpha activity (7-14 Hz) in particular has been associated with processes that may differentially engage earlier versus later levels of the cortical hierarchy, including functional inhibition and the communication of sensory predictions. These theories derive largely from the study of occipitoparietal sources readily detectable in scalp electroencephalography. To characterize the anatomical basis and functional significance of less accessible temporal-lobe alpha activity we analyzed responses to sentences in seven human adults (4 female) with epilepsy who had been implanted with electrodes in superior temporal cortex. In contrast to primary cortex in posteromedial HG, a non-primary field in anterolateral HG was characterized by high spontaneous alpha activity that was strongly suppressed during auditory stimulation. Alpha-power suppression decreased with distance from anterolateral HG throughout superior temporal cortex, and was more pronounced for clear compared to degraded speech. This suppression could not be accounted for solely by a change in the slope of the power spectrum. 
The differential manifestation and stimulus-sensitivity of alpha oscillations across auditory fields should be accounted for in theories of their generation and function. SIGNIFICANCE STATEMENT: To understand how auditory cortex is organized in support of perception, we recorded from patients implanted with electrodes for clinical reasons. This allowed measurement of activity in brain regions at different levels of sensory processing. Oscillations in the alpha range (7-14 Hz) have been associated with functions including sensory prediction and inhibition of regions handling irrelevant information, but their distribution within auditory cortex is not known. A key finding was that these oscillations dominated in one particular non-primary field, anterolateral Heschl's gyrus, and were suppressed when subjects listened to sentences. These results build on our knowledge of the functional organization of auditory cortex and provide anatomical constraints on theories of the generation and function of alpha oscillations.
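The abstract's point that alpha suppression "could not be accounted for solely by a change in the slope of the power spectrum" implies separating narrow-band alpha power from the broadband 1/f trend. A minimal sketch of that separation on simulated epochs; the epoch length, amplitudes, and fit range are invented, and this is not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 500, 2000        # hypothetical: 4-s epochs at 500 Hz

def pink_noise():
    """Broadband 1/f-shaped background via spectral shaping of white noise."""
    spec = np.fft.rfft(rng.normal(size=n))
    f = np.fft.rfftfreq(n, 1 / fs)
    spec[1:] /= np.sqrt(f[1:])
    return np.fft.irfft(spec, n)

def epoch(alpha_amp):
    t = np.arange(n) / fs
    return pink_noise() + alpha_amp * np.sin(2 * np.pi * 10 * t)

def alpha_above_trend(x):
    """Mean log-power in the alpha band (7-14 Hz) above a linear 1/f fit."""
    p = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(n, 1 / fs)
    alpha = (f >= 7) & (f <= 14)
    fit = (f >= 2) & (f <= 40) & ~alpha          # fit the trend outside the alpha band
    slope, icpt = np.polyfit(np.log10(f[fit]), np.log10(p[fit]), 1)
    return float(np.mean(np.log10(p[alpha]) - (slope * np.log10(f[alpha]) + icpt)))

# Strong alpha at "baseline", none during "stimulation"; the 1/f background
# is identical, so only the band-limited peak above the trend changes.
base = np.mean([alpha_above_trend(epoch(3.0)) for _ in range(20)])
stim = np.mean([alpha_above_trend(epoch(0.0)) for _ in range(20)])
print(base, stim)   # base > stim: suppression beyond any spectral-slope change
```

Because the trend is fit and subtracted in log-log space per epoch, a genuine drop in oscillatory alpha registers even when the broadband slope is unchanged.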
Affiliation(s)
- Alexander J Billig: The Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada
- Björn Herrmann: The Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada
- Matthew A Howard: Department of Neurosurgery, Iowa Neuroscience Institute, and Pappajohn Biomedical Institute, The University of Iowa, Iowa City, Iowa 52242, USA
- Ingrid S Johnsrude: The Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada; School of Communication Sciences and Disorders, University of Western Ontario, London, Ontario N6A 5B7, Canada
13
O'Sullivan J, Herrero J, Smith E, Schevon C, McKhann GM, Sheth SA, Mehta AD, Mesgarani N. Hierarchical Encoding of Attended Auditory Objects in Multi-talker Speech Perception. Neuron 2019; 104:1195-1209.e3. [PMID: 31648900] [DOI: 10.1016/j.neuron.2019.09.007]
Abstract
Humans can easily focus on one speaker in a multi-talker acoustic environment, but how different areas of the human auditory cortex (AC) represent the acoustic components of mixed speech is unknown. We obtained invasive recordings from the primary and nonprimary AC in neurosurgical patients as they listened to multi-talker speech. We found that neural sites in the primary AC responded to individual speakers in the mixture and were relatively unchanged by attention. In contrast, neural sites in the nonprimary AC were less discerning of individual speakers but selectively represented the attended speaker. Moreover, the encoding of the attended speaker in the nonprimary AC was invariant to the degree of acoustic overlap with the unattended speaker. Finally, this emergent representation of attended speech in the nonprimary AC was linearly predictable from the primary AC responses. Our results reveal the neural computations underlying the hierarchical formation of auditory objects in human AC during multi-talker speech perception.
Affiliation(s)
- James O'Sullivan: Department of Electrical Engineering, Columbia University, New York, NY, USA
- Jose Herrero: Department of Neurosurgery, Hofstra-Northwell School of Medicine and Feinstein Institute for Medical Research, Manhasset, NY, USA
- Elliot Smith: Department of Neurological Surgery, The Neurological Institute, New York, NY, USA; Department of Neurosurgery, University of Utah, Salt Lake City, UT, USA
- Catherine Schevon: Department of Neurological Surgery, The Neurological Institute, New York, NY, USA
- Guy M McKhann: Department of Neurological Surgery, The Neurological Institute, New York, NY, USA
- Sameer A Sheth: Department of Neurological Surgery, The Neurological Institute, New York, NY, USA; Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- Ashesh D Mehta: Department of Neurosurgery, Hofstra-Northwell School of Medicine and Feinstein Institute for Medical Research, Manhasset, NY, USA
- Nima Mesgarani: Department of Electrical Engineering, Columbia University, New York, NY, USA
14
Rutten S, Santoro R, Hervais-Adelman A, Formisano E, Golestani N. Cortical encoding of speech enhances task-relevant acoustic information. Nat Hum Behav 2019; 3:974-987. [DOI: 10.1038/s41562-019-0648-9]
15
Yi HG, Leonard MK, Chang EF. The Encoding of Speech Sounds in the Superior Temporal Gyrus. Neuron 2019; 102:1096-1110. [PMID: 31220442] [PMCID: PMC6602075] [DOI: 10.1016/j.neuron.2019.04.023]
Abstract
The human superior temporal gyrus (STG) is critical for extracting meaningful linguistic features from speech input. Local neural populations are tuned to acoustic-phonetic features of all consonants and vowels and to dynamic cues for intonational pitch. These populations are embedded throughout broader functional zones that are sensitive to amplitude-based temporal cues. Beyond speech features, STG representations are strongly modulated by learned knowledge and perceptual goals. Currently, a major challenge is to understand how these features are integrated across space and time in the brain during natural speech comprehension. We present a theory that temporally recurrent connections within STG generate context-dependent phonological representations, spanning longer temporal sequences relevant for coherent percepts of syllables, words, and phrases.
Affiliation(s)
- Han Gyol Yi: Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
- Matthew K Leonard: Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
- Edward F Chang: Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
16
Abstract
OBJECTIVES: Cochlear implants are a standard therapy for deafness, yet the ability of implanted patients to understand speech varies widely. To better understand this variability in outcomes, the authors used functional near-infrared spectroscopy to image activity within regions of the auditory cortex and compare the results to behavioral measures of speech perception. DESIGN: The authors studied 32 deaf adults hearing through cochlear implants and 35 normal-hearing controls. The authors used functional near-infrared spectroscopy to measure responses within the lateral temporal lobe and the superior temporal gyrus to speech stimuli of varying intelligibility. The speech stimuli included normal speech, channelized speech (vocoded into 20 frequency bands), and scrambled speech (the 20 frequency bands were shuffled in random order). The authors also used environmental sounds as a control stimulus. Behavioral measures consisted of the speech reception threshold, consonant-nucleus-consonant words, and AzBio sentence tests measured in quiet. RESULTS: Both control and implanted participants with good speech perception exhibited greater cortical activations to natural speech than to unintelligible speech. In contrast, implanted participants with poor speech perception had large, indistinguishable cortical activations to all stimuli. The ratio of cortical activation to normal speech relative to scrambled speech correlated directly with the consonant-nucleus-consonant word and AzBio sentence scores. This pattern of cortical activation was not correlated with auditory threshold, age, side of implantation, or time after implantation. Turning off the implant reduced the cortical activations in all implanted participants.
CONCLUSIONS: Together, these data indicate that the responses the authors measured within the lateral temporal lobe and the superior temporal gyrus correlate with behavioral measures of speech perception, demonstrating a neural basis for the variability in speech understanding outcomes after cochlear implantation.
17
De Meo R, Matusz PJ, Knebel JF, Murray MM, Thompson WR, Clarke S. What makes medical students better listeners? Curr Biol 2016; 26:R519-R520. [PMID: 27404234] [DOI: 10.1016/j.cub.2016.05.024]
Abstract
Diagnosing heart conditions by auscultation is an important clinical skill commonly learnt by medical students. Clinical proficiency for this skill is in decline [1], and new teaching methods are needed. Successful discrimination of heartbeat sounds is believed to benefit mainly from acoustical training [2]. From recent studies of auditory training [3,4] we hypothesized that semantic representations outside the auditory cortex contribute to diagnostic accuracy in cardiac auscultation. To test this hypothesis, we analysed auditory evoked potentials (AEPs) which were recorded from medical students while they diagnosed quadruplets of heartbeat cycles. The comparison of trials with correct (Hits) versus incorrect diagnosis (Misses) revealed a significant difference in brain activity at 280-310 ms after the onset of the second cycle within the left middle frontal gyrus (MFG) and the right prefrontal cortex. This timing and locus suggest that semantic rather than acoustic representations contribute critically to auscultation skills. Thus, teaching auscultation should emphasize the link between the heartbeat sound and its meaning. Beyond cardiac auscultation, this issue is of interest for all fields where subtle but complex perceptual differences identify items in a well-known semantic context.
Affiliation(s)
- Rosanna De Meo: Service of Neuropsychology and Neurorehabilitation, Department of Clinical Neuroscience, CHUV, Switzerland
- Pawel J Matusz: The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology and Department of Clinical Neurosciences, CHUV, Switzerland
- Jean-François Knebel: Service of Neuropsychology and Neurorehabilitation, Department of Clinical Neuroscience, CHUV, Switzerland; The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology and Department of Clinical Neurosciences, CHUV, Switzerland; Department of Ophthalmology, Jules-Gonin Eye Hospital, Lausanne, Switzerland
- Micah M Murray: Service of Neuropsychology and Neurorehabilitation, Department of Clinical Neuroscience, CHUV, Switzerland; The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology and Department of Clinical Neurosciences, CHUV, Switzerland; EEG Brain Mapping Core, Center for Biomedical Imaging (CIBM), Switzerland; Department of Ophthalmology, Jules-Gonin Eye Hospital, Lausanne, Switzerland
- W Reid Thompson: Division of Pediatric Cardiology, Department of Pediatrics, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Stephanie Clarke: Service of Neuropsychology and Neurorehabilitation, Department of Clinical Neuroscience, CHUV, Switzerland
18
Cortical Representations of Speech in a Multitalker Auditory Scene. J Neurosci 2017; 37:9189-9196. [PMID: 28821680] [DOI: 10.1523/jneurosci.0938-17.2017]
Abstract
The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT: Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex.
We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene, with both attended and unattended speech streams represented with almost equal fidelity. We also show that higher-order auditory cortical areas, by contrast, represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects.
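The "systems-theoretic methods of stimulus reconstruction" referred to above are, in their simplest form, a backward model: regularized regression from time-lagged neural channels onto the stimulus envelope, with reconstruction fidelity compared between attended and ignored streams. A toy sketch with simulated sensors; mixing weights, noise levels, lags, and the in-sample fit are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n_samp, n_ch = 100, 2000, 8      # hypothetical: 20 s at 100 Hz, 8 channels

def envelope():
    """Smoothed zero-mean noise standing in for a speech envelope."""
    e = np.convolve(rng.normal(size=n_samp), np.ones(25) / 25, mode="same")
    return e - e.mean()

attended, ignored = envelope(), envelope()

# Simulated higher-order sensors: dominated by the attended stream.
mix_att = rng.normal(size=n_ch)
mix_ign = 0.3 * rng.normal(size=n_ch)
resp = (np.outer(attended, mix_att) + np.outer(ignored, mix_ign)
        + 0.2 * rng.normal(size=(n_samp, n_ch)))

def reconstruct(target, lags=range(10), lam=10.0):
    """Backward model: ridge regression of the stimulus on lagged responses."""
    X = np.hstack([np.roll(resp, lag, axis=0) for lag in lags])
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ target)
    return X @ w

r_att = np.corrcoef(reconstruct(attended), attended)[0, 1]
r_ign = np.corrcoef(reconstruct(ignored), ignored)[0, 1]
print(r_att, r_ign)   # attended stream reconstructed with higher fidelity
```

In the study's terms, a region whose responses yield much higher reconstruction fidelity for the attended stream behaves like higher-order cortex; roughly equal fidelities would look like the primary-like scene representation.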
19
Wagner M, Shafer VL, Haxhari E, Kiprovski K, Behrmann K, Griffiths T. Stability of the Cortical Sensory Waveforms, the P1-N1-P2 Complex and T-Complex, of Auditory Evoked Potentials. J Speech Lang Hear Res 2017; 60:2105-2115. [PMID: 28679003] [PMCID: PMC5831095] [DOI: 10.1044/2017_jslhr-h-16-0056]
Abstract
Purpose: Atypical cortical sensory waveforms reflecting impaired encoding of auditory stimuli may result from inconsistency in cortical response to the acoustic feature changes within spoken words. Thus, the present study assessed intrasubject stability of the P1-N1-P2 complex and T-complex to multiple productions of spoken nonwords in 48 adults to provide benchmarks for future studies probing auditory processing deficits. Method: Response trials were split (split epoch averages) for each of 4 word types for each subject and compared for similarity in waveform morphology. Waveform morphology association was assessed between 50 and 600 ms, the time frame reflecting spectro-temporal feature processing for the stimuli used in the study. Results: Using approximately 70 trials in each split epoch, the P1-N1-P2 complex was found to be highly stable, with high positive associations found for all subjects for at least 3 word types. The T-complex was more variable, with high positive associations found for all subjects to at least 1 word type. Conclusions: The P1-N1-P2 split epochs at group and individual levels and the T-complex at group level can be used to assess consistency of neural response in individuals with auditory processing deficits. The T-complex relative to the P1-N1-P2 complex in individuals can provide information pertaining to phonological processing.
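The split-epoch stability measure described above (split a condition's trials in half, average each half, and correlate the two average waveforms over the 50-600 ms window) can be sketched as follows. The waveform shape, trial count, and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 250
t = np.arange(0, 0.6, 1 / fs)        # one 0-600 ms epoch; hypothetical sampling rate

# A fixed "true" evoked waveform (roughly P1-N1-P2-shaped) plus trial-to-trial noise.
erp = (0.5 * np.exp(-((t - 0.05) / 0.02) ** 2)
       - 1.0 * np.exp(-((t - 0.10) / 0.03) ** 2)
       + 0.8 * np.exp(-((t - 0.18) / 0.05) ** 2))
trials = erp + rng.normal(0, 1.0, size=(140, t.size))   # ~70 trials per split half

# Split-epoch averages: odd- vs. even-numbered trials.
avg_a = trials[0::2].mean(axis=0)
avg_b = trials[1::2].mean(axis=0)

# Morphology association: Pearson r between the halves over the 50-600 ms window.
win = t >= 0.05
r = float(np.corrcoef(avg_a[win], avg_b[win])[0, 1])
print(r)   # high positive association indicates a stable waveform
```

A response component that is reliably evoked survives the split with a high correlation; an inconsistent component (the abstract's more variable T-complex) yields a lower one.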
20
Higgins NC, McLaughlin SA, Da Costa S, Stecker GC. Sensitivity to an Illusion of Sound Location in Human Auditory Cortex. Front Syst Neurosci 2017; 11:35. [PMID: 28588457] [PMCID: PMC5440574] [DOI: 10.3389/fnsys.2017.00035]
Abstract
Human listeners place greater weight on the beginning of a sound compared to the middle or end when determining sound location, creating an auditory illusion known as the Franssen effect. Here, we exploited that effect to test whether human auditory cortex (AC) represents the physical vs. perceived spatial features of a sound. We used functional magnetic resonance imaging (fMRI) to measure AC responses to sounds that varied in perceived location due to interaural level differences (ILD) applied to sound onsets or to the full sound duration. Analysis of hemodynamic responses in AC revealed sensitivity to ILD in both full-cue (veridical) and onset-only (illusory) lateralized stimuli. Classification analysis revealed regional differences in the sensitivity to onset-only ILDs, where better classification was observed in posterior compared to primary AC. That is, restricting the ILD to sound onset—which alters the physical but not the perceptual nature of the spatial cue—did not eliminate cortical sensitivity to that cue. These results suggest that perceptual representations of auditory space emerge or are refined in higher-order AC regions, supporting the stable perception of auditory space in noisy or reverberant environments and forming the basis of illusions such as the Franssen effect.
Affiliation(s)
- Nathan C Higgins: Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, TN, United States
- Susan A McLaughlin: Institute for Learning and Brain Sciences, University of Washington, Seattle, WA, United States
- Sandra Da Costa: Biomedical Imaging Research Center (CIBM), School of Basic Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- G Christopher Stecker: Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, TN, United States
21
Nourski KV. Auditory processing in the human cortex: An intracranial electrophysiology perspective. Laryngoscope Investig Otolaryngol 2017; 2:147-156. [PMID: 28894834] [PMCID: PMC5562943] [DOI: 10.1002/lio2.73]
Abstract
Objective: Direct electrophysiological recordings in epilepsy patients offer an opportunity to study human auditory cortical processing with unprecedented spatiotemporal resolution. This review highlights recent intracranial studies of human auditory cortex and focuses on its basic response properties as well as modulation of cortical activity during the performance of active behavioral tasks. Data Sources: Literature review. Review Methods: A review of the literature was conducted to summarize the functional organization of human auditory and auditory-related cortex as revealed using intracranial recordings. Results: The tonotopically organized core auditory cortex within the posteromedial portion of Heschl's gyrus represents spectrotemporal features of sounds with high temporal precision and short response latencies. At this level of processing, high gamma (70-150 Hz) activity is minimally modulated by task demands. Non-core cortex on the lateral surface of the superior temporal gyrus also maintains representation of stimulus acoustic features and, for speech, subserves transformation of acoustic inputs into phonemic representations. High gamma responses in this region are modulated by task requirements. Prefrontal cortex exhibits complex response patterns, related to stimulus intelligibility and task relevance. At this level of auditory processing, activity is strongly modulated by task requirements and reflects behavioral performance. Conclusions: Direct recordings from the human brain reveal hierarchical organization of sound processing within auditory and auditory-related cortex. Level of Evidence: Level V.
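The high gamma (70-150 Hz) activity measure that recurs throughout this review is commonly obtained by band-limiting the recording and taking the analytic-signal magnitude. A self-contained sketch on one simulated trial; the burst timing, amplitude, and sampling rate are invented, and real pipelines typically use dedicated filter designs rather than this bare FFT mask.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 1000
t = np.arange(0, 2.0, 1 / fs)     # one 2-s simulated ECoG trial

# A 90 Hz burst between 0.5 and 1.0 s stands in for a stimulus-evoked high gamma response.
burst = (t > 0.5) & (t < 1.0)
x = rng.normal(0, 1.0, t.size) + 3.0 * burst * np.sin(2 * np.pi * 90 * t)

def high_gamma_envelope(x, lo=70.0, hi=150.0):
    """Band-limit to 70-150 Hz in the frequency domain, then take the
    analytic-signal magnitude (keep only positive in-band frequencies, doubled)."""
    spec = np.fft.fft(x)
    f = np.fft.fftfreq(x.size, 1 / fs)
    analytic = np.fft.ifft(np.where((f >= lo) & (f <= hi), 2 * spec, 0))
    return np.abs(analytic)

env = high_gamma_envelope(x)
resp = env[burst].mean()           # mean envelope during the burst
base = env[t <= 0.5].mean()        # mean envelope before the burst
print(resp / base)                 # response well above baseline
```

Averaging such envelopes across trials, relative to a pre-stimulus baseline, yields the task- and region-dependent high gamma responses the review summarizes.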
Affiliation(s)
- Kirill V Nourski: Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
22
Uhlig CH, Gutschalk A. Transient human auditory cortex activation during volitional attention shifting. PLoS One 2017; 12:e0172907. [PMID: 28273110] [PMCID: PMC5342206] [DOI: 10.1371/journal.pone.0172907]
Abstract
While strong activation of auditory cortex is generally found for exogenous orienting of attention, endogenous, intra-modal shifting of auditory attention has not yet been demonstrated to evoke transient activation of the auditory cortex. Here, we used fMRI to test if endogenous shifting of attention is also associated with transient activation of the auditory cortex. In contrast to previous studies, attention shifts were completely self-initiated and not cued by transient auditory or visual stimuli. Stimuli were two dichotic, continuous streams of tones, whose perceptual grouping was not ambiguous. Participants were instructed to continuously focus on one of the streams and switch between the two after a while, indicating the time and direction of each attentional shift by pressing one of two response buttons. The BOLD response around the time of the button presses revealed robust activation of the auditory cortex, along with activation of a distributed task network. To test if the transient auditory cortex activation was specifically related to auditory orienting, a self-paced motor task was added, where participants were instructed to ignore the auditory stimulation while they pressed the response buttons in alternation and at a similar pace. Results showed that attentional orienting produced stronger activity in auditory cortex, but auditory cortex activation was also observed for button presses without focused attention to the auditory stimulus. The response related to attention shifting was stronger contralateral to the side where attention was shifted to. Contralateral-dominant activation was also observed in dorsal parietal cortex areas, confirming previous observations for auditory attention shifting in studies that used auditory cues.
Affiliation(s)
- Christian Harm Uhlig: Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Alexander Gutschalk: Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
23
Nourski KV, Steinschneider M, Rhone AE, Howard MA III. Intracranial Electrophysiology of Auditory Selective Attention Associated with Speech Classification Tasks. Front Hum Neurosci 2017; 10:691. [PMID: 28119593] [PMCID: PMC5222875] [DOI: 10.3389/fnhum.2016.00691]
Abstract
Auditory selective attention paradigms are powerful tools for elucidating the various stages of speech processing. This study examined electrocorticographic activation during target detection tasks within and beyond auditory cortex. Subjects were nine neurosurgical patients undergoing chronic invasive monitoring for treatment of medically refractory epilepsy. Four subjects had left hemisphere electrode coverage, four had right coverage and one had bilateral coverage. Stimuli were 300 ms complex tones or monosyllabic words, each spoken by a different male or female talker. Subjects were instructed to press a button whenever they heard a target corresponding to a specific stimulus category (e.g., tones, animals, numbers). High gamma (70–150 Hz) activity was simultaneously recorded from Heschl’s gyrus (HG), superior, middle temporal and supramarginal gyri (STG, MTG, SMG), as well as prefrontal cortex (PFC). Data analysis focused on: (1) task effects (non-target words in tone detection vs. semantic categorization task); and (2) target effects (words as target vs. non-target during semantic classification). Responses within posteromedial HG (auditory core cortex) were minimally modulated by task and target. Non-core auditory cortex (anterolateral HG and lateral STG) exhibited sensitivity to task, with a smaller proportion of sites showing target effects. Auditory-related areas (MTG and SMG) and PFC showed both target and, to a lesser extent, task effects, that occurred later than those in the auditory cortex. Significant task and target effects were more prominent in the left hemisphere than in the right. Findings demonstrate a hierarchical organization of speech processing during auditory selective attention.
Collapse
Affiliation(s)
- Kirill V Nourski
- Human Brain Research Laboratory, Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
- Mitchell Steinschneider
- Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Ariane E Rhone
- Human Brain Research Laboratory, Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
- Matthew A Howard III
- Human Brain Research Laboratory, Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA, USA
24
Nourski KV, Steinschneider M, Rhone AE. Electrocorticographic Activation within Human Auditory Cortex during Dialog-Based Language and Cognitive Testing. Front Hum Neurosci 2016; 10:202. [PMID: 27199720 PMCID: PMC4854871 DOI: 10.3389/fnhum.2016.00202] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2016] [Accepted: 04/20/2016] [Indexed: 11/25/2022] Open
Abstract
Current models of cortical speech and language processing include multiple regions within the temporal lobe of both hemispheres. Human communication, by necessity, involves complex interactions between regions subserving speech and language processing with those involved in more general cognitive functions. To assess these interactions, we utilized an ecologically salient conversation-based approach. This approach mandates that we first clarify activity patterns at the earliest stages of cortical speech processing. Therefore, we examined high gamma (70–150 Hz) responses within the electrocorticogram (ECoG) recorded simultaneously from Heschl’s gyrus (HG) and lateral surface of the superior temporal gyrus (STG). Subjects were neurosurgical patients undergoing evaluation for treatment of medically intractable epilepsy. They performed an expanded version of the Mini-Mental State Examination (MMSE), which included additional spelling, naming, and memory-based tasks. ECoG was recorded from HG and the STG using multicontact depth and subdural electrode arrays, respectively. Differences in high gamma activity during listening to the interviewer and the subject’s self-generated verbal responses were quantified for each recording site and across sites within HG and STG. The expanded MMSE produced widespread activation in auditory cortex of both hemispheres. No significant difference was found between activity during listening to the interviewer’s questions and the subject’s answers in posteromedial HG (auditory core cortex). A different pattern was observed throughout anterolateral HG and posterior and middle portions of lateral STG (non-core auditory cortical areas), where activity was significantly greater during listening compared to speaking. No systematic task-specific differences in the degree of suppression during speaking relative to listening were found in posterior and middle STG. Individual sites could, however, exhibit task-related variability in the degree of suppression during speaking compared to listening. The current study demonstrates that ECoG recordings can be acquired in time-efficient dialog-based paradigms, permitting examination of language and cognition in an ecologically salient manner. The results obtained from auditory cortex serve as a foundation for future studies addressing patterns of activity beyond auditory cortex that subserve human communication.
Affiliation(s)
- Kirill V Nourski
- Human Brain Research Laboratory, Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
- Mitchell Steinschneider
- Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Ariane E Rhone
- Human Brain Research Laboratory, Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
25
Meister H, Schreitmüller S, Ortmann M, Rählmann S, Walger M. Effects of Hearing Loss and Cognitive Load on Speech Recognition with Competing Talkers. Front Psychol 2016; 7:301. [PMID: 26973585 PMCID: PMC4777916 DOI: 10.3389/fpsyg.2016.00301] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2015] [Accepted: 02/16/2016] [Indexed: 12/30/2022] Open
Abstract
Everyday communication frequently comprises situations with more than one talker speaking at a time. These situations are challenging since they pose high attentional and memory demands placing cognitive load on the listener. Hearing impairment additionally exacerbates communication problems under these circumstances. We examined the effects of hearing loss and attention tasks on speech recognition with competing talkers in older adults with and without hearing impairment. We hypothesized that hearing loss would affect word identification, talker separation and word recall and that the difficulties experienced by the hearing impaired listeners would be especially pronounced in a task with high attentional and memory demands. Two listener groups closely matched for their age and neuropsychological profile but differing in hearing acuity were examined regarding their speech recognition with competing talkers in two different tasks. One task required repeating back words from one target talker (1TT) while ignoring the competing talker whereas the other required repeating back words from both talkers (2TT). The competing talkers differed with respect to their voice characteristics. Moreover, sentences either with low or high context were used in order to consider linguistic properties. Compared to their normal hearing peers, listeners with hearing loss revealed limited speech recognition in both tasks. Their difficulties were especially pronounced in the more demanding 2TT task. In order to shed light on the underlying mechanisms, different error sources, namely having misunderstood, confused, or omitted words were investigated. Misunderstanding and omitting words were more frequently observed in the hearing impaired than in the normal hearing listeners. In line with common speech perception models, it is suggested that these effects are related to impaired object formation and taxed working memory capacity (WMC). In a post-hoc analysis, the listeners were further separated with respect to their WMC. It appeared that higher capacity could serve as a compensatory mechanism against the adverse effects of hearing loss, especially with low context speech.
Affiliation(s)
- Hartmut Meister
- Jean-Uhrmacher-Institute for Clinical ENT-Research, University of Cologne, Cologne, Germany
- Stefan Schreitmüller
- Jean-Uhrmacher-Institute for Clinical ENT-Research, University of Cologne, Cologne, Germany
- Magdalene Ortmann
- Jean-Uhrmacher-Institute for Clinical ENT-Research, University of Cologne, Cologne, Germany
- Sebastian Rählmann
- Jean-Uhrmacher-Institute for Clinical ENT-Research, University of Cologne, Cologne, Germany
- Martin Walger
- Clinic of Otorhinolaryngology, Head and Neck Surgery, University of Cologne, Cologne, Germany
26
Saliba J, Bortfeld H, Levitin DJ, Oghalai JS. Functional near-infrared spectroscopy for neuroimaging in cochlear implant recipients. Hear Res 2016; 338:64-75. [PMID: 26883143 DOI: 10.1016/j.heares.2016.02.005] [Citation(s) in RCA: 51] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/25/2015] [Revised: 12/18/2015] [Accepted: 02/12/2016] [Indexed: 10/22/2022]
Abstract
Functional neuroimaging can provide insight into the neurobiological factors that contribute to the variations in individual hearing outcomes following cochlear implantation. To date, measuring neural activity within the auditory cortex of cochlear implant (CI) recipients has been challenging, primarily because the use of traditional neuroimaging techniques is limited in people with CIs. Functional near-infrared spectroscopy (fNIRS) is an emerging technology that offers benefits in this population because it is non-invasive, compatible with CI devices, and not subject to electrical artifacts. However, there are important considerations to be made when using fNIRS to maximize the signal to noise ratio and to best identify meaningful cortical responses. This review considers these issues, the current data, and future directions for using fNIRS as a clinical application in individuals with CIs. This article is part of a Special Issue entitled "Annual Reviews 2016".
Affiliation(s)
- Joe Saliba
- Department of Otolaryngology - Head and Neck Surgery, Stanford University, Stanford, CA 94305, USA; Department of Otolaryngology - Head and Neck Surgery, McGill University, 1001 Boul. Decarie, Montreal, QC, Canada
- Heather Bortfeld
- Psychological Sciences, University of California-Merced, 5200 North Lake Road, Merced, CA 95343, USA
- Daniel J Levitin
- Department of Psychology, McGill University, 1205 Avenue Penfield, Montreal, QC H3A 1B1, Canada
- John S Oghalai
- Department of Otolaryngology - Head and Neck Surgery, Stanford University, Stanford, CA 94305, USA
27
Shtyrov YY, Stroganova TA. When ultrarapid is ultrarapid: on importance of temporal precision in neuroscience of language. Front Hum Neurosci 2015; 9:576. [PMID: 26539098 PMCID: PMC4612669 DOI: 10.3389/fnhum.2015.00576] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2015] [Accepted: 10/04/2015] [Indexed: 11/21/2022] Open
Affiliation(s)
- Yury Y Shtyrov
- Center of Functionally Integrative Neuroscience (CFIN), Institute for Clinical Medicine, Aarhus University, Aarhus, Denmark; Centre for Cognition and Decision Making, NRU Higher School of Economics, Moscow, Russia
- Tatyana A Stroganova
- Moscow MEG Center, Moscow State University for Psychology and Education, Moscow, Russia
28
Nourski KV, Steinschneider M, Rhone AE, Oya H, Kawasaki H, Howard MA, McMurray B. Sound identification in human auditory cortex: Differential contribution of local field potentials and high gamma power as revealed by direct intracranial recordings. Brain Lang 2015; 148:37-50. [PMID: 25819402 PMCID: PMC4556541 DOI: 10.1016/j.bandl.2015.03.003] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/17/2014] [Revised: 02/05/2015] [Accepted: 03/03/2015] [Indexed: 06/01/2023]
Abstract
High gamma power has become the principal means of assessing auditory cortical activation in human intracranial studies, albeit at the expense of low frequency local field potentials (LFPs). It is unclear whether limiting analyses to high gamma impedes the ability to clarify auditory cortical organization. We compared the two measures obtained from posterolateral superior temporal gyrus (PLST) and evaluated their relative utility in sound categorization. Subjects were neurosurgical patients undergoing invasive monitoring for medically refractory epilepsy. Stimuli (consonant-vowel syllables varying in voicing and place of articulation and control tones) elicited robust evoked potentials and high gamma activity on PLST. LFPs had greater across-subject variability, yet yielded higher classification accuracy relative to high gamma power. Classification was enhanced by including temporal detail of LFPs and combining LFP and high gamma. We conclude that future studies should consider utilizing both LFP and high gamma when investigating the functional organization of human auditory cortex.
Affiliation(s)
- Kirill V Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
- Mitchell Steinschneider
- Department of Neurology, Albert Einstein College of Medicine, New York, NY 10461, USA; Department of Neuroscience, Albert Einstein College of Medicine, New York, NY 10461, USA
- Ariane E Rhone
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
- Hiroyuki Oya
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
- Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
- Matthew A Howard
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
- Bob McMurray
- Department of Psychology, The University of Iowa, Iowa City, IA 52242, USA; Department of Communication Sciences and Disorders, The University of Iowa, Iowa City, IA 52242, USA; Department of Linguistics, The University of Iowa, Iowa City, IA 52242, USA
29
Zoefel B, VanRullen R. EEG oscillations entrain their phase to high-level features of speech sound. Neuroimage 2015; 124:16-23. [PMID: 26341026 DOI: 10.1016/j.neuroimage.2015.08.054] [Citation(s) in RCA: 66] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2015] [Revised: 08/07/2015] [Accepted: 08/19/2015] [Indexed: 10/23/2022] Open
Abstract
Phase entrainment of neural oscillations, the brain's adjustment to rhythmic stimulation, is a central component in recent theories of speech comprehension: the alignment between brain oscillations and speech sound improves speech intelligibility. However, phase entrainment to everyday speech sound could also be explained by oscillations passively following the low-level periodicities (e.g., in sound amplitude and spectral content) of auditory stimulation, and not by an adjustment to the speech rhythm per se. Recently, using novel speech/noise mixture stimuli, we have shown that behavioral performance can entrain to speech sound even when high-level features (including phonetic information) are not accompanied by fluctuations in sound amplitude and spectral content. In the present study, we report that neural phase entrainment might underlie our behavioral findings. We observed phase-locking between electroencephalogram (EEG) and speech sound in response not only to original (unprocessed) speech but also to our constructed "high-level" speech/noise mixture stimuli. Phase entrainment to original speech and speech/noise sound did not differ in the degree of entrainment, but rather in the actual phase difference between EEG signal and sound. Phase entrainment was not abolished when speech/noise stimuli were presented in reverse (which disrupts semantic processing), indicating that acoustic (rather than linguistic) high-level features play a major role in the observed neural entrainment. Our results provide further evidence for phase entrainment as a potential mechanism underlying speech processing and segmentation, and for the involvement of high-level processes in the adjustment to the rhythm of speech.
Affiliation(s)
- Benedikt Zoefel
- Université Paul Sabatier, Toulouse, France; Centre de Recherche Cerveau et Cognition (CerCo), CNRS, UMR5549, Pavillon Baudot CHU Purpan, BP 25202, 31052 Toulouse Cedex, France
- Rufin VanRullen
- Université Paul Sabatier, Toulouse, France; Centre de Recherche Cerveau et Cognition (CerCo), CNRS, UMR5549, Pavillon Baudot CHU Purpan, BP 25202, 31052 Toulouse Cedex, France