1. Barkasi M, Bansal A, Jörges B, Harris LR. Online reach adjustments induced by real-time movement sonification. Hum Mov Sci 2024; 96:103250. PMID: 38964027. DOI: 10.1016/j.humov.2024.103250.
Abstract
Movement sonification can improve motor control in both healthy subjects (e.g., learning or refining a sport skill) and those with sensorimotor deficits (e.g., stroke patients and deafferented individuals). It is not known whether improved motor control and learning from movement sonification are driven by feedback-based real-time ("online") trajectory adjustments, by adjustments to internal models over multiple trials, or by both. We searched for evidence of online trajectory adjustments (muscle twitches) in response to movement sonification feedback by comparing the kinematics and error of reaches made with online (i.e., real-time) and terminal sonification feedback. We found that reaches made with online feedback were significantly more jerky than reaches made with terminal feedback, indicating increased muscle twitching (i.e., online trajectory adjustment). Using a between-subjects design, we found that online feedback was associated with better motor learning of a reach path and target than terminal feedback; however, using a within-subjects design, we found that switching participants who had learned with online sonification feedback to terminal feedback was associated with a decrease in error. Thus, our results suggest that, with our task and sonification, movement sonification leads to online trajectory adjustments that improve internal models over multiple trials but are not themselves helpful online corrections.
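The jerkiness comparison in this abstract rests on a standard kinematic quantity: jerk, the third time-derivative of position. As a hedged illustration only (the sampling rate, trajectories, and mean-squared-jerk metric below are assumptions for the sketch, not the authors' actual pipeline), a reach trace's jerkiness might be quantified like this:

```python
import numpy as np

def mean_squared_jerk(position, fs):
    """Mean squared jerk of a 1-D position trace sampled at fs Hz.

    Jerk is the third time-derivative of position; larger values
    indicate more abrupt, twitch-like trajectory corrections.
    """
    dt = 1.0 / fs
    velocity = np.gradient(position, dt)
    acceleration = np.gradient(velocity, dt)
    jerk = np.gradient(acceleration, dt)
    return float(np.mean(jerk ** 2))

# Toy example: a smooth reach vs. the same reach with a superimposed twitch.
fs = 500  # Hz (assumed)
t = np.linspace(0, 1, fs)
smooth = 0.3 * (10 * t**3 - 15 * t**4 + 6 * t**5)   # minimum-jerk profile
twitchy = smooth + 0.005 * np.sin(2 * np.pi * 15 * t)  # small 15 Hz wobble
assert mean_squared_jerk(twitchy, fs) > mean_squared_jerk(smooth, fs)
```

Even a millimeter-scale oscillation dominates the metric, because differentiating three times amplifies high-frequency content by the cube of its angular frequency.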
Affiliation(s)
- Michael Barkasi
- Centre for Vision Research, York University, 4700 Keele Street, Toronto M3J 1P3, Ontario, Canada; Department of Neuroscience, Washington University School of Medicine in St. Louis, 660 S. Euclid Ave., St. Louis 63110-1010, MO, USA
- Ambika Bansal
- Centre for Vision Research, York University, 4700 Keele Street, Toronto M3J 1P3, Ontario, Canada
- Björn Jörges
- Centre for Vision Research, York University, 4700 Keele Street, Toronto M3J 1P3, Ontario, Canada
- Laurence R Harris
- Centre for Vision Research, York University, 4700 Keele Street, Toronto M3J 1P3, Ontario, Canada
2. Keshishian M, Akkol S, Herrero J, Bickel S, Mehta AD, Mesgarani N. Joint, distributed and hierarchically organized encoding of linguistic features in the human auditory cortex. Nat Hum Behav 2023; 7:740-753. PMID: 36864134. PMCID: PMC10417567. DOI: 10.1038/s41562-023-01520-0.
Abstract
The precise role of the human auditory cortex in representing speech sounds and transforming them to meaning is not yet fully understood. Here we used intracranial recordings from the auditory cortex of neurosurgical patients as they listened to natural speech. We found an explicit, temporally ordered and anatomically distributed neural encoding of multiple linguistic features, including phonetic features, prelexical phonotactics, word frequency, and lexical-phonological and lexical-semantic information. Grouping neural sites on the basis of their encoded linguistic features revealed a hierarchical pattern, with distinct representations of prelexical and postlexical features distributed across various auditory areas. While sites with longer response latencies and greater distance from the primary auditory cortex encoded higher-level linguistic features, the encoding of lower-level features was preserved and not discarded. Our study reveals a cumulative mapping of sound to meaning and provides empirical evidence for validating neurolinguistic and psycholinguistic models of spoken word recognition that preserve the acoustic variations in speech.
Affiliation(s)
- Menoua Keshishian
- Department of Electrical Engineering, Columbia University, New York, NY, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Serdar Akkol
- Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY, USA
- Jose Herrero
- Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY, USA
- Department of Neurosurgery, Hofstra-Northwell School of Medicine, Manhasset, NY, USA
- Stephan Bickel
- Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY, USA
- Department of Neurosurgery, Hofstra-Northwell School of Medicine, Manhasset, NY, USA
- Ashesh D Mehta
- Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY, USA
- Department of Neurosurgery, Hofstra-Northwell School of Medicine, Manhasset, NY, USA
- Nima Mesgarani
- Department of Electrical Engineering, Columbia University, New York, NY, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
3. Manting CL, Gulyas B, Ullén F, Lundqvist D. Steady-state responses to concurrent melodies: source distribution, top-down, and bottom-up attention. Cereb Cortex 2023; 33:3053-3066. PMID: 35858223. PMCID: PMC10016039. DOI: 10.1093/cercor/bhac260.
Abstract
Humans can direct attentional resources to a single sound occurring simultaneously among others to extract the most behaviourally relevant information present. To investigate this cognitive phenomenon precisely, we used frequency-tagging to separate the neural auditory steady-state responses (ASSRs) traceable to each auditory stimulus from the neural mix elicited by multiple simultaneous sounds. Using a mixture of two frequency-tagged melody streams, we instructed participants to selectively attend to one stream or the other while following the development of the pitch contour. Bottom-up attention towards either stream was also manipulated with salient changes in pitch. Distributed source analyses of magnetoencephalography measurements showed that the ASSR enhancement from top-down driven attention was strongest at the left frontal cortex, while that from bottom-up driven attention was dominant at the right temporal cortex. Furthermore, the degree of ASSR suppression from simultaneous stimuli varied across cortical lobes and hemispheres. The ASSR source distribution changed from temporal dominance during single-stream perception to proportionally more activity in frontal and centro-parietal cortical regions when listening to simultaneous streams. These findings are a step towards studying cognition in more complex and naturalistic soundscapes using frequency-tagging.
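Frequency-tagging works because amplitude-modulating each stream at a distinct rate makes its ASSR appear at a known spectral line, so responses to the two streams can be separated from one recorded mixture by reading out power at each tag frequency. A minimal sketch of that readout step on synthetic data (the tag frequencies, durations, and "neural" signal are illustrative assumptions, not the study's parameters):

```python
import numpy as np

fs = 1000            # sampling rate, Hz (assumed)
dur = 4.0            # seconds of recording
f1, f2 = 39.0, 43.0  # illustrative tag (modulation) frequencies
t = np.arange(int(fs * dur)) / fs

# Synthetic "neural" mixture: stream 1's ASSR is stronger than stream 2's,
# as if attention were directed to stream 1, plus broadband noise.
rng = np.random.default_rng(0)
signal = (2.0 * np.sin(2 * np.pi * f1 * t)
          + 0.5 * np.sin(2 * np.pi * f2 * t)
          + rng.normal(0, 1.0, t.size))

# Read out the spectral amplitude at each tag frequency from the mixture.
spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
p1 = spectrum[np.argmin(np.abs(freqs - f1))]
p2 = spectrum[np.argmin(np.abs(freqs - f2))]
assert p1 > p2  # the stronger (attended) stream shows the larger tagged response
```

Because each tagged response is confined to a narrow spectral line, even a modest recording length concentrates it well above the broadband noise floor.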
Affiliation(s)
- Balazs Gulyas
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm 17177, Sweden
- Cognitive Neuroimaging Centre (CoNiC), Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 636921, Singapore
- Fredrik Ullén
- Department of Neuroscience, Karolinska Institutet, Stockholm 17177, Sweden
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt 60322, Germany
- Daniel Lundqvist
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm 17177, Sweden
4. Malone SM, Harper J, Iacono WG. Longitudinal stability and change in time-frequency measures from an oddball task during adolescence and early adulthood. Psychophysiology 2023; 60:e14200. PMID: 36281995. PMCID: PMC9868093. DOI: 10.1111/psyp.14200.
Abstract
Time-frequency representations of electroencephalographic signals lend themselves to a granular analysis of cognitive and psychological processes. Characterizing developmental trajectories of time-frequency measures can thus inform us about the development of the processes involved as well as correlated traits and behaviors. We decomposed electroencephalographic (EEG) activity in a large sample of individuals (N = 1692; 917 females), assessed at approximately 3-year intervals from the age of 11 to their mid-20s. Participants completed an oddball task that elicits a robust P3 response. Principal component analysis served to identify the primary dimensions of time-frequency energy. Component loadings were virtually identical across assessment waves. A common and stable set of time-frequency dynamics thus characterized EEG activity throughout this age range. Trajectories of changes in component scores suggest that aspects of brain development reflected in these components comprise two distinct phases, with marked decreases in component amplitude throughout much of adolescence followed by smaller yet significant rates of decreases into early adulthood. Although the structure of time-frequency activity was stable throughout adolescence and early adulthood, we observed subtle change in component loadings as well. Our findings suggest that striking developmental change in event-related potentials emerges through a gradual change in the magnitude and timing of a stable set of dimensions of time-frequency activity, illustrating the usefulness of time-frequency representations of EEG signals and longitudinal designs for understanding brain development. In addition, we provide proof of concept that trajectories of time-frequency activity can serve as potential endophenotypes for childhood externalizing psychopathology and alcohol use in adolescence and early adulthood.
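The decomposition step described above, PCA identifying primary dimensions of time-frequency energy with loadings compared across waves, can be sketched on synthetic data. Everything below (array shapes, the single embedded component, noise level) is a made-up illustration of the general technique, not the study's data or settings:

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_times, n_freqs = 200, 50, 20

# Synthetic time-frequency surfaces sharing one latent component
# (a localized burst of energy) with subject-varying amplitude plus noise.
tt, ff = np.meshgrid(np.linspace(-1, 1, n_times),
                     np.linspace(-1, 1, n_freqs), indexing="ij")
component = np.exp(-(tt**2 + ff**2) / 0.1)       # shared loading pattern
scores_true = rng.normal(1.0, 0.3, n_subjects)   # per-subject amplitude
data = (scores_true[:, None, None] * component
        + 0.05 * rng.normal(size=(n_subjects, n_times, n_freqs)))

# PCA via SVD of the mean-centered, vectorized surfaces.
X = data.reshape(n_subjects, -1)
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
loading1 = Vt[0].reshape(n_times, n_freqs)  # first component's loading map
scores1 = Xc @ Vt[0]                        # per-subject component scores

# The recovered loading matches the embedded pattern (up to sign).
corr = np.corrcoef(loading1.ravel(), component.ravel())[0, 1]
assert abs(corr) > 0.95
```

In a longitudinal design, the `scores1` values would be computed per assessment wave and their trajectories modeled over age, while comparing `loading1` across waves tests the stability of the component structure itself.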
Affiliation(s)
- Stephen M. Malone
- Department of Psychology, University of Minnesota – Twin Cities, 75 East River Road, Minneapolis, MN 55455, USA
- Jeremy Harper
- Department of Psychiatry and Behavioral Sciences, University of Minnesota – Twin Cities, 2450 Riverside Avenue South, F282/2A West Building, Minneapolis, MN 55454, USA
- William G. Iacono
- Department of Psychology, University of Minnesota – Twin Cities, 75 East River Road, Minneapolis, MN 55455, USA
5. Benner J, Reinhardt J, Christiner M, Wengenroth M, Stippich C, Schneider P, Blatow M. Temporal hierarchy of cortical responses reflects core-belt-parabelt organization of auditory cortex in musicians. Cereb Cortex 2023:7030622. PMID: 36786655. DOI: 10.1093/cercor/bhad020.
Abstract
Human auditory cortex (AC) organization resembles the core-belt-parabelt organization of nonhuman primates. Previous studies assessed mostly spatial characteristics; temporal aspects, however, have so far received little consideration. We employed co-registration of functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) in musicians with and without absolute pitch (AP) to achieve spatial and temporal segregation of human auditory responses. First, individual fMRI activations induced by complex harmonic tones were consistently identified in four distinct regions of interest within AC, namely medial Heschl's gyrus (HG), lateral HG, anterior superior temporal gyrus (STG), and planum temporale (PT). Second, we analyzed the temporal dynamics of individual MEG responses at the locations of the corresponding fMRI activations. In the AP group, the auditory evoked P2 onset occurred ~25 ms earlier in the right than in the left PT and ~15 ms earlier in the right than in the left anterior STG. This effect was consistent at the individual level and correlated with AP proficiency. Based on the combined application of MEG and fMRI measurements, we were able for the first time to demonstrate a characteristic temporal hierarchy ("chronotopy") of human auditory regions in relation to specific auditory abilities, reflecting the prediction of serial processing from nonhuman studies.
Affiliation(s)
- Jan Benner
- Department of Neuroradiology and Section of Biomagnetism, University of Heidelberg Hospital, Heidelberg, Germany
- Julia Reinhardt
- Department of Cardiology and Cardiovascular Research Institute Basel (CRIB), University Hospital Basel, University of Basel, Basel, Switzerland
- Department of Orthopedic Surgery and Traumatology, University Hospital Basel, University of Basel, Basel, Switzerland
- Markus Christiner
- Centre for Systematic Musicology, University of Graz, Graz, Austria
- Department of Musicology, Vitols Jazeps Latvian Academy of Music, Riga, Latvia
- Martina Wengenroth
- Department of Neuroradiology, University Medical Center Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
- Christoph Stippich
- Department of Neuroradiology and Radiology, Kliniken Schmieder, Allensbach, Germany
- Peter Schneider
- Department of Neuroradiology and Section of Biomagnetism, University of Heidelberg Hospital, Heidelberg, Germany
- Centre for Systematic Musicology, University of Graz, Graz, Austria
- Department of Musicology, Vitols Jazeps Latvian Academy of Music, Riga, Latvia
- Maria Blatow
- Section of Neuroradiology, Department of Radiology and Nuclear Medicine, Neurocenter, Cantonal Hospital Lucerne, University of Lucerne, Lucerne, Switzerland
6. Rahimi V, Mohammadkhani G, Alaghband Rad J, Mousavi SZ, Khalili ME. Modulation of auditory temporal processing, speech in noise perception, auditory-verbal memory, and reading efficiency by anodal tDCS in children with dyslexia. Neuropsychologia 2022; 177:108427. PMID: 36410540. DOI: 10.1016/j.neuropsychologia.2022.108427.
Abstract
Dyslexia is a neurodevelopmental disorder that is prevalent in children. An estimated 30-50% of individuals diagnosed with dyslexia also manifest an auditory perceptual deficit characteristic of auditory processing disorder (APD). Some studies suggest that deficits in basic auditory processing can lead to the phonological deficits that are the most prominent cause of dyslexia. Thus, in some cases, there may be interrelationships between dyslexia and some aspects of central auditory processing. In recent years, transcranial direct current stimulation (tDCS) has been used as a safe method for modulating aspects of central auditory processing in healthy adults and reading skills in children with dyslexia. The objectives of our study were therefore to investigate the effect of tDCS on the modulation of different aspects of central auditory processing, aspects of reading, and the relationship between these two domains in dyslexic children with APD. A within-subjects design was employed to investigate the effect of two electrode arrays (anode over the left STG (auditory cortex) with the cathode on the right shoulder, and anode over the left STG with the cathode over the right STG) on auditory temporal processing, speech-in-noise perception, short-term auditory memory, and high-frequency word, low-frequency word, pseudoword, and text reading. The results of this clinical trial showed modulation of the central auditory processing variables and of reading accuracy and speed relative to the control and sham conditions for both electrode arrays. Our results also showed that improvements in the accuracy and speed of text reading, as well as in the accuracy of pseudoword reading, were related to improvements in speech-in-noise perception and temporal processing.
These results can help clarify the neurobiological basis of dyslexia, in particular the hypothesized role of basic auditory processing and, by extension, of the auditory cortex in dyslexia. They might also provide a framework to facilitate behavioral rehabilitation in dyslexic children with APD.
Affiliation(s)
- Vida Rahimi
- Department of Audiology, School of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran
- Ghassem Mohammadkhani
- Department of Audiology, School of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran
- Javad Alaghband Rad
- Department of Psychiatry, Tehran University of Medical Sciences, Roozbeh Hospital, Tehran, Iran
- Seyyedeh Zohre Mousavi
- Department of Speech Therapy, School of Rehabilitation, Iran University of Medical Sciences, Tehran, Iran
- Mohammad Ehsan Khalili
- Department of Audiology, School of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran
7. Simon JZ, Commuri V, Kulasingham JP. Time-locked auditory cortical responses in the high-gamma band: A window into primary auditory cortex. Front Neurosci 2022; 16:1075369. PMID: 36570848. PMCID: PMC9773383. DOI: 10.3389/fnins.2022.1075369.
Abstract
Primary auditory cortex is a critical stage in the human auditory pathway, a gateway between subcortical and higher-level cortical areas. Receiving the output of all subcortical processing, it sends its output on to higher-level cortex. Non-invasive physiological recordings of primary auditory cortex using electroencephalography (EEG) and magnetoencephalography (MEG), however, may not have sufficient specificity to separate responses generated in primary auditory cortex from those generated in underlying subcortical areas or neighboring cortical areas. This limitation is important for investigations of effects of top-down processing (e.g., selective-attention-based) on primary auditory cortex: higher-level areas are known to be strongly influenced by top-down processes, but subcortical areas are often assumed to perform strictly bottom-up processing. Fortunately, recent advances have made it easier to isolate the neural activity of primary auditory cortex from other areas. In this perspective, we focus on time-locked responses to stimulus features in the high gamma band (70-150 Hz) and with early cortical latency (∼40 ms), intermediate between subcortical and higher-level areas. We review recent findings from physiological studies employing either repeated simple sounds or continuous speech, obtaining either a frequency following response (FFR) or temporal response function (TRF). The potential roles of top-down processing are underscored, and comparisons with invasive intracranial EEG (iEEG) and animal model recordings are made. We argue that MEG studies employing continuous speech stimuli may offer particular benefits, in that only a few minutes of speech generates robust high gamma responses from bilateral primary auditory cortex, and without measurable interference from subcortical or higher-level areas.
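The temporal response function (TRF) mentioned above is a linear filter mapping a continuous stimulus feature to the recorded response, commonly estimated by ridge regression over time-lagged copies of the stimulus. A hedged sketch on synthetic data (the lag range, regularization strength, and toy kernel are assumptions for illustration, not the reviewed studies' settings):

```python
import numpy as np

def estimate_trf(stimulus, response, n_lags, alpha=1.0):
    """Estimate a TRF by ridge regression over time-lagged copies of
    the stimulus: response[t] ≈ sum_k trf[k] * stimulus[t - k]."""
    n = stimulus.size
    X = np.zeros((n, n_lags))
    for k in range(n_lags):
        X[k:, k] = stimulus[:n - k]  # column k = stimulus delayed by k samples
    XtX = X.T @ X + alpha * np.eye(n_lags)  # ridge-regularized normal equations
    return np.linalg.solve(XtX, X.T @ response)

rng = np.random.default_rng(2)
fs = 100                   # Hz (assumed)
stim = rng.normal(size=20 * fs)
true_trf = np.zeros(10)
true_trf[4] = 1.0          # toy kernel: a pure 40 ms delay
resp = (np.convolve(stim, true_trf)[:stim.size]
        + 0.1 * rng.normal(size=stim.size))

trf = estimate_trf(stim, resp, n_lags=10, alpha=1.0)
assert np.argmax(np.abs(trf)) == 4  # peak recovered at the 40 ms lag
```

The same lagged-regression machinery underlies the continuous-speech analyses the perspective reviews; real pipelines differ mainly in the feature (e.g., speech envelope), the lag window, and how the regularization is cross-validated.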
Affiliation(s)
- Jonathan Z. Simon
- Department of Electrical and Computer Engineering, University of Maryland, College Park, College Park, MD, United States
- Department of Biology, University of Maryland, College Park, College Park, MD, United States
- Institute for Systems Research, University of Maryland, College Park, College Park, MD, United States
- Vrishab Commuri
- Department of Electrical and Computer Engineering, University of Maryland, College Park, College Park, MD, United States
8. Jaroszynski C, Amorim-Leite R, Deman P, Perrone-Bertolotti M, Chabert F, Job-Chapron AS, Minotti L, Hoffmann D, David O, Kahane P. Brain mapping of auditory hallucinations and illusions induced by direct intracortical electrical stimulation. Brain Stimul 2022; 15:1077-1087. PMID: 35952963. DOI: 10.1016/j.brs.2022.08.002.
Abstract
BACKGROUND: The exact architecture of the human auditory cortex remains a subject of debate, with discrepancies between functional and microstructural studies. In a hierarchical framework for sensory perception, simple sound perception is expected to take place in the primary auditory cortex, while the processing of complex, or more integrated, perceptions is proposed to rely on associative and higher-order cortices.
OBJECTIVES: We hypothesize that auditory symptoms induced by direct electrical stimulation (DES) offer a window into the architecture of the brain networks involved in auditory hallucinations and illusions. Intracranial recordings of these evoked perceptions, at varying levels of integration, provide evidence against which to assess the theoretical model.
METHODS: We analyzed SEEG recordings from 50 epileptic patients presenting auditory symptoms induced by DES. First, using the Juelich cytoarchitectonic parcellation, we quantified which regions induced auditory symptoms when stimulated (ROI approach). Then, for each evoked auditory symptom type (illusion or hallucination), we mapped the cortical networks showing concurrent high-frequency activity modulation (HFA approach).
RESULTS: Although on average illusions were found more laterally and hallucinations more posteromedially in the temporal lobe, both perceptions were elicited at all levels of the sensory hierarchy, with mixed responses found in the overlap. The spatial range was larger for illusions in both the ROI and HFA approaches. The limbic system was specific to the hallucination network, and the inferior parietal lobule was specific to the illusion network.
DISCUSSION: Our results confirm a network-based organization underlying conscious sound perception, for both simple and complex components. While symptom localization is interesting from an epilepsy semiology perspective, the hallucination-specific modulation of the limbic system is particularly relevant to tinnitus and schizophrenia.
Affiliation(s)
- Chloé Jaroszynski
- Univ. Grenoble Alpes, CHU Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, GIN, 38000, Grenoble, France
- Ricardo Amorim-Leite
- Univ. Grenoble Alpes, CHU Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, GIN, 38000, Grenoble, France
- Pierre Deman
- Univ. Grenoble Alpes, CHU Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, GIN, 38000, Grenoble, France
- Marcela Perrone-Bertolotti
- Univ. Grenoble Alpes, CNRS, UMR5105, Laboratoire Psychologie et NeuroCognition, LPNC, 38000, Grenoble, France
- Florian Chabert
- Univ. Grenoble Alpes, CHU Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, GIN, 38000, Grenoble, France
- Anne-Sophie Job-Chapron
- Univ. Grenoble Alpes, CHU Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, GIN, 38000, Grenoble, France
- Lorella Minotti
- Univ. Grenoble Alpes, CHU Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, GIN, 38000, Grenoble, France
- Dominique Hoffmann
- Univ. Grenoble Alpes, CHU Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, GIN, 38000, Grenoble, France
- Olivier David
- Univ. Grenoble Alpes, CHU Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, GIN, 38000, Grenoble, France; Aix Marseille Univ, Inserm, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Philippe Kahane
- Univ. Grenoble Alpes, CHU Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, GIN, 38000, Grenoble, France
9. Nourski KV, Steinschneider M, Rhone AE, Kovach CK, Kawasaki H, Howard MA. Gamma Activation and Alpha Suppression within Human Auditory Cortex during a Speech Classification Task. J Neurosci 2022; 42:5034-5046. PMID: 35534226. PMCID: PMC9233444. DOI: 10.1523/jneurosci.2187-21.2022.
Abstract
The dynamics of information flow within the auditory cortical hierarchy associated with speech processing and the emergence of hemispheric specialization remain incompletely understood. To study these questions with high spatiotemporal resolution, intracranial recordings were obtained in 29 human neurosurgical patients of both sexes while subjects performed a semantic classification task. Neural activity was recorded from the posteromedial portion of Heschl's gyrus (HGPM), the anterolateral portion of Heschl's gyrus (HGAL), planum temporale (PT), planum polare, insula, and superior temporal gyrus (STG). Responses to monosyllabic words exhibited early gamma power increases and a later suppression of alpha power, envisioned to represent feedforward activity and decreased feedback signaling, respectively. Gamma activation and alpha suppression had distinct magnitude and latency profiles. HGPM and PT had the strongest gamma responses with the shortest onset latencies, indicating that they are the earliest auditory cortical processing stages. The origin of attenuated top-down influences in auditory cortex, as indexed by alpha suppression, was in STG and HGAL. Gamma responses and alpha suppression were typically larger to nontarget words than to tones. Alpha suppression was uniformly greater to target than to nontarget stimuli. Hemispheric bias for words versus tones and for target versus nontarget words, when present, was left-lateralized. Better task performance was associated with increased gamma activity in the left PT and greater alpha suppression in HGPM and HGAL bilaterally. The prominence of alpha suppression during semantic classification and its accessibility to noninvasive electrophysiologic studies suggest that this measure is a promising index of auditory cortical speech processing.
SIGNIFICANCE STATEMENT: Understanding the dynamics of cortical speech processing requires the use of active tasks. This is the first comprehensive intracranial electroencephalography study to examine cortical activity within the superior temporal plane, lateral superior temporal gyrus, and the insula during a semantic classification task. Distinct gamma activation and alpha suppression profiles clarify the functional organization of feedforward and feedback processing within the auditory cortical hierarchy. Asymmetries in cortical speech processing emerge at early processing stages. Relationships between cortical activity and task performance are interpreted in the context of current models of speech processing. The results lay the groundwork for iEEG studies using connectivity measures of the bidirectional information flow within the auditory processing hierarchy.
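Band-limited power measures like the gamma activation above are typically computed by band-pass filtering and taking the analytic-signal envelope. A minimal sketch of that envelope step (the filter order, band edges, and synthetic burst below are illustrative assumptions, not this study's pipeline):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000  # sampling rate, Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)

# Synthetic trace: ongoing noise plus an 80 Hz "gamma burst" from 1.0-1.5 s.
rng = np.random.default_rng(3)
x = 0.5 * rng.normal(size=t.size)
burst = (t >= 1.0) & (t < 1.5)
x[burst] += 2.0 * np.sin(2 * np.pi * 80 * t[burst])

# Band-pass 70-150 Hz, then take the analytic-signal envelope as the
# instantaneous gamma amplitude (a common high-gamma power estimate).
b, a = butter(4, [70 / (fs / 2), 150 / (fs / 2)], btype="band")
envelope = np.abs(hilbert(filtfilt(b, a, x)))

gamma_in = envelope[burst].mean()
gamma_out = envelope[~burst].mean()
assert gamma_in > 2 * gamma_out  # the envelope flags the burst interval
```

An alpha-suppression index would use the same machinery with alpha band edges (roughly 8-14 Hz) and a decrease, rather than increase, of the envelope relative to baseline.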
Affiliation(s)
- Kirill V Nourski
- Department of Neurosurgery, University of Iowa, Iowa City, Iowa 52242
- Iowa Neuroscience Institute, University of Iowa, Iowa City, Iowa 52242
- Mitchell Steinschneider
- Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, New York 10461
- Ariane E Rhone
- Department of Neurosurgery, University of Iowa, Iowa City, Iowa 52242
- Hiroto Kawasaki
- Department of Neurosurgery, University of Iowa, Iowa City, Iowa 52242
- Matthew A Howard
- Department of Neurosurgery, University of Iowa, Iowa City, Iowa 52242
- Iowa Neuroscience Institute, University of Iowa, Iowa City, Iowa 52242
- Pappajohn Biomedical Institute, University of Iowa, Iowa City, Iowa 52242
10. Liu Q, Ulloa A, Horwitz B. The Spatiotemporal Neural Dynamics of Intersensory Attention Capture of Salient Stimuli: A Large-Scale Auditory-Visual Modeling Study. Front Comput Neurosci 2022; 16:876652. PMID: 35645750. PMCID: PMC9133449. DOI: 10.3389/fncom.2022.876652.
Abstract
The spatiotemporal dynamics of the neural mechanisms underlying endogenous (top-down) and exogenous (bottom-up) attention, and how attention is controlled or allocated in intersensory perception, are not fully understood. We investigated these issues using a biologically realistic large-scale neural network model of visual-auditory object processing and short-term memory. We modeled the temporally changing neuronal mechanisms for the control of endogenous and exogenous attention and incorporated them into this visual-auditory object-processing model. The model successfully performed various bimodal working memory tasks and produced simulated behavioral and neural results consistent with experimental findings. Simulated fMRI data were also generated, constituting predictions that human experiments could test. Furthermore, in our visual-auditory bimodality simulations, we found that increased working memory load in one modality reduced distraction from the other modality, and we propose a possible network mediating this effect based on our model.
Affiliation(s)
- Qin Liu
- Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD, United States
- Department of Physics, University of Maryland, College Park, College Park, MD, United States
- Antonio Ulloa
- Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD, United States
- Center for Information Technology, National Institutes of Health, Bethesda, MD, United States
- Barry Horwitz
- Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD, United States
11. Lowe MX, Mohsenzadeh Y, Lahner B, Charest I, Oliva A, Teng S. Cochlea to categories: The spatiotemporal dynamics of semantic auditory representations. Cogn Neuropsychol 2021; 38:468-489. PMID: 35729704. PMCID: PMC10589059. DOI: 10.1080/02643294.2022.2085085.
Abstract
How does the auditory system categorize natural sounds? Here we apply multimodal neuroimaging to illustrate the progression from acoustic to semantically dominated representations. Combining magnetoencephalographic (MEG) and functional magnetic resonance imaging (fMRI) scans of observers listening to naturalistic sounds, we found superior temporal responses beginning ∼55 ms post-stimulus onset, spreading to extratemporal cortices by ∼100 ms. Early regions were distinguished less by onset/peak latency than by functional properties and overall temporal response profiles. Early acoustically-dominated representations trended systematically toward category dominance over time (after ∼200 ms) and space (beyond primary cortex). Semantic category representation was spatially specific: Vocalizations were preferentially distinguished in frontotemporal voice-selective regions and the fusiform; scenes and objects were distinguished in parahippocampal and medial place areas. Our results are consistent with real-world events coded via an extended auditory processing hierarchy, in which acoustic representations rapidly enter multiple streams specialized by category, including areas typically considered visual cortex.
Affiliation(s)
- Matthew X. Lowe: Computer Science and Artificial Intelligence Lab (CSAIL), MIT, Cambridge, MA; Unlimited Sciences, Colorado Springs, CO
- Yalda Mohsenzadeh: Computer Science and Artificial Intelligence Lab (CSAIL), MIT, Cambridge, MA; The Brain and Mind Institute, The University of Western Ontario, London, ON, Canada; Department of Computer Science, The University of Western Ontario, London, ON, Canada
- Benjamin Lahner: Computer Science and Artificial Intelligence Lab (CSAIL), MIT, Cambridge, MA
- Ian Charest: Département de Psychologie, Université de Montréal, Montréal, Québec, Canada; Center for Human Brain Health, University of Birmingham, UK
- Aude Oliva: Computer Science and Artificial Intelligence Lab (CSAIL), MIT, Cambridge, MA
- Santani Teng: Computer Science and Artificial Intelligence Lab (CSAIL), MIT, Cambridge, MA; Smith-Kettlewell Eye Research Institute (SKERI), San Francisco, CA
12
Khalighinejad B, Patel P, Herrero JL, Bickel S, Mehta AD, Mesgarani N. Functional characterization of human Heschl's gyrus in response to natural speech. Neuroimage 2021; 235:118003. [PMID: 33789135] [PMCID: PMC8608271] [DOI: 10.1016/j.neuroimage.2021.118003]
Abstract
Heschl's gyrus (HG) is a brain area that includes the primary auditory cortex in humans. Due to the limitations in obtaining direct neural measurements from this region during naturalistic speech listening, the functional organization and the role of HG in speech perception remain uncertain. Here, we used intracranial EEG to directly record neural activity in HG in eight neurosurgical patients as they listened to continuous speech stories. We studied the spatial distribution of acoustic tuning and the organization of linguistic feature encoding. We found a main gradient of change from posteromedial to anterolateral parts of HG, along which frequency and temporal modulation tuning decreased while phonemic representation, speaker normalization, speech sensitivity, and response latency increased. We did not observe a difference between the two brain hemispheres. These findings reveal a functional role for HG in processing and transforming simple to complex acoustic features and inform neurophysiological models of speech processing in the human auditory cortex.
Affiliation(s)
- Bahar Khalighinejad: Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States; Department of Electrical Engineering, Columbia University, New York, NY, United States
- Prachi Patel: Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States; Department of Electrical Engineering, Columbia University, New York, NY, United States
- Jose L. Herrero: Hofstra Northwell School of Medicine, Manhasset, NY, United States; The Feinstein Institutes for Medical Research, Manhasset, NY, United States
- Stephan Bickel: Hofstra Northwell School of Medicine, Manhasset, NY, United States; The Feinstein Institutes for Medical Research, Manhasset, NY, United States
- Ashesh D. Mehta: Hofstra Northwell School of Medicine, Manhasset, NY, United States; The Feinstein Institutes for Medical Research, Manhasset, NY, United States
- Nima Mesgarani (corresponding author): Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States; Department of Electrical Engineering, Columbia University, New York, NY, United States
13
Nourski KV, Steinschneider M, Rhone AE, Krause BM, Mueller RN, Kawasaki H, Banks MI. Cortical Responses to Vowel Sequences in Awake and Anesthetized States: A Human Intracranial Electrophysiology Study. Cereb Cortex 2021; 31:5435-5448. [PMID: 34117741] [PMCID: PMC8568007] [DOI: 10.1093/cercor/bhab168]
Abstract
Elucidating neural signatures of sensory processing across consciousness states is a major focus in neuroscience. Noninvasive human studies using the general anesthetic propofol reveal differential effects on auditory cortical activity, with a greater impact on nonprimary and auditory-related areas than primary auditory cortex. This study used intracranial electroencephalography to examine cortical responses to vowel sequences during induction of general anesthesia with propofol. Subjects were adult neurosurgical patients with intracranial electrodes placed to identify epileptic foci. Data were collected before electrode removal surgery. Stimuli were vowel sequences presented in a target detection task during awake, sedated, and unresponsive states. Averaged evoked potentials (AEPs) and high gamma (70-150 Hz) power were measured in auditory, auditory-related, and prefrontal cortex. In the awake state, AEPs were found throughout the studied brain areas; high gamma activity was limited to canonical auditory cortex. Sedation led to a decrease in AEP magnitude. Upon loss of consciousness (LOC), there was a decrease in AEP magnitude in the superior temporal gyrus and adjacent auditory-related cortex, a further decrease in core auditory cortex, changes in the temporal structure of responses, and increased trial-to-trial variability. The findings identify putative biomarkers of LOC and serve as a foundation for future investigations of altered sensory processing.
Affiliation(s)
- Kirill V Nourski (correspondence): Department of Neurosurgery, The University of Iowa, 200 Hawkins Dr. 1815 JCP, Iowa City, IA 52242, USA
- Mitchell Steinschneider: Department of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY 10461, USA
- Ariane E Rhone: Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
- Bryan M Krause: Department of Anesthesiology, University of Wisconsin School of Medicine and Public Health, Madison, WI 53705, USA
- Rashmi N Mueller: Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA; Department of Anesthesia, The University of Iowa, Iowa City, IA 52242, USA
- Hiroto Kawasaki: Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
- Matthew I Banks: Department of Anesthesiology, University of Wisconsin School of Medicine and Public Health, Madison, WI 53705, USA; Department of Neuroscience, University of Wisconsin School of Medicine and Public Health, Madison, WI 53705, USA
14
Roswandowitz C, Swanborough H, Frühholz S. Categorizing human vocal signals depends on an integrated auditory-frontal cortical network. Hum Brain Mapp 2021; 42:1503-1517. [PMID: 33615612] [PMCID: PMC7927295] [DOI: 10.1002/hbm.25309]
Abstract
Voice signals are relevant for auditory communication and suggested to be processed in dedicated auditory cortex (AC) regions. While recent reports highlighted an additional role of the inferior frontal cortex (IFC), a detailed description of the integrated functioning of the AC-IFC network and its task relevance for voice processing is missing. Using neuroimaging, we tested sound categorization while human participants either focused on the higher-order vocal-sound dimension (voice task) or the feature-based intensity dimension (loudness task) while listening to the same sound material. We found differential involvement of the AC and IFC depending on the task performed and whether the voice dimension was task relevant or not. First, when comparing neural vocal-sound processing in our task-based design with previously reported passive listening designs, we observed highly similar cortical activations in the AC and IFC. Second, during task-based vocal-sound processing we observed voice-sensitive responses in the AC and IFC, whereas intensity processing was restricted to distinct AC regions. Third, the IFC flexibly adapted to the vocal sounds' task relevance, being active only when the voice dimension was task relevant. Fourth and finally, connectivity modeling revealed that vocal signals, independent of their task relevance, provided significant input to bilateral AC. However, only when attention was on the voice dimension did we find significant modulations of auditory-frontal connections. Our findings suggest an integrated auditory-frontal network is essential for behaviorally relevant vocal-sound processing. The IFC seems to be an important hub of the extended voice network when representing higher-order vocal objects and guiding goal-directed behavior.
Affiliation(s)
- Claudia Roswandowitz: Department of Psychology, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Huw Swanborough: Department of Psychology, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Sascha Frühholz: Department of Psychology, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland; Center for Integrative Human Physiology (ZIHP), University of Zurich, Zurich, Switzerland
15
Huang MX, Huang CW, Harrington DL, Nichols S, Robb-Swan A, Angeles-Quinto A, Le L, Rimmele C, Drake A, Song T, Huang JW, Clifford R, Ji Z, Cheng CK, Lerman I, Yurgil KA, Lee RR, Baker DG. Marked Increases in Resting-State MEG Gamma-Band Activity in Combat-Related Mild Traumatic Brain Injury. Cereb Cortex 2021; 30:283-295. [PMID: 31041986] [DOI: 10.1093/cercor/bhz087]
Abstract
Combat-related mild traumatic brain injury (mTBI) is a leading cause of sustained impairments in military service members and veterans. Recent animal studies show that GABA-ergic parvalbumin-positive interneurons are susceptible to brain injury, with damage causing abnormal increases in spontaneous gamma-band (30-80 Hz) activity. We investigated spontaneous gamma activity in individuals with mTBI using high-resolution resting-state magnetoencephalography source imaging. Participants included 25 symptomatic individuals with chronic combat-related blast mTBI and 35 healthy controls with similar combat experiences. Compared with controls, gamma activity was markedly elevated in mTBI participants throughout frontal, parietal, temporal, and occipital cortices, whereas gamma activity was reduced in ventromedial prefrontal cortex. Across groups, greater gamma activity correlated with poorer performances on tests of executive functioning and visuospatial processing. Many neurocognitive associations, however, were partly driven by the higher incidence of mTBI participants with both higher gamma activity and poorer cognition, suggesting that expansive upregulation of gamma has negative repercussions for cognition particularly in mTBI. This is the first human study to demonstrate abnormal resting-state gamma activity in mTBI. These novel findings suggest the possibility that abnormal gamma activity may be a proxy for GABA-ergic interneuron dysfunction and a promising neuroimaging marker of insidious mild head injuries.
Affiliation(s)
- Ming-Xiong Huang: Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, CA, USA; Department of Radiology, University of California, San Diego, CA, USA
- Charles W Huang: Department of Bioengineering, Stanford University, Stanford, CA, USA
- Deborah L Harrington: Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, CA, USA; Department of Radiology, University of California, San Diego, CA, USA
- Sharon Nichols: Department of Neuroscience, University of California, San Diego, CA, USA
- Ashley Robb-Swan: Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, CA, USA; Department of Radiology, University of California, San Diego, CA, USA
- Annemarie Angeles-Quinto: Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, CA, USA; Department of Radiology, University of California, San Diego, CA, USA
- Lu Le: ASPIRE Center, VASDHS Residential Rehabilitation Treatment Program, San Diego, CA, USA
- Carl Rimmele: ASPIRE Center, VASDHS Residential Rehabilitation Treatment Program, San Diego, CA, USA
- Angela Drake: Cedar Sinai Medical Group Chronic Pain Program, Beverly Hills, CA, USA
- Tao Song: Department of Radiology, University of California, San Diego, CA, USA
- Jeffrey W Huang: Department of Computer Science, Columbia University, New York, NY, USA
- Royce Clifford: Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, CA, USA; Department of Psychiatry, University of California, San Diego, CA, USA; VA Center of Excellence for Stress and Mental Health, San Diego, CA, USA
- Zhengwei Ji: Department of Radiology, University of California, San Diego, CA, USA
- Chung-Kuan Cheng: Department of Computer Science and Engineering, University of California, San Diego, CA, USA
- Imanuel Lerman: Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, CA, USA
- Kate A Yurgil: Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, CA, USA; VA Center of Excellence for Stress and Mental Health, San Diego, CA, USA; Department of Psychological Sciences, Loyola University, New Orleans, LA, USA
- Roland R Lee: Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, CA, USA; Department of Radiology, University of California, San Diego, CA, USA
- Dewleen G Baker: Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, CA, USA; Department of Psychiatry, University of California, San Diego, CA, USA; VA Center of Excellence for Stress and Mental Health, San Diego, CA, USA
16
Neural Correlates of Vocal Auditory Feedback Processing: Unique Insights from Electrocorticography Recordings in a Human Cochlear Implant User. eNeuro 2021; 8:ENEURO.0181-20.2020. [PMID: 33419861] [PMCID: PMC7877459] [DOI: 10.1523/eneuro.0181-20.2020]
Abstract
There is considerable interest in understanding cortical processing and the function of top-down and bottom-up human neural circuits that control speech production. Research efforts to investigate these circuits are aided by analysis of spectro-temporal response characteristics of neural activity recorded by electrocorticography (ECoG). Further, cortical processing may be altered in the case of hearing-impaired cochlear implant (CI) users, as electric excitation of the auditory nerve creates a markedly different neural code for speech compared with that of the functionally intact hearing system. Studies of cortical activity in CI users typically record scalp potentials and are hampered by stimulus artifact contamination and by spatiotemporal filtering imposed by the skull. We present a unique case of a CI user who required direct recordings from the cortical surface using subdural electrodes implanted for epilepsy assessment. Using experimental conditions where the subject vocalized in the presence (CIs ON) or absence (CIs OFF) of auditory feedback, or listened to playback of self-vocalizations without production, we observed ECoG activity primarily in γ (32–70 Hz) and high γ (70–150 Hz) bands at focal regions on the lateral surface of the superior temporal gyrus (STG). High γ band responses differed in their amplitudes across conditions and cortical sites, possibly reflecting different rates of stimulus presentation and differing levels of neural adaptation. STG γ responses to playback and vocalization with auditory feedback were not different from responses to vocalization without feedback, indicating this activity reflects not only auditory, but also attentional, efference-copy, and sensorimotor processing during speech production.
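The γ (32–70 Hz) and high-γ (70–150 Hz) responses above come from standard band-power extraction of the ECoG signal. A minimal sketch of one common approach (band-pass filter plus Hilbert envelope); the filter design, order, and exact band edges here are illustrative assumptions, not the study's pipeline:

```python
# Sketch: high-gamma power envelope from one ECoG channel.
# Assumes the 70-150 Hz band and a 4th-order Butterworth filter;
# these parameters are illustrative, not taken from the paper.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_power(x, fs, band=(70.0, 150.0), order=4):
    """x: 1-D signal, fs: sampling rate in Hz.
    Returns the instantaneous high-gamma power envelope."""
    nyq = fs / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, x)           # zero-phase band-pass
    return np.abs(hilbert(filtered)) ** 2  # squared analytic amplitude
```

A 100 Hz component passes nearly unchanged, while activity far outside the band (e.g., 10 Hz) is strongly attenuated, which is what makes the envelope a usable proxy for local high-gamma activity.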
17
English A, Drummond PD. Acoustic startle stimuli inhibit pain but do not alter nociceptive flexion reflexes to sural nerve stimulation. Psychophysiology 2021; 58:e13757. [PMID: 33448016] [DOI: 10.1111/psyp.13757]
Abstract
Acoustic startle stimuli inhibit pain, but whether this is due to a cross-modal inhibitory process or some other mechanism is uncertain. To investigate this, electrical stimulation of the sural nerve either preceded or followed an acoustic startle stimulus (by 200 ms) or was presented alone in 30 healthy participants. Five electrical stimuli, five acoustic startle stimuli, 10 startle + electrical stimuli, and 10 electrical + startle stimuli were presented in mixed order at intervals of 30-60 s. Effects of the startle stimulus on pain ratings, pupillary dilatation and nociceptive flexion reflexes to the electric shock were assessed. The acoustic startle stimulus inhibited electrically evoked pain to the ensuing electric shock (p < .001), and the electrical stimulus inhibited the perceived loudness of a subsequent acoustic startle stimulus (p < .05). However, the startle stimulus did not affect electrically evoked pain when presented 200 ms after the electric shock, and electrically evoked pain did not influence the perceived loudness of a prior startle stimulus. Furthermore, stimulus order did not influence the pupillary responses or nociceptive flexion reflexes. These findings suggest that acoustic startle stimuli transiently inhibit nociceptive processing and, conversely, that electrical stimuli inhibit subsequent auditory processing. These inhibitory effects do not seem to involve spinal gating as nociceptive flexion reflexes to the electric shock were unaffected by stimulus order. Thus, cross-modal interactions at convergence points in the brainstem or higher centers may inhibit responses to the second stimulus in a two-stimulus train.
Affiliation(s)
- Amber English: Discipline of Psychology, Murdoch University, Perth, WA, Australia
- Peter D Drummond: Discipline of Psychology, Murdoch University, Perth, WA, Australia
18
Nourski KV, Steinschneider M, Rhone AE, Kovach CK, Banks MI, Krause BM, Kawasaki H, Howard MA. Electrophysiology of the Human Superior Temporal Sulcus during Speech Processing. Cereb Cortex 2020; 31:1131-1148. [PMID: 33063098] [DOI: 10.1093/cercor/bhaa281]
Abstract
The superior temporal sulcus (STS) is a crucial hub for speech perception and can be studied with high spatiotemporal resolution using electrodes targeting mesial temporal structures in epilepsy patients. Goals of the current study were to clarify functional distinctions between the upper (STSU) and the lower (STSL) bank, hemispheric asymmetries, and activity during self-initiated speech. Electrophysiologic properties were characterized using semantic categorization and dialog-based tasks. Gamma-band activity and alpha-band suppression were used as complementary measures of STS activation. Gamma responses to auditory stimuli were weaker in STSL compared with STSU and had longer onset latencies. Activity in anterior STS was larger during speaking than listening; the opposite pattern was observed more posteriorly. Opposite hemispheric asymmetries were found for alpha suppression in STSU and STSL. Alpha suppression in the STS emerged earlier than in core auditory cortex, suggesting feedback signaling within the auditory cortical hierarchy. STSL was the only region where gamma responses to words presented in the semantic categorization tasks were larger in subjects with superior task performance. More pronounced alpha suppression was associated with better task performance in Heschl's gyrus, superior temporal gyrus, and STS. Functional differences between STSU and STSL warrant their separate assessment in future studies.
Affiliation(s)
- Kirill V Nourski: Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA 52242, USA
- Mitchell Steinschneider: Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY 10461, USA
- Ariane E Rhone: Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
- Matthew I Banks: Department of Anesthesiology, University of Wisconsin-Madison, Madison, WI 53705, USA; Department of Neuroscience, University of Wisconsin-Madison, Madison, WI 53705, USA
- Bryan M Krause: Department of Anesthesiology, University of Wisconsin-Madison, Madison, WI 53705, USA
- Hiroto Kawasaki: Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
- Matthew A Howard: Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA 52242, USA; Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA 52242, USA
19
Ortiz-Mantilla S, Realpe-Bonilla T, Benasich AA. Early Interactive Acoustic Experience with Non-speech Generalizes to Speech and Confers a Syllabic Processing Advantage at 9 Months. Cereb Cortex 2020; 29:1789-1801. [PMID: 30722000] [PMCID: PMC6418390] [DOI: 10.1093/cercor/bhz001]
Abstract
During early development, the infant brain is highly plastic and sensory experiences modulate emerging cortical maps, enhancing processing efficiency as infants set up key linguistic precursors. Early interactive acoustic experience (IAE) with spectrotemporally-modulated non-speech has been shown to facilitate optimal acoustic processing and generalizes to novel non-speech sounds at 7 months of age. Here we demonstrate that effects of non-speech IAE endure well beyond the immediate training period and robustly generalize to speech processing. Infants who received non-speech IAE differed at 9 months of age from both naïve controls and those with only passive acoustic exposure, demonstrating broad modulation of oscillatory dynamics. For the standard syllable, increased high-gamma (>70 Hz) power within auditory cortices indicates that IAE fosters native speech processing, facilitating establishment of phonemic representations. The higher left beta power seen may reflect increased linking of sensory information and corresponding articulatory patterns, while bilateral decreases in theta power suggest more mature automatized speech processing, as fewer neuronal resources were allocated to process syllabic information. For the deviant syllable, left-lateralized gamma (<70 Hz) enhancement suggests IAE promotes phonemic-related discrimination abilities. Theta power increases in right auditory cortex, known for favoring slow-rate decoding, imply IAE facilitates the more demanding processing of the sporadic deviant syllable.
Affiliation(s)
- Silvia Ortiz-Mantilla: Center for Molecular & Behavioral Neuroscience, Rutgers University-Newark, 197 University Avenue, Newark, NJ, USA
- Teresa Realpe-Bonilla: Center for Molecular & Behavioral Neuroscience, Rutgers University-Newark, 197 University Avenue, Newark, NJ, USA
- April A Benasich: Center for Molecular & Behavioral Neuroscience, Rutgers University-Newark, 197 University Avenue, Newark, NJ, USA
20
Di Liberto GM, Pelofi C, Bianco R, Patel P, Mehta AD, Herrero JL, de Cheveigné A, Shamma S, Mesgarani N. Cortical encoding of melodic expectations in human temporal cortex. eLife 2020; 9:e51784. [PMID: 32122465] [PMCID: PMC7053998] [DOI: 10.7554/elife.51784]
Abstract
Human engagement in music rests on underlying elements such as the listeners' cultural background and interest in music. These factors modulate how listeners anticipate musical events, a process inducing instantaneous neural responses as the music confronts these expectations. Measuring such neural correlates would represent a direct window into high-level brain processing. Here we recorded cortical signals as participants listened to Bach melodies. We assessed the relative contributions of acoustic versus melodic components of the music to the neural signal. Melodic features included information on pitch progressions and their tempo, which were extracted from a predictive model of musical structure based on Markov chains. We related the music to brain activity with temporal response functions demonstrating, for the first time, distinct cortical encoding of pitch and note-onset expectations during naturalistic music listening. This encoding was most pronounced at response latencies up to 350 ms, and in both planum temporale and Heschl's gyrus.
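The melodic-expectation features above were derived from a predictive model of musical structure based on Markov chains. A toy sketch of the underlying idea, assuming a first-order model over MIDI pitches with additive smoothing (the function names, smoothing scheme, and 128-pitch alphabet are illustrative assumptions, not the authors' implementation):

```python
# Toy first-order Markov model of pitch transitions; per-note
# surprisal (-log2 p) is the kind of expectation signal that can
# be regressed against neural responses. Illustrative sketch only.
import math
from collections import Counter, defaultdict

def fit_transitions(melodies):
    """Count pitch-to-pitch transitions over a corpus of melodies
    (each melody is a list of MIDI pitch numbers)."""
    counts = defaultdict(Counter)
    for melody in melodies:
        for prev, nxt in zip(melody, melody[1:]):
            counts[prev][nxt] += 1
    return counts

def note_surprisal(counts, melody, alpha=1.0, n_pitches=128):
    """Surprisal of each note given its predecessor, with additive
    (Laplace) smoothing over an assumed 128-pitch alphabet."""
    out = []
    for prev, nxt in zip(melody, melody[1:]):
        seen = counts[prev]
        p = (seen[nxt] + alpha) / (sum(seen.values()) + alpha * n_pitches)
        out.append(-math.log2(p))  # larger value = more unexpected note
    return out
```

A transition observed in the training corpus yields lower surprisal than an unseen one, which is the expectation signal the temporal response functions can be fit against.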
Affiliation(s)
- Giovanni M Di Liberto: Laboratoire des systèmes perceptifs, Département d’études cognitives, École normale supérieure, PSL University, CNRS, 75005 Paris, France
- Claire Pelofi: Department of Psychology, New York University, New York, United States; Institut de Neurosciences des Systèmes, UMR S 1106, INSERM, Aix Marseille Université, Marseille, France
- Prachi Patel: Department of Electrical Engineering, Columbia University, New York, United States; Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Ashesh D Mehta: Department of Neurosurgery, Zucker School of Medicine at Hofstra/Northwell, Manhasset, United States; Feinstein Institute of Medical Research, Northwell Health, Manhasset, United States
- Jose L Herrero: Department of Neurosurgery, Zucker School of Medicine at Hofstra/Northwell, Manhasset, United States; Feinstein Institute of Medical Research, Northwell Health, Manhasset, United States
- Alain de Cheveigné: Laboratoire des systèmes perceptifs, Département d’études cognitives, École normale supérieure, PSL University, CNRS, 75005 Paris, France; UCL Ear Institute, London, United Kingdom
- Shihab Shamma: Laboratoire des systèmes perceptifs, Département d’études cognitives, École normale supérieure, PSL University, CNRS, 75005 Paris, France; Institute for Systems Research, Electrical and Computer Engineering, University of Maryland, College Park, United States
- Nima Mesgarani: Department of Electrical Engineering, Columbia University, New York, United States; Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
21
Banks MI, Krause BM, Endemann CM, Campbell DI, Kovach CK, Dyken ME, Kawasaki H, Nourski KV. Cortical functional connectivity indexes arousal state during sleep and anesthesia. Neuroimage 2020; 211:116627. [PMID: 32045640] [DOI: 10.1016/j.neuroimage.2020.116627]
Abstract
Disruption of cortical connectivity likely contributes to loss of consciousness (LOC) during both sleep and general anesthesia, but the degree of overlap in the underlying mechanisms is unclear. Both sleep and anesthesia comprise states of varying levels of arousal and consciousness, including states of largely maintained conscious experience (sleep: N1, REM; anesthesia: sedated but responsive) as well as states of substantially reduced conscious experience (sleep: N2/N3; anesthesia: unresponsive). Here, we tested the hypotheses that (1) cortical connectivity will exhibit clear changes when transitioning into states of reduced consciousness, and (2) these changes will be similar for arousal states of comparable levels of consciousness during sleep and anesthesia. Using intracranial recordings from five adult neurosurgical patients, we compared resting state cortical functional connectivity (as measured by weighted phase lag index, wPLI) in the same subjects across arousal states during natural sleep [wake (WS), N1, N2, N3, REM] and propofol anesthesia [pre-drug wake (WA), sedated/responsive (S), and unresponsive (U)]. Analysis of alpha-band connectivity indicated a transition boundary distinguishing states of maintained and reduced conscious experience in both sleep and anesthesia. In wake states WS and WA, alpha-band wPLI within the temporal lobe was dominant. This pattern was largely unchanged in N1, REM, and S. Transitions into states of reduced consciousness N2, N3, and U were characterized by dramatic changes in connectivity, with dominant connections shifting to prefrontal cortex. Secondary analyses indicated similarities in reorganization of cortical connectivity in sleep and anesthesia. Shifts from temporal to frontal cortical connectivity may reflect impaired sensory processing in states of reduced consciousness. The data indicate that functional connectivity can serve as a biomarker of arousal state and suggest common mechanisms of LOC in sleep and anesthesia.
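Connectivity in this study is quantified with the weighted phase lag index (wPLI), which down-weights zero-lag coupling that could arise from volume conduction. A minimal sketch of the standard estimator, |E[Im S]| / E[|Im S|] computed across trials from Hilbert analytic signals (array shapes, trial-wise averaging, and variable names are assumptions for illustration, not the paper's code):

```python
# Hedged sketch of a wPLI estimator for two channels; assumes the
# inputs are already narrowband-filtered (e.g., alpha band) trials.
import numpy as np
from scipy.signal import hilbert

def wpli(x, y):
    """x, y: arrays of shape (n_trials, n_samples).
    Returns per-sample wPLI values in [0, 1]."""
    sx = hilbert(x, axis=-1)               # analytic signal, channel 1
    sy = hilbert(y, axis=-1)               # analytic signal, channel 2
    imag_cs = np.imag(sx * np.conj(sy))    # Im of cross-spectrum per trial
    num = np.abs(imag_cs.mean(axis=0))     # |E[Im S]| across trials
    den = np.abs(imag_cs).mean(axis=0)     # E[|Im S|] across trials
    return num / np.maximum(den, 1e-12)    # guard against 0/0
```

Values near 1 indicate a phase lag that is consistent in sign across trials; values near 0 indicate inconsistent or zero-lag coupling.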
Affiliation(s)
- Matthew I Banks: Department of Anesthesiology, University of Wisconsin, Madison, WI, 52704, USA; Department of Neuroscience, University of Wisconsin, Madison, WI, 53706, USA
- Bryan M Krause: Department of Anesthesiology, University of Wisconsin, Madison, WI, 52704, USA
- Declan I Campbell: Department of Anesthesiology, University of Wisconsin, Madison, WI, 52704, USA
- Mark Eric Dyken: Department of Neurology, The University of Iowa, Iowa City, IA, 52242, USA
- Hiroto Kawasaki: Department of Neurosurgery, The University of Iowa, Iowa City, IA, 52242, USA
- Kirill V Nourski: Department of Neurosurgery, The University of Iowa, Iowa City, IA, 52242, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, 52242, USA
| |
22
Joint Representation of Spatial and Phonetic Features in the Human Core Auditory Cortex. Cell Rep 2020; 24:2051-2062.e2. [PMID: 30134167 DOI: 10.1016/j.celrep.2018.07.076] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2017] [Revised: 04/09/2018] [Accepted: 07/22/2018] [Indexed: 12/12/2022] Open
Abstract
The human auditory cortex simultaneously processes speech and determines the location of a speaker in space. Neuroimaging studies in humans have implicated core auditory areas in processing the spectrotemporal and the spatial content of sound; however, how these features are represented together is unclear. We recorded directly from human subjects implanted bilaterally with depth electrodes in core auditory areas as they listened to speech from different directions. We found local and joint selectivity to spatial and spectrotemporal speech features, where the spatial and spectrotemporal features are organized independently of each other. This representation enables successful decoding of both spatial and phonetic information. Furthermore, we found that the location of the speaker does not change the spectrotemporal tuning of the electrodes but, rather, modulates their mean response level. Our findings contribute to defining the functional organization of responses in the human auditory cortex, with implications for more accurate neurophysiological models of speech processing.
23
O'Sullivan J, Herrero J, Smith E, Schevon C, McKhann GM, Sheth SA, Mehta AD, Mesgarani N. Hierarchical Encoding of Attended Auditory Objects in Multi-talker Speech Perception. Neuron 2019; 104:1195-1209.e3. [PMID: 31648900 DOI: 10.1016/j.neuron.2019.09.007] [Citation(s) in RCA: 62] [Impact Index Per Article: 12.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2019] [Revised: 07/11/2019] [Accepted: 09/06/2019] [Indexed: 11/15/2022]
Abstract
Humans can easily focus on one speaker in a multi-talker acoustic environment, but how different areas of the human auditory cortex (AC) represent the acoustic components of mixed speech is unknown. We obtained invasive recordings from the primary and nonprimary AC in neurosurgical patients as they listened to multi-talker speech. We found that neural sites in the primary AC responded to individual speakers in the mixture and were relatively unchanged by attention. In contrast, neural sites in the nonprimary AC were less discerning of individual speakers but selectively represented the attended speaker. Moreover, the encoding of the attended speaker in the nonprimary AC was invariant to the degree of acoustic overlap with the unattended speaker. Finally, this emergent representation of attended speech in the nonprimary AC was linearly predictable from the primary AC responses. Our results reveal the neural computations underlying the hierarchical formation of auditory objects in human AC during multi-talker speech perception.
Affiliation(s)
- James O'Sullivan
- Department of Electrical Engineering, Columbia University, New York, NY, USA
- Jose Herrero
- Department of Neurosurgery, Hofstra-Northwell School of Medicine and Feinstein Institute for Medical Research, Manhasset, NY, USA
- Elliot Smith
- Department of Neurological Surgery, The Neurological Institute, New York, NY, USA; Department of Neurosurgery, University of Utah, Salt Lake City, UT, USA
- Catherine Schevon
- Department of Neurological Surgery, The Neurological Institute, New York, NY, USA
- Guy M McKhann
- Department of Neurological Surgery, The Neurological Institute, New York, NY, USA
- Sameer A Sheth
- Department of Neurological Surgery, The Neurological Institute, New York, NY, USA; Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- Ashesh D Mehta
- Department of Neurosurgery, Hofstra-Northwell School of Medicine and Feinstein Institute for Medical Research, Manhasset, NY, USA
- Nima Mesgarani
- Department of Electrical Engineering, Columbia University, New York, NY, USA.
24
Stickel S, Weismann P, Kellermann T, Regenbogen C, Habel U, Freiherr J, Chechko N. Audio-visual and olfactory-visual integration in healthy participants and subjects with autism spectrum disorder. Hum Brain Mapp 2019; 40:4470-4486. [PMID: 31301203 PMCID: PMC6865810 DOI: 10.1002/hbm.24715] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2019] [Revised: 05/23/2019] [Accepted: 07/01/2019] [Indexed: 01/22/2023] Open
Abstract
The human capacity to integrate sensory signals has been investigated with respect to different sensory modalities. A common denominator of the neural network underlying the integration of sensory clues has yet to be identified. Additionally, brain imaging data from patients with autism spectrum disorder (ASD) do not cover disparities in neuronal sensory processing. In this fMRI study, we compared the underlying neural networks of both olfactory-visual and auditory-visual integration in patients with ASD and a group of matched healthy participants. The aim was to disentangle sensory-specific networks so as to derive a potential (amodal) common source of multisensory integration (MSI) and to investigate differences in brain networks with sensory processing in individuals with ASD. In both groups, similar neural networks were found to be involved in the olfactory-visual and auditory-visual integration processes, including the primary visual cortex, the inferior parietal sulcus (IPS), and the medial and inferior frontal cortices. Amygdala activation was observed specifically during olfactory-visual integration, with superior temporal activation having been seen during auditory-visual integration. A dynamic causal modeling analysis revealed a nonlinear top-down IPS modulation of the connection between the respective primary sensory regions in both experimental conditions and in both groups. Thus, we demonstrate that MSI has shared neural sources across olfactory-visual and audio-visual stimulation in patients and controls. The enhanced recruitment of the IPS to modulate changes between areas is relevant to sensory perception. Our results also indicate that, with respect to MSI processing, adults with ASD do not significantly differ from their healthy counterparts.
Affiliation(s)
- Susanne Stickel
- Department of Psychiatry, Psychotherapy and Psychosomatics, Faculty of Medicine, RWTH Aachen, Aachen, Germany
- Institute of Neuroscience and Medicine: JARA-Institute Brain Structure Function Relationship (INM 10), Research Center Jülich, Jülich, Germany
- Pauline Weismann
- Department of Psychiatry and Psychotherapy, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Thilo Kellermann
- Department of Psychiatry, Psychotherapy and Psychosomatics, Faculty of Medicine, RWTH Aachen, Aachen, Germany
- Institute of Neuroscience and Medicine: JARA-Institute Brain Structure Function Relationship (INM 10), Research Center Jülich, Jülich, Germany
- Christina Regenbogen
- Department of Psychiatry, Psychotherapy and Psychosomatics, Faculty of Medicine, RWTH Aachen, Aachen, Germany
- Institute of Neuroscience and Medicine: JARA-Institute Brain Structure Function Relationship (INM 10), Research Center Jülich, Jülich, Germany
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Ute Habel
- Department of Psychiatry, Psychotherapy and Psychosomatics, Faculty of Medicine, RWTH Aachen, Aachen, Germany
- Institute of Neuroscience and Medicine: JARA-Institute Brain Structure Function Relationship (INM 10), Research Center Jülich, Jülich, Germany
- Jessica Freiherr
- Department of Psychiatry and Psychotherapy, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Sensory Analytics, Fraunhofer Institute for Process Engineering and Packaging IVV, Freising, Germany
- Natalya Chechko
- Department of Psychiatry, Psychotherapy and Psychosomatics, Faculty of Medicine, RWTH Aachen, Aachen, Germany
- Institute of Neuroscience and Medicine: JARA-Institute Brain Structure Function Relationship (INM 10), Research Center Jülich, Jülich, Germany
25
Malmierca MS, Niño-Aguillón BE, Nieto-Diego J, Porteros Á, Pérez-González D, Escera C. Pattern-sensitive neurons reveal encoding of complex auditory regularities in the rat inferior colliculus. Neuroimage 2019; 184:889-900. [DOI: 10.1016/j.neuroimage.2018.10.012] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2018] [Revised: 09/20/2018] [Accepted: 10/04/2018] [Indexed: 10/28/2022] Open
26
Auditory Predictive Coding across Awareness States under Anesthesia: An Intracranial Electrophysiology Study. J Neurosci 2018; 38:8441-8452. [PMID: 30126970 DOI: 10.1523/jneurosci.0967-18.2018] [Citation(s) in RCA: 38] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2018] [Revised: 07/03/2018] [Accepted: 08/11/2018] [Indexed: 12/20/2022] Open
Abstract
The systems-level mechanisms underlying loss of consciousness (LOC) under anesthesia remain unclear. General anesthetics suppress sensory responses within higher-order cortex and feedback connections, both critical elements of predictive coding hypotheses of conscious perception. Responses to auditory novelty may offer promise as biomarkers for consciousness. This study examined anesthesia-induced changes in auditory novelty responses over short (local deviant [LD]) and long (global deviant [GD]) time scales, envisioned to engage preattentive and conscious levels of processing, respectively. Electrocorticographic recordings were obtained in human neurosurgical patients (3 male, 3 female) from four hierarchical processing levels: core auditory cortex, non-core auditory cortex, auditory-related, and PFC. Stimuli were vowel patterns incorporating deviants within and across stimuli (LD and GD). Subjects were presented with stimuli while awake, and during sedation (responsive) and following LOC (unresponsive) under propofol anesthesia. LD and GD effects were assayed as the averaged evoked potential and high gamma (70-150 Hz) activity. In the awake state, LD and GD effects were present in all recorded regions, with averaged evoked potential effects more broadly distributed than high gamma activity. Under sedation, LD effects were preserved in all regions, except PFC. LOC was accompanied by loss of LD effects outside of auditory cortex. By contrast, GD effects were markedly suppressed under sedation in all regions and were absent following LOC. Thus, although the presence of GD effects is indicative of being awake, its absence is not indicative of LOC. 
Loss of LD effects in higher-order cortical areas may constitute an alternative biomarker of LOC.
SIGNIFICANCE STATEMENT: Development of a biomarker that indexes changes in the brain upon loss of consciousness (LOC) under general anesthesia has broad implications for elucidating the neural basis of awareness and clinical relevance to mechanisms of sleep, coma, and disorders of consciousness. Using intracranial recordings from neurosurgery patients, we investigated changes in the activation of cortical networks involved in auditory novelty detection over short (local deviance) and long (global deviance) time scales associated with sedation and LOC under propofol anesthesia. Our results indicate that, whereas the presence of global deviance effects can index awareness, their loss cannot serve as a biomarker for LOC. The dramatic reduction of local deviance effects in areas beyond auditory cortex may constitute an alternative biomarker of LOC.
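The high gamma (70-150 Hz) activity assayed in this study is conventionally extracted as the envelope of the analytic signal of the band-limited trace. The following NumPy-only sketch is illustrative, not the paper's exact method; the function name, FFT band-masking filter, and band edges are assumptions (a practical pipeline would typically use a proper FIR/IIR filter, e.g. via SciPy):

```python
import numpy as np

def high_gamma_envelope(x, fs, band=(70.0, 150.0)):
    """Band-limit x to `band` via FFT masking, then take the Hilbert envelope."""
    n = len(x)
    X = np.fft.fft(x)
    freqs = np.fft.fftfreq(n, 1 / fs)
    mask = (np.abs(freqs) >= band[0]) & (np.abs(freqs) <= band[1])
    xb = np.real(np.fft.ifft(X * mask))  # band-limited signal
    # Analytic signal via the frequency-domain Hilbert transform:
    # keep DC and Nyquist, double positive frequencies, zero negative ones.
    Xb = np.fft.fft(xb)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(Xb * h))
```

A pure in-band sinusoid yields an envelope equal to its amplitude, while out-of-band components are suppressed by the mask; averaging this envelope across trials is one common way to obtain the high-gamma response measures of the kind reported above.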
27
Rimmele JM, Gross J, Molholm S, Keitel A. Editorial: Brain Oscillations in Human Communication. Front Hum Neurosci 2018; 12:39. [PMID: 29467639 PMCID: PMC5808291 DOI: 10.3389/fnhum.2018.00039] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2018] [Accepted: 01/24/2018] [Indexed: 11/22/2022] Open
Affiliation(s)
- Johanna M Rimmele
- Department of Neuroscience, Max Planck Institute for Empirical Aesthetics (MPG), Frankfurt am Main, Germany
- Joachim Gross
- Institut für Biomagnetismus und Biosignalanalyse, Universitätsklinikum Münster, Münster, Germany
- Sophie Molholm
- Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States
- Anne Keitel
- Centre for Cognitive Neuroimaging, University of Glasgow, Glasgow, United Kingdom