1. Lankinen K, Ahveninen J, Jas M, Raij T, Ahlfors SP. Neuronal modeling of magnetoencephalography responses in auditory cortex to auditory and visual stimuli. bioRxiv 2024:2023.06.16.545371. PMID: 37398025; PMCID: PMC10312796; DOI: 10.1101/2023.06.16.545371
Abstract
Previous studies have demonstrated that auditory cortex activity can be influenced by cross-sensory visual inputs. Intracortical recordings in non-human primates (NHP) have suggested a bottom-up feedforward (FF) type laminar profile for auditory evoked activity, but a top-down feedback (FB) type profile for cross-sensory visual evoked activity in the auditory cortex. To test whether this principle also applies to humans, we analyzed magnetoencephalography (MEG) responses from eight human subjects (six females) evoked by simple auditory or visual stimuli. In the estimated MEG source waveforms for an auditory cortex region of interest, auditory evoked responses showed peaks at 37 and 90 ms and cross-sensory visual responses at 125 ms. The inputs to the auditory cortex were then modeled through FF and FB type connections targeting different cortical layers using the Human Neocortical Neurosolver (HNN), which consists of a neocortical circuit model linking cellular- and circuit-level mechanisms to MEG. The HNN models suggested that the measured auditory response could be explained by an FF input followed by an FB input, and the cross-sensory visual response by an FB input. Thus, the combined MEG and HNN results support the hypothesis that cross-sensory visual input to the auditory cortex is of the FB type. The results also illustrate how the dynamic patterns of estimated MEG/EEG source activity can provide information about the characteristics of the input into a cortical area in terms of the hierarchical organization among areas.
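The timing logic of this interpretation can be caricatured in a few lines: feedforward (proximal) and feedback (distal) drives push current in opposite directions along the pyramidal-cell dendrites, so a source waveform can be sketched as a sum of oppositely signed inputs at the reported latencies. The sketch below is an illustrative toy, not the HNN simulation itself; the amplitudes, widths, and the FF-positive/FB-negative sign convention are arbitrary choices of ours.

```python
import numpy as np

def drive(t, latency_ms, width_ms, amplitude):
    """Gaussian-shaped net dipole contribution of one synaptic drive
    (toy stand-in; amplitude, width, and sign are arbitrary choices)."""
    return amplitude * np.exp(-0.5 * ((t - latency_ms) / width_ms) ** 2)

t = np.arange(0.0, 200.0, 1.0)  # time in ms

# Auditory response: FF (proximal) input near 37 ms, then FB (distal)
# input near 90 ms; FF -> positive, FB -> negative by convention here.
auditory = drive(t, 37, 10, +1.0) + drive(t, 90, 20, -0.8)

# Cross-sensory visual response: a single FB (distal) input near 125 ms.
visual = drive(t, 125, 25, -0.6)

peak_aud = t[np.argmax(auditory)]   # early FF peak
peak_vis = t[np.argmin(visual)]     # FB trough
print(peak_aud, peak_vis)  # 37.0 125.0
```

This reproduces only the latencies of the paper's interpretation (FF at ~37 ms and FB at ~90 ms for the auditory response, FB at ~125 ms for the visual one); the HNN simulations replace these Gaussians with layer-specific synaptic drives onto a biophysical cortical-column model.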
Affiliation(s)
- Kaisu Lankinen
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129
- Department of Radiology, Harvard Medical School, Boston, MA 02115
- Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129
- Department of Radiology, Harvard Medical School, Boston, MA 02115
- Mainak Jas
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129
- Department of Radiology, Harvard Medical School, Boston, MA 02115
- Tommi Raij
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129
- Department of Radiology, Harvard Medical School, Boston, MA 02115
- Seppo P. Ahlfors
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129
- Department of Radiology, Harvard Medical School, Boston, MA 02115
2. Roth BJ. Biomagnetism: The First Sixty Years. Sensors (Basel) 2023; 23:4218. PMID: 37177427; PMCID: PMC10181075; DOI: 10.3390/s23094218
Abstract
Biomagnetism is the measurement of the weak magnetic fields produced by nerves and muscles. The magnetic field of the heart, the magnetocardiogram (MCG), is the largest biomagnetic signal generated by the body and was the first to be measured. Magnetic fields have been detected from isolated tissue, such as a peripheral nerve or cardiac muscle, and these studies have provided insights into the fundamental properties of biomagnetism. The magnetic field of the brain, the magnetoencephalogram (MEG), has generated much interest and has potential clinical applications in epilepsy, migraine, and psychiatric disorders. The biomagnetic inverse problem, calculating the electrical sources inside the brain from magnetic field recordings made outside the head, is difficult, but several techniques have been introduced to solve it. Traditionally, biomagnetic fields are recorded using superconducting quantum interference device (SQUID) magnetometers, but recently, new sensors have been developed that allow magnetic measurements without the cryogenic technology required for SQUIDs.
Affiliation(s)
- Bradley J Roth
- Department of Physics, Oakland University, Rochester, MI 48309, USA
3. Corina DP, Coffey-Corina S, Pierotti E, Bormann B, LaMarr T, Lawyer L, Backer KC, Miller LM. Electrophysiological Examination of Ambient Speech Processing in Children With Cochlear Implants. J Speech Lang Hear Res 2022; 65:3502-3517. PMID: 36037517; PMCID: PMC9913291; DOI: 10.1044/2022_jslhr-22-00004
Abstract
Purpose: This research examined the expression of cortical auditory evoked potentials in a cohort of children who received cochlear implants (CIs) for treatment of congenital deafness (n = 28) and typically hearing controls (n = 28). Method: We make use of a novel electroencephalography paradigm that permits the assessment of auditory responses to ambiently presented speech and evaluates the contributions of concurrent visual stimulation to this activity. Results: Our findings show group differences in the expression of auditory sensory and perceptual event-related potential components occurring in the 80- to 200-ms and 200- to 300-ms time windows, with reduced amplitudes and greater latency differences for CI-using children. Relative to typically hearing children, current source density analysis showed muted responses to concurrent visual stimulation in CI-using children, suggesting less cortical specialization and/or reduced responsiveness to auditory information, which limits detection of interactions between the sensory systems. Conclusions: These findings indicate that, even in the face of early intervention, CI-using children may exhibit disruptions in the development of auditory and multisensory processing.
Affiliation(s)
- David P. Corina
- Department of Linguistics, University of California, Davis
- Department of Psychology, University of California, Davis
- Center for Mind and Brain, University of California, Davis
- Elizabeth Pierotti
- Department of Psychology, University of California, Davis
- Center for Mind and Brain, University of California, Davis
- Brett Bormann
- Center for Mind and Brain, University of California, Davis
- Neurobiology, Physiology and Behavior, University of California, Davis
- Todd LaMarr
- Center for Mind and Brain, University of California, Davis
- Laurel Lawyer
- Center for Mind and Brain, University of California, Davis
- Lee M. Miller
- Center for Mind and Brain, University of California, Davis
- Neurobiology, Physiology and Behavior, University of California, Davis
- Department of Otolaryngology/Head and Neck Surgery, University of California, Davis
4. Bayasgalan B, Matsuhashi M, Fumuro T, Nakano N, Katagiri M, Shimotake A, Kikuchi T, Iida K, Kunieda T, Kato A, Takahashi R, Ikeda A, Inui K. Neural Sources of Vagus Nerve Stimulation-Induced Slow Cortical Potentials. Neuromodulation 2022; 25:407-413. DOI: 10.1016/j.neurom.2022.01.009
5. Shirakura M, Kawase T, Kanno A, Ohta J, Nakasato N, Kawashima R, Katori Y. Different contra-sound effects between noise and music stimuli seen in N1m and psychophysical responses. PLoS One 2021; 16:e0261637. PMID: 34928999; PMCID: PMC8687558; DOI: 10.1371/journal.pone.0261637
Abstract
Auditory-evoked responses can be affected by sound presented to the contralateral ear. The different effects of contralateral noise and music stimuli on the N1m response of the auditory evoked field and on a psychophysical response were examined in 12 and 15 subjects, respectively. In the magnetoencephalographic study, the stimulus used to elicit the N1m response was a 500 ms tone burst at 250 Hz presented at 70 dB; white noise and music stimuli, each high-pass filtered at 2000 Hz, served as the contralateral sounds. The contralateral stimuli (noise or music) were presented in 10 dB steps from 80 dB down to 30 dB. Subjects were instructed to focus their attention on the left ear and to press the response button each time they heard the burst stimuli presented to the left ear. In the psychophysical study, the effects of contralateral sound on the response time for detecting a probe sound (a 250 Hz tone burst presented at 70 dB) were examined using the same contralateral noise and music as in the magnetoencephalographic study. The amplitude reduction and latency delay of N1m caused by contralateral music were significantly larger than those caused by contralateral noise in both hemispheres, even for low levels of contralateral music near the psychophysical threshold. Moreover, this larger suppressive effect of music was also observed psychophysically; that is, response times for detecting the probe sound were significantly longer when contralateral music was added than when contralateral noise was added. Regarding the difference between contralateral music and noise, differences in saliency may be responsible for their different abilities to disturb auditory attention to the probe sound, but further investigation is required to confirm this hypothesis.
Affiliation(s)
- Masayuki Shirakura
- Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Tetsuaki Kawase
- Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Laboratory of Rehabilitative Auditory Science, Tohoku University Graduate School of Biomedical Engineering, Sendai, Miyagi, Japan
- Department of Audiology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Akitake Kanno
- Department of Electromagnetic Neurophysiology, Tohoku University School of Medicine, Sendai, Miyagi, Japan
- Department of Epileptology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Jun Ohta
- Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Nobukazu Nakasato
- Department of Electromagnetic Neurophysiology, Tohoku University School of Medicine, Sendai, Miyagi, Japan
- Department of Epileptology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Ryuta Kawashima
- Institute of Development, Aging and Cancer, Tohoku University, Sendai, Miyagi, Japan
- Yukio Katori
- Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
6. Herrmann B, Maess B, Johnsrude IS. A neural signature of regularity in sound is reduced in older adults. Neurobiol Aging 2021; 109:1-10. PMID: 34634748; DOI: 10.1016/j.neurobiolaging.2021.09.011
Abstract
Sensitivity to repetitions in sound amplitude and frequency is crucial for sound perception. As with other aspects of sound processing, sensitivity to such patterns may change with age, and this may help explain some age-related changes in hearing, such as difficulty segregating speech from background sound. We recorded magnetoencephalography to characterize differences in the processing of sound patterns between younger and older adults. We presented tone sequences that either contained a pattern (made of a repeated set of tones) or did not. We show that auditory cortex in older, compared with younger, adults is hyperresponsive to sound onsets, but that sustained neural activity in auditory cortex, indexing the processing of a sound pattern, is reduced. Hence, the sensitivity of neural populations in auditory cortex fundamentally differs between younger and older individuals, overresponding to sound onsets while underresponding to patterns in sounds. This may help to explain some age-related changes in hearing, such as increased sensitivity to distracting sounds and difficulty tracking speech in the presence of other sounds.
Affiliation(s)
- Björn Herrmann
- Department of Psychology & Brain and Mind Institute, The University of Western Ontario, London, ON, Canada; Rotman Research Institute, Baycrest, North York, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada
- Burkhard Maess
- Brain Networks Unit, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Ingrid S Johnsrude
- Department of Psychology & Brain and Mind Institute, The University of Western Ontario, London, ON, Canada; School of Communication Sciences & Disorders, The University of Western Ontario, London, ON, Canada
7. Nomura Y, Kawase T, Kanno A, Nakasato N, Kawashima R, Katori Y. N100m latency shortening caused by selective attention. Brain Res 2020; 1751:147177. PMID: 33121923; DOI: 10.1016/j.brainres.2020.147177
Abstract
The N100m response to the same sound stimulus may be altered by the degree of attention paid to the stimulus. When participants selectively pay attention to the stimulus, the N100m amplitude increases; however, minimal effects are observed on the N100m latency. In this study, we examined the effects of selective attention (motivation) to extract the frequency (or pitch) information from a probe tone on the N100m response to that tone. We compared N100m latencies and amplitudes using magnetoencephalography under three experimental conditions: 1) a vocalization task (vocalize in tune with the pitch of the probe tone after its presentation), 2) a hearing task (just listen to the probe tone), and 3) an imagining task (just imagine vocalizing in tune with the probe tone). The results indicated that the N100m latency in response to the probe tone was significantly shortened in the vocalization and imagining tasks compared with the hearing task in the right hemisphere. The amplitude was significantly increased in the vocalization task compared with the imagining and hearing tasks in the right hemisphere, and in the vocalization task compared with the hearing task in the left hemisphere; that is, the attention and/or motivation required to extract information from the stimulus tones may have caused the N100m latency shortening. To our knowledge, this study is the first to demonstrate that the N100m latency may be shortened under particular attentional conditions in response to a simple tone.
Affiliation(s)
- Yuri Nomura
- Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Tetsuaki Kawase
- Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan; Laboratory of Rehabilitative Auditory Science, Tohoku University Graduate School of Biomedical Engineering, Sendai, Miyagi, Japan
- Akitake Kanno
- Department of Electromagnetic Neurophysiology, Tohoku University School of Medicine, Sendai, Miyagi, Japan
- Nobukazu Nakasato
- Department of Electromagnetic Neurophysiology, Tohoku University School of Medicine, Sendai, Miyagi, Japan; Department of Epileptology, Tohoku University School of Medicine, Sendai, Miyagi, Japan
- Ryuta Kawashima
- Department of Functional Brain Imaging, Institute of Development, Aging and Cancer, Tohoku University, Sendai, Miyagi, Japan
- Yukio Katori
- Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
8. Hagiwara K, Ogata K, Hironaga N, Tobimatsu S. Secondary somatosensory area is involved in vibrotactile temporal-structure processing: MEG analysis of slow cortical potential shifts in humans. Somatosens Mot Res 2020; 37:222-232. PMID: 32597279; DOI: 10.1080/08990220.2020.1784127
Abstract
Purpose: Temporal-structure discrimination is an essential dimension of tactile processing. Exploring an object's surface by touch generates vibrotactile input with various temporal dynamics, which gives diversity to tactile percepts. Here, we examined whether slow cortical potential shifts (SCPs) (<1 Hz) evoked by long vibrotactile stimuli can reflect active temporal-structure processing. Materials and methods: Vibrotactile-evoked magnetic brain responses were recorded in 10 right-handed healthy volunteers using a piezoelectric-based stimulator and whole-head magnetoencephalography. A series of vibrotactile train stimuli with various temporal structures were delivered to the right index finger. While all trains consisted of an identical number (15) of stimuli delivered within a fixed duration (1500 ms), temporal structures were varied by modulating inter-stimulus intervals (ISIs). Participants judged the regularity/irregularity of the ISI for each train in the active condition, whereas they ignored the stimuli while performing a visual distraction task in the passive condition. We analysed the spatiotemporal features of the SCPs and their behaviour using minimum norm estimates with dynamic statistical parametric mapping. Results: SCPs were localized to the contralateral primary somatosensory area (S1), the contralateral superior temporal gyrus, and the contralateral as well as ipsilateral secondary somatosensory areas (S2). A significant enhancement of SCPs was observed in the ipsilateral S2 (S2i) in the active condition, whereas such effects were absent in the other regions. We also found a significantly larger amplitude difference between the regular- and irregular-stimulus evoked S2i responses during the active condition than during the passive condition. Conclusions: This study suggests that S2 subserves the temporal dimension of vibrotactile processing.
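The minimum norm estimates mentioned here follow a standard Tikhonov-regularized linear inverse: given a gain (lead-field) matrix G mapping sources to sensors, the source estimate is ŝ = Gᵀ(GGᵀ + λ²I)⁻¹b. The sketch below uses a random toy gain matrix and omits the noise normalization that dynamic statistical parametric mapping adds on top; it illustrates the linear algebra, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy gain (lead-field) matrix: 20 sensors observing 50 candidate sources.
n_sensors, n_sources = 20, 50
G = rng.standard_normal((n_sensors, n_sources))

# One active source generates the (noise-free) measurement vector b.
s_true = np.zeros(n_sources)
s_true[10] = 1.0
b = G @ s_true

# Tikhonov-regularized minimum-norm inverse operator:
#   s_hat = G.T @ inv(G @ G.T + lam**2 * I) @ b
lam = 0.1
W = G.T @ np.linalg.inv(G @ G.T + lam ** 2 * np.eye(n_sensors))
s_hat = W @ b

# The regularized estimate reproduces the measurements closely even though
# the problem is underdetermined (50 unknowns, 20 equations).
resid = np.linalg.norm(G @ s_hat - b) / np.linalg.norm(b)
print(resid)
```

Because the inverse is underdetermined, ŝ spreads energy over many sources; that spatial blur is exactly why depth weighting and noise normalization (as in dSPM) are layered on top in practice.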
Affiliation(s)
- Koichi Hagiwara
- Department of Clinical Neurophysiology, Faculty of Medicine, Neurological Institute, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Katsuya Ogata
- Department of Clinical Neurophysiology, Faculty of Medicine, Neurological Institute, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Naruhito Hironaga
- Department of Clinical Neurophysiology, Faculty of Medicine, Neurological Institute, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Shozo Tobimatsu
- Department of Clinical Neurophysiology, Faculty of Medicine, Neurological Institute, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
9. Zhang R, Xiao W, Ding Y, Feng Y, Peng X, Shen L, Sun C, Wu T, Wu Y, Yang Y, Zheng Z, Zhang X, Chen J, Guo H. Recording brain activities in unshielded Earth's field with optically pumped atomic magnetometers. Sci Adv 2020; 6:eaba8792. PMID: 32582858; PMCID: PMC7292643; DOI: 10.1126/sciadv.aba8792
Abstract
Understanding the relationship between brain activity and specific mental functions is important for the medical diagnosis of brain disorders such as epilepsy. Magnetoencephalography (MEG), which uses an array of high-sensitivity magnetometers to record the magnetic field signals generated by neural currents occurring naturally in the brain, is a noninvasive method for locating brain activity. MEG is normally performed in a magnetically shielded room. Here, we introduce an unshielded MEG system based on optically pumped atomic magnetometers. We build an atomic magnetic gradiometer, together with feedback methods, to reduce environmental magnetic field noise. We successfully observe the alpha rhythm signals associated with eye closure and clear auditory evoked field signals in the unshielded Earth's field. Combined with improvements in the miniaturization of atomic magnetometers, our method is a promising route to a practical wearable and movable unshielded MEG system, and may bring new insights into the medical diagnosis of brain disorders.
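The gradiometer idea rests on common-mode rejection: environmental interference appears almost identically in a measurement channel and a nearby reference channel, while the brain signal does not, so scaling and subtracting the reference removes most of the interference. The toy below illustrates the principle with a least-squares subtraction on synthetic data; it is not the authors' atomic gradiometer or feedback scheme, and all signal parameters are invented for illustration.

```python
import numpy as np

t = np.arange(0.0, 1.0, 1e-3)  # 1 s sampled at 1 kHz

# A 10 Hz "alpha" signal seen only by the measurement sensor near the scalp.
brain = np.sin(2 * np.pi * 10 * t)

# Environmental interference (50 Hz line noise plus a slow drift), seen
# almost identically by both sensors and far larger than the brain signal.
noise = 50.0 * np.sin(2 * np.pi * 50 * t) + 20.0 * t

primary = brain + noise     # measurement channel
reference = 0.99 * noise    # reference channel, slightly mismatched gain

# First-order synthetic gradiometer: least-squares scale, then subtract.
scale = np.dot(primary, reference) / np.dot(reference, reference)
cleaned = primary - scale * reference

# Residual interference before vs. after subtraction.
print(np.std(primary - brain), np.std(cleaned - brain))
```

The subtraction cancels the common-mode interference by several orders of magnitude while leaving the 10 Hz component, which is the same logic that lets a gradiometer-plus-feedback system operate without a shielded room.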
Affiliation(s)
- Rui Zhang
- State Key Laboratory of Advanced Optical Communication Systems and Networks, Department of Electronics, and Center for Quantum Information Technology, Peking University, Beijing 100871, China
- College of Liberal Arts and Sciences, and Interdisciplinary Center for Quantum Information, National University of Defense Technology, Changsha, Hunan 410073, China
- Wei Xiao
- State Key Laboratory of Advanced Optical Communication Systems and Networks, Department of Electronics, and Center for Quantum Information Technology, Peking University, Beijing 100871, China
- Yudong Ding
- State Key Laboratory of Advanced Optical Communication Systems and Networks, Department of Electronics, and Center for Quantum Information Technology, Peking University, Beijing 100871, China
- Yulong Feng
- State Key Laboratory of Advanced Optical Communication Systems and Networks, Department of Electronics, and Center for Quantum Information Technology, Peking University, Beijing 100871, China
- Xiang Peng
- State Key Laboratory of Advanced Optical Communication Systems and Networks, Department of Electronics, and Center for Quantum Information Technology, Peking University, Beijing 100871, China
- Liang Shen
- State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Chenxi Sun
- State Key Laboratory of Advanced Optical Communication Systems and Networks, Department of Electronics, and Center for Quantum Information Technology, Peking University, Beijing 100871, China
- Teng Wu
- State Key Laboratory of Advanced Optical Communication Systems and Networks, Department of Electronics, and Center for Quantum Information Technology, Peking University, Beijing 100871, China
- Yulong Wu
- State Key Laboratory of Advanced Optical Communication Systems and Networks, Department of Electronics, and Center for Quantum Information Technology, Peking University, Beijing 100871, China
- Yucheng Yang
- State Key Laboratory of Advanced Optical Communication Systems and Networks, Department of Electronics, and Center for Quantum Information Technology, Peking University, Beijing 100871, China
- Zhaoyu Zheng
- State Key Laboratory of Advanced Optical Communication Systems and Networks, Department of Electronics, and Center for Quantum Information Technology, Peking University, Beijing 100871, China
- Xiangzhi Zhang
- State Key Laboratory of Advanced Optical Communication Systems and Networks, Department of Electronics, and Center for Quantum Information Technology, Peking University, Beijing 100871, China
- Jingbiao Chen
- State Key Laboratory of Advanced Optical Communication Systems and Networks, Department of Electronics, and Center for Quantum Information Technology, Peking University, Beijing 100871, China
- Hong Guo
- State Key Laboratory of Advanced Optical Communication Systems and Networks, Department of Electronics, and Center for Quantum Information Technology, Peking University, Beijing 100871, China
10. Andermann M, Patterson RD, Rupp A. Transient and sustained processing of musical consonance in auditory cortex and the effect of musicality. J Neurophysiol 2020; 123:1320-1331. DOI: 10.1152/jn.00876.2018
Abstract
In recent years, electroencephalography and magnetoencephalography (MEG) have both been used to investigate the response in human auditory cortex to musical sounds that are perceived as consonant or dissonant. These studies have typically focused on the transient components of the physiological activity at sound onset, specifically, the N1 wave of the auditory evoked potential and the auditory evoked field, respectively. Unfortunately, the morphology of the N1 wave is confounded by the prominent neural response to energy onset at stimulus onset. It is also the case that the perception of pitch is not limited to sound onset; the perception lasts as long as the note producing it. This suggests that consonance studies should also consider the sustained activity that appears after the transient components die away. The current MEG study shows how energy-balanced sounds can focus the response waves on the consonance-dissonance distinction rather than energy changes and how source modeling techniques can be used to measure the sustained field associated with extended consonant and dissonant sounds. The study shows that musical dyads evoke distinct transient and sustained neuromagnetic responses in auditory cortex. The form of the response depends on both whether the dyads are consonant or dissonant and whether the listeners are musical or nonmusical. The results also show that auditory cortex requires more time for the early transient processing of dissonant dyads than it does for consonant dyads and that the continuous representation of temporal regularity in auditory cortex might be modulated by processes beyond auditory cortex. NEW & NOTEWORTHY We report a magnetoencephalography (MEG) study on transient and sustained cortical consonance processing. Stimuli were long-duration, energy-balanced, musical dyads that were either consonant or dissonant. Spatiotemporal source analysis revealed specific transient and sustained neuromagnetic activity in response to the dyads; in particular, the morphology of the responses was shaped by the dyad's consonance and the listener's musicality. Our results also suggest that the sustained representation of stimulus regularity might be modulated by processes beyond auditory cortex.
Affiliation(s)
- Martin Andermann
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Heidelberg, Germany
- Roy D. Patterson
- Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, United Kingdom
- André Rupp
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Heidelberg, Germany
11. Edgar JC. Identifying electrophysiological markers of autism spectrum disorder and schizophrenia against a backdrop of normal brain development. Psychiatry Clin Neurosci 2020; 74:1-11. PMID: 31472015; PMCID: PMC10150852; DOI: 10.1111/pcn.12927
Abstract
An examination of electroencephalographic and magnetoencephalographic studies demonstrates how age-related changes in brain neural function temporally constrain their use as diagnostic markers. A first example shows that, given maturational changes in the resting-state peak alpha frequency in typically developing children but not in children who have autism spectrum disorder (ASD), group differences in alpha-band activity characterize only a subset of children who have ASD. A second example, auditory encoding processes in schizophrenia, shows that the complication of normal age-related brain changes on detecting and interpreting group differences in neural activity is not specific to children. MRI studies reporting group differences in the rate of brain maturation demonstrate that a group difference in brain maturation may be a concern for all diagnostic brain markers. Attention to brain maturation is needed whether one takes a DSM-5 or a Research Domain Criteria approach to research. For example, although there is interest in cross-diagnostic studies comparing brain measures in ASD and schizophrenia, such studies are difficult given that measures are obtained in one group well after and in the other much closer to the onset of symptoms. In addition, given differences in brain activity among infants, toddlers, children, adolescents, and younger and older adults, creating tasks and research designs that produce interpretable findings across the life span and yet allow for development is difficult at best. To conclude, brain imaging findings show an effect of brain maturation on diagnostic markers separate from (and potentially difficult to distinguish from) effects of disease processes. Available research with large samples already provides direction about the age range(s) when diagnostic markers are most robust and informative.
Affiliation(s)
- J Christopher Edgar
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, USA
12. Manca AD, Di Russo F, Sigona F, Grimaldi M. Electrophysiological evidence of phonemotopic representations of vowels in the primary and secondary auditory cortex. Cortex 2019; 121:385-398. PMID: 31678684; DOI: 10.1016/j.cortex.2019.09.016
Abstract
How the brain encodes the speech acoustic signal into phonological representations is a fundamental question for the neurobiology of language. Determining whether this process is characterized by tonotopic maps in primary or secondary auditory areas, with bilateral or leftward activity, remains a long-standing challenge. Magnetoencephalographic studies failed to show hierarchical and asymmetric hints for speech processing. We employed high-density electroencephalography to map the Salento Italian vowel system onto cortical sources using the N1 auditory evoked component. We found evidence that the N1 is characterized by hierarchical and asymmetrical indexes in primary and secondary auditory areas structuring vowel representations. Importantly, the N1 was characterized by early and late phases. The early N1 peaked at 125-135 msec and was localized in the primary auditory cortex; the late N1 peaked at 145-155 msec and was localized in the left superior temporal gyrus. We showed that early in the primary auditory cortex, the cortical spatial arrangements-along the lateral-medial and anterior-posterior gradients-are broadly warped by phonemotopic patterns according to the distinctive feature principle. These phonemotopic patterns are carefully refined in the superior temporal gyrus along the inferior-superior and anterior-posterior gradients. The dynamical and hierarchical interface between primary and secondary auditory areas and the interaction effects between Height and Place features generate the categorical representation of the Salento Italian vowels.
Affiliation(s)
- Anna Dora Manca: Centro di Ricerca Interdisciplinare sul Linguaggio (CRIL), University of Salento, Lecce, Italy; Laboratorio Diffuso di Ricerca interdisciplinare Applicata alla Medicina (DReAM), Lecce, Italy
- Francesco Di Russo: Dipartimento di Scienze Motorie, Umane e della Salute, University of Rome "Foro Italico", Rome, Italy; IRCCS Fondazione Santa Lucia, Rome, Italy
- Francesco Sigona: Centro di Ricerca Interdisciplinare sul Linguaggio (CRIL), University of Salento, Lecce, Italy; Laboratorio Diffuso di Ricerca interdisciplinare Applicata alla Medicina (DReAM), Lecce, Italy
- Mirko Grimaldi: Centro di Ricerca Interdisciplinare sul Linguaggio (CRIL), University of Salento, Lecce, Italy; Laboratorio Diffuso di Ricerca interdisciplinare Applicata alla Medicina (DReAM), Lecce, Italy
13
Kühler R, Weichenberger M, Bauer M, Hensel J, Brühl R, Ihlenfeld A, Ittermann B, Sander T, Kühn S, Koch C. Does airborne ultrasound lead to activation of the auditory cortex? Biomed Tech (Berl) 2019;64:481-493. [PMID: 30657739] [DOI: 10.1515/bmt-2018-0048]
Abstract
As airborne ultrasound can be found in many technical applications and everyday situations, the question as to whether sounds at these frequencies can be heard by human beings or whether they present a risk to their hearing system is of great practical relevance. To objectively study these issues, the monaural hearing threshold in the frequency range from 14 to 24 kHz was determined for 26 test subjects between 19 and 33 years of age using pure tone audiometry. The hearing threshold values increased strongly with increasing frequency up to around 21 kHz, followed by a range with a smaller slope toward 24 kHz. The number of subjects who could respond positively to the threshold measurements decreased dramatically above 21 kHz. Brain activation was then measured by means of magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) and with acoustic stimuli at the same frequencies, with sound pressure levels (SPLs) above and below the individual threshold. No auditory cortex activation was found for levels below the threshold. Although test subjects reported audible sounds above the threshold, no brain activity was identified in the above-threshold case under current experimental conditions except at the highest sensation level, which was presented at the lowest test frequency.
Affiliation(s)
- Robert Kühler: Physikalisch-Technische Bundesanstalt (PTB), Braunschweig and Berlin, Bundesallee 100, Braunschweig 38116, Germany
- Markus Weichenberger: Max Planck Institute for Human Development, Center for Lifespan Psychology, Lentzeallee 94, Berlin 14195, Germany
- Martin Bauer: Physikalisch-Technische Bundesanstalt (PTB), Braunschweig and Berlin, Bundesallee 100, Braunschweig 38116, Germany
- Johannes Hensel: Physikalisch-Technische Bundesanstalt (PTB), Braunschweig and Berlin, Bundesallee 100, Braunschweig 38116, Germany
- Rüdiger Brühl: Physikalisch-Technische Bundesanstalt (PTB), Braunschweig and Berlin, Bundesallee 100, Braunschweig 38116, Germany
- Albrecht Ihlenfeld: Physikalisch-Technische Bundesanstalt (PTB), Braunschweig and Berlin, Bundesallee 100, Braunschweig 38116, Germany
- Bernd Ittermann: Physikalisch-Technische Bundesanstalt (PTB), Braunschweig and Berlin, Bundesallee 100, Braunschweig 38116, Germany
- Tilmann Sander: Physikalisch-Technische Bundesanstalt (PTB), Braunschweig and Berlin, Bundesallee 100, Braunschweig 38116, Germany
- Simone Kühn: University Clinic Hamburg-Eppendorf, Clinic for Psychiatry and Psychotherapy, Martinistraße 52, Hamburg 20246, Germany
- Christian Koch: Physikalisch-Technische Bundesanstalt (PTB), Braunschweig and Berlin, Bundesallee 100, Braunschweig 38116, Germany
14
Backer KC, Kessler AS, Lawyer LA, Corina DP, Miller LM. A novel EEG paradigm to simultaneously and rapidly assess the functioning of auditory and visual pathways. J Neurophysiol 2019;122:1312-1329. [PMID: 31268796] [DOI: 10.1152/jn.00868.2018]
Abstract
Objective assessment of the sensory pathways is crucial for understanding their development across the life span and how they may be affected by neurodevelopmental disorders (e.g., autism spectrum) and neurological pathologies (e.g., stroke, multiple sclerosis, etc.). Quick and passive measurements, for example, using electroencephalography (EEG), are especially important when working with infants and young children and with patient populations having communication deficits (e.g., aphasia). However, many EEG paradigms are limited to measuring activity from one sensory domain at a time, may be time consuming, and target only a subset of possible responses from that particular sensory domain (e.g., only auditory brainstem responses or only auditory P1-N1-P2 evoked potentials). Thus we developed a new multisensory paradigm that enables simultaneous, robust, and rapid (6-12 min) measurements of both auditory and visual EEG activity, including auditory brainstem responses, auditory and visual evoked potentials, as well as auditory and visual steady-state responses. This novel method allows us to examine neural activity at various stations along the auditory and visual hierarchies with an ecologically valid continuous speech stimulus, while an unrelated video is playing. Both the speech stimulus and the video can be customized for any population of interest. Furthermore, by using two simultaneous visual steady-state stimulation rates, we demonstrate the ability of this paradigm to track both parafoveal and peripheral visual processing concurrently. We report results from 25 healthy young adults, which validate this new paradigm. NEW & NOTEWORTHY: A novel electroencephalography paradigm enables the rapid, reliable, and noninvasive assessment of neural activity along both auditory and visual pathways concurrently. The paradigm uses an ecologically valid continuous speech stimulus for auditory evaluation and can simultaneously track visual activity to both parafoveal and peripheral visual space. This new methodology may be particularly appealing to researchers and clinicians working with infants and young children and with patient populations with limited communication abilities.
Affiliation(s)
- Kristina C Backer: Center for Mind and Brain, University of California, Davis, California; Department of Cognitive and Information Sciences, University of California, Merced, California
- Andrew S Kessler: Center for Mind and Brain, University of California, Davis, California
- Laurel A Lawyer: Center for Mind and Brain, University of California, Davis, California
- David P Corina: Center for Mind and Brain, University of California, Davis, California; Department of Linguistics, University of California, Davis, California
- Lee M Miller: Center for Mind and Brain, University of California, Davis, California; Department of Neurobiology, Physiology, and Behavior, University of California, Davis, California
15
Mathias B, Gehring WJ, Palmer C. Electrical Brain Responses Reveal Sequential Constraints on Planning during Music Performance. Brain Sci 2019;9:E25. [PMID: 30696038] [PMCID: PMC6406892] [DOI: 10.3390/brainsci9020025]
Abstract
Elements in speech and music unfold sequentially over time. To produce sentences and melodies quickly and accurately, individuals must plan upcoming sequence events, as well as monitor outcomes via auditory feedback. We investigated the neural correlates of sequential planning and monitoring processes by manipulating auditory feedback during music performance. Pianists performed isochronous melodies from memory at an initially cued rate while their electroencephalogram was recorded. Pitch feedback was occasionally altered to match either an immediately upcoming Near-Future pitch (next sequence event) or a more distant Far-Future pitch (two events ahead of the current event). Near-Future, but not Far-Future, altered feedback perturbed the timing of pianists' performances, suggesting greater interference of Near-Future sequential events with current planning processes. Near-Future feedback triggered a greater reduction in auditory sensory suppression (enhanced response) than Far-Future feedback, reflected in the P2 component elicited by the pitch event following the unexpected pitch change. Greater timing perturbations were associated with enhanced cortical sensory processing of the pitch event following the Near-Future altered feedback. Both types of feedback alterations elicited feedback-related negativity (FRN) and P3a potentials and amplified spectral power in the theta frequency range. These findings suggest similar constraints on producers' sequential planning to those reported in speech production.
Affiliation(s)
- Brian Mathias: Department of Psychology, McGill University, Montreal, QC H3A 1B1, Canada; Research Group Neural Mechanisms of Human Communication, Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany
- William J Gehring: Department of Psychology, University of Michigan, Ann Arbor, MI 48109, USA
- Caroline Palmer: Department of Psychology, McGill University, Montreal, QC H3A 1B1, Canada
16
Saga N, Yano H, Takiguchi T, Soeta Y, Nakagawa S. Spatiotemporal Characteristics of Cortical Activities Associated with Articulation of Speech Perception. Annu Int Conf IEEE Eng Med Biol Soc 2018;2018:1066-1069. [PMID: 30440575] [DOI: 10.1109/embc.2018.8512500]
Abstract
Recently, brain-computer interface (BCI) technologies that control external devices with human brain signals have been developed. However, most BCI systems, such as the P300 speller, can only discriminate among options that have been given in advance. Therefore, the ability to decode the state of a person's perception and recognition, as well as that person's fundamental intention and emotions, from cortical activity is needed to develop a more general-use BCI system. In this study, two experiments were conducted. First, articulations were measured for Japanese monosyllabic utterances masked by several levels of noise. Second, auditory brain magnetic fields evoked by the monosyllable stimuli used in the first experiment were recorded, and neuronal current sources were localized in regions associated with speech perception and recognition: the auditory cortex (BA41), Wernicke's area (posterior part of BA22), Broca's area (BA22), motor (BA4), and premotor (BA6) areas. Although the source intensity did not systematically change with SNR, the peak latency changed with SNR in the posterior superior temporal gyrus in the right hemisphere. The results suggest that the information associated with articulation is processed in this area.
17
Edgar JC, Fisk CL, Chen YH, Stone-Howell B, Liu S, Hunter MA, Huang M, Bustillo J, Cañive JM, Miller GA. Identifying auditory cortex encoding abnormalities in schizophrenia: The utility of low-frequency versus 40 Hz steady-state measures. Psychophysiology 2018;55:e13074. [PMID: 29570815] [DOI: 10.1111/psyp.13074]
Abstract
Magnetoencephalography (MEG) and EEG have identified poststimulus low frequency and 40 Hz steady-state auditory encoding abnormalities in schizophrenia (SZ). Negative findings have also appeared. To identify factors contributing to these inconsistencies, healthy control (HC) and SZ group differences were examined in MEG and EEG source space and EEG sensor space, with better group differentiation hypothesized for source than sensor measures given greater predictive utility for source measures. Fifty-five HC and 41 chronic SZ were presented 500 Hz sinusoidal stimuli modulated at 40 Hz during simultaneous whole-head MEG and EEG. MEG and EEG source models using left and right superior temporal gyrus (STG) dipoles estimated trial-to-trial phase similarity and percent change from prestimulus baseline. Group differences in poststimulus low-frequency activity and 40 Hz steady-state response were evaluated. Several EEG sensor analysis strategies were also examined. Poststimulus low-frequency group differences were observed across all methods. Given an age-related decrease in left STG 40 Hz steady-state activity in HC (HC > SZ), 40 Hz steady-state group differences were evident only in younger participants' source measures. Findings thus indicated that optimal data collection and analysis methods depend on the auditory encoding measure of interest. In addition, whereas results indicated that HC and SZ auditory encoding low-frequency group differences are generally comparable across modality and analysis strategy (and thus not dependent on obtaining construct-valid measures of left and right auditory cortex activity), 40 Hz steady-state group-difference findings are much more dependent on analysis strategy, with 40 Hz steady-state source-space findings providing the best group differentiation.
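The trial-to-trial phase similarity measure used in this study, intertrial coherence (ITC), is a standard quantity: the length of the mean unit phase vector across trials at the frequency of interest (0 for random phase, 1 for perfect phase locking). A minimal NumPy sketch on synthetic data (illustrative only, not the authors' analysis code; the function and variable names are hypothetical):

```python
import numpy as np

def intertrial_coherence(trials, sfreq, freq):
    """ITC at one frequency: length of the mean unit phase vector
    across trials (0 = random phase, 1 = perfect phase locking)."""
    trials = np.asarray(trials, dtype=float)        # (n_trials, n_samples)
    spectra = np.fft.rfft(trials, axis=1)           # per-trial complex spectra
    freqs = np.fft.rfftfreq(trials.shape[1], d=1.0 / sfreq)
    k = np.argmin(np.abs(freqs - freq))             # bin closest to target freq
    unit = spectra[:, k] / np.abs(spectra[:, k])    # discard amplitude, keep phase
    return np.abs(unit.mean())

# Synthetic check: identical-phase 40 Hz trials vs. random-phase trials.
sfreq = 1000.0
t = np.arange(1000) / sfreq                         # 1 s epochs
locked = np.array([np.sin(2 * np.pi * 40 * t) for _ in range(50)])
rng = np.random.default_rng(0)
jittered = np.array([np.sin(2 * np.pi * 40 * t + p)
                     for p in rng.uniform(0, 2 * np.pi, 50)])
print(round(intertrial_coherence(locked, sfreq, 40.0), 2))   # prints 1.0
print(intertrial_coherence(jittered, sfreq, 40.0) < 0.5)     # prints True
```

In practice the per-trial spectra would typically come from wavelet or Hilbert decompositions of the epoched data, but the phase-vector averaging step is the same.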
Affiliation(s)
- J C Edgar: The Children's Hospital of Philadelphia and University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Charles L Fisk: The Children's Hospital of Philadelphia and University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Yu-Han Chen: The Children's Hospital of Philadelphia and University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Breannan Stone-Howell: Department of Psychiatry, The University of New Mexico School of Medicine, Center for Psychiatric Research, Albuquerque, New Mexico, USA; New Mexico Raymond G. Murphy VA Healthcare System, Psychiatry Research, Albuquerque, New Mexico, USA
- Song Liu: The Children's Hospital of Philadelphia and University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Michael A Hunter: Department of Psychiatry, The University of New Mexico School of Medicine, Center for Psychiatric Research, Albuquerque, New Mexico, USA; New Mexico Raymond G. Murphy VA Healthcare System, Psychiatry Research, Albuquerque, New Mexico, USA
- Mingxiong Huang: Department of Radiology, University of California, San Diego, San Diego, California, USA; Department of Radiology, San Diego VA Healthcare System, San Diego, California, USA
- Juan Bustillo: Department of Psychiatry, The University of New Mexico School of Medicine, Center for Psychiatric Research, Albuquerque, New Mexico, USA
- José M Cañive: Department of Psychiatry, The University of New Mexico School of Medicine, Center for Psychiatric Research, Albuquerque, New Mexico, USA; New Mexico Raymond G. Murphy VA Healthcare System, Psychiatry Research, Albuquerque, New Mexico, USA
- Gregory A Miller: Department of Psychology and Department of Psychiatry and Biobehavioral Sciences, University of California, Los Angeles, Los Angeles, California, USA
18
Beck AK, Lütjens G, Schwabe K, Dengler R, Krauss JK, Sandmann P. Thalamic and basal ganglia regions are involved in attentional processing of behaviorally significant events: evidence from simultaneous depth and scalp EEG. Brain Struct Funct 2017;223:461-474. [PMID: 28871419] [DOI: 10.1007/s00429-017-1506-z]
Abstract
Extensive descriptions exist on cortical responses to change in the acoustic environment. However, the involvement of subcortical regions is not well understood. Here we present simultaneous recordings of cortical and subcortical event-related potentials (ERPs) to different pure tones in patients undergoing surgery for deep brain stimulation (DBS). These patients had externalized electrodes in the subthalamic nucleus (STN), the ventrolateral posterior thalamus (VLp) or the globus pallidus internus (GPi). Subcortical and cortical ERPs were analyzed upon presentation of one frequent non-target stimulus and two infrequent stimuli, either being a target or a distractor stimulus. The results revealed that amplitudes of scalp-recorded P3 and subcortical late attention-modulated responses (AMR) were largest upon presentation of target stimuli compared with distractor stimuli. This suggests that thalamic and basal ganglia regions are sensitive to behaviorally relevant auditory events. Comparison of the subcortical structures showed that responses in VLp have shorter latency than in GPi and STN. Further, the subcortical responses in VLp and STN emerged significantly prior to the cortical P3 response. Our findings point to higher-order cognitive functions already at a subcortical level. Auditory events are categorized as behaviorally relevant in subcortical loops involving basal ganglia and thalamic regions. This label is then distributed to cortical regions by ascending projections.
Affiliation(s)
- Anne-Kathrin Beck: Department of Neurosurgery, Hannover Medical School, Medical University Hannover, Carl-Neuberg-Str. 1, 30625, Hannover, Germany; Cluster of Excellence "Hearing4all", Hannover, Germany
- Götz Lütjens: Department of Neurosurgery, Hannover Medical School, Medical University Hannover, Carl-Neuberg-Str. 1, 30625, Hannover, Germany
- Kerstin Schwabe: Department of Neurosurgery, Hannover Medical School, Medical University Hannover, Carl-Neuberg-Str. 1, 30625, Hannover, Germany; Cluster of Excellence "Hearing4all", Hannover, Germany
- Reinhard Dengler: Department of Neurology, Medical University Hannover, Carl-Neuberg-Str. 1, 30625, Hannover, Germany; Cluster of Excellence "Hearing4all", Hannover, Germany
- Joachim K Krauss: Department of Neurosurgery, Hannover Medical School, Medical University Hannover, Carl-Neuberg-Str. 1, 30625, Hannover, Germany; Cluster of Excellence "Hearing4all", Hannover, Germany
- Pascale Sandmann: Department of Neurology, Medical University Hannover, Carl-Neuberg-Str. 1, 30625, Hannover, Germany; Department of Otorhinolaryngology, University of Cologne, Kerpener Str. 62, 50937, Cologne, Germany; Cluster of Excellence "Hearing4all", Hannover, Germany
19
Heald SLM, Van Hedger SC, Nusbaum HC. Perceptual Plasticity for Auditory Object Recognition. Front Psychol 2017;8:781. [PMID: 28588524] [PMCID: PMC5440584] [DOI: 10.3389/fpsyg.2017.00781]
Abstract
In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as "noise" in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples of perceptual categories that are thought to be highly stable. This framework suggests that the process of auditory recognition cannot be divorced from the short-term context in which an auditory object is presented. Implications for auditory category acquisition and extant models of auditory perception, both cognitive and neural, are discussed.
20
Edgar JC, Fisk CL, Chen YH, Stone-Howell B, Hunter MA, Huang M, Bustillo JR, Cañive JM, Miller GA. By our bootstraps: Comparing methods for measuring auditory 40 Hz steady-state neural activity. Psychophysiology 2017;54:1110-1127. [PMID: 28421620] [DOI: 10.1111/psyp.12876]
Abstract
Although the 40 Hz auditory steady-state response (ASSR) is of clinical interest, the construct validity of EEG and MEG measures of 40 Hz ASSR cortical microcircuits is unclear. This study evaluated several MEG and EEG metrics by leveraging findings of (a) an association between the 40 Hz ASSR and age in the left but not right hemisphere, and (b) right- > left-hemisphere differences in the strength of the 40 Hz ASSR. The contention is that, if an analysis method does not demonstrate a left 40 Hz ASSR and age relationship or hemisphere differences, then the obtained measures likely have low validity. Fifty-three adults were presented 500 Hz stimuli modulated at 40 Hz while MEG and EEG were collected. ASSR activity was examined as a function of phase similarity (intertrial coherence) and percent change from baseline (total power). A variety of head models (spherical and realistic) and a variety of dipole source modeling strategies (dipole source localization and dipoles fixed to Heschl's gyri) were compared. Several sensor analysis strategies were also tested. EEG sensor measures failed to detect left 40 Hz ASSR and age associations or hemisphere differences. A comparison of MEG and EEG head-source models showed similarity in the 40 Hz ASSR measures and in estimating age and left 40 Hz ASSR associations, indicating good construct validity across models. Given a goal of measuring the 40 Hz ASSR cortical microcircuits, a source-modeling approach was shown to be superior in measuring this construct versus methods that rely on EEG sensor measures.
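The "percent change from baseline (total power)" measure examined here is likewise standard: power at the stimulation frequency is computed per trial and then averaged, so activity that is not phase-locked survives the average, and the post-stimulus value is expressed relative to a pre-stimulus window. A sketch on synthetic data (hypothetical names, not the authors' pipeline):

```python
import numpy as np

def total_power_40hz(segment, sfreq):
    """Power at the FFT bin nearest 40 Hz, computed per trial and then
    averaged across trials ("total" power: non-phase-locked activity
    is retained, unlike power of the trial-averaged evoked response)."""
    spec = np.fft.rfft(segment, axis=1)                  # (n_trials, n_bins)
    freqs = np.fft.rfftfreq(segment.shape[1], d=1.0 / sfreq)
    k = np.argmin(np.abs(freqs - 40.0))
    return np.mean(np.abs(spec[:, k]) ** 2)

rng = np.random.default_rng(1)
sfreq, n_trials, n_samp = 1000.0, 40, 500                # 0.5 s windows
t = np.arange(n_samp) / sfreq

# Baseline: noise only. Post-stimulus: noise plus a 40 Hz steady-state
# response whose phase varies across trials (not phase-locked).
baseline = rng.normal(0.0, 1.0, (n_trials, n_samp))
post = (rng.normal(0.0, 1.0, (n_trials, n_samp))
        + 2.0 * np.sin(2 * np.pi * 40.0 * t
                       + rng.uniform(0, 2 * np.pi, (n_trials, 1))))

pct_change = 100.0 * (total_power_40hz(post, sfreq)
                      - total_power_40hz(baseline, sfreq)) / total_power_40hz(baseline, sfreq)
print(pct_change > 100.0)   # steady-state power far exceeds baseline: True
```

Because the power average is taken after squaring each trial's spectrum, the phase jitter across trials does not cancel the 40 Hz response, which is exactly why "total" rather than "evoked" power is the quantity of interest here.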
Affiliation(s)
- J Christopher Edgar: Children's Hospital of Philadelphia and University of Pennsylvania, Philadelphia, Pennsylvania
- Charles L Fisk: Children's Hospital of Philadelphia and University of Pennsylvania, Philadelphia, Pennsylvania
- Yu-Han Chen: Children's Hospital of Philadelphia and University of Pennsylvania, Philadelphia, Pennsylvania
- Breannan Stone-Howell: University of New Mexico School of Medicine, Department of Psychiatry, Center for Psychiatric Research, Albuquerque, New Mexico; New Mexico Raymond G. Murphy VA Healthcare System, Psychiatry Research, Albuquerque, New Mexico
- Michael A Hunter: University of New Mexico School of Medicine, Department of Psychiatry, Center for Psychiatric Research, Albuquerque, New Mexico; New Mexico Raymond G. Murphy VA Healthcare System, Psychiatry Research, Albuquerque, New Mexico
- Mingxiong Huang: University of California, San Diego, Department of Radiology, San Diego, California; San Diego VA Healthcare System, Department of Radiology, San Diego, California
- Juan R Bustillo: University of New Mexico School of Medicine, Department of Psychiatry, Center for Psychiatric Research, Albuquerque, New Mexico
- José M Cañive: University of New Mexico School of Medicine, Department of Psychiatry, Center for Psychiatric Research, Albuquerque, New Mexico; New Mexico Raymond G. Murphy VA Healthcare System, Psychiatry Research, Albuquerque, New Mexico
- Gregory A Miller: University of California, Los Angeles, Department of Psychology and Department of Psychiatry and Biobehavioral Sciences, Los Angeles, California
21
Boubenec Y, Lawlor J, Górska U, Shamma S, Englitz B. Detecting changes in dynamic and complex acoustic environments. eLife 2017;6:e24910. [PMID: 28262095] [PMCID: PMC5367897] [DOI: 10.7554/eLife.24910]
Abstract
Natural sounds, such as wind or rain, are characterized by the statistical occurrence of their constituents. Despite their complexity, listeners readily detect changes in these contexts. Here we address the neural basis of statistical decision-making using a combination of psychophysics, EEG and modelling. In a texture-based, change-detection paradigm, human performance and reaction times improved with longer pre-change exposure, consistent with improved estimation of baseline statistics. Change-locked and decision-related EEG responses were found in a centro-parietal scalp location, whose slope depended on change size, consistent with sensory evidence accumulation. The potential's amplitude scaled with the duration of pre-change exposure, suggesting a time-dependent decision threshold. Auditory cortex-related potentials showed no response to the change. A dual timescale, statistical estimation model accounted for subjects' performance. Furthermore, a decision-augmented auditory cortex model accounted for performance and reaction times, suggesting that the primary cortical representation requires little post-processing to enable change-detection in complex acoustic environments.
Affiliation(s)
- Yves Boubenec: Laboratoire des Systèmes Perceptifs, CNRS UMR 8248, Paris, France; Département d'études cognitives, École normale supérieure, PSL Research University, Paris, France
- Jennifer Lawlor: Laboratoire des Systèmes Perceptifs, CNRS UMR 8248, Paris, France; Département d'études cognitives, École normale supérieure, PSL Research University, Paris, France
- Urszula Górska: Department of Neurophysiology, Donders Centre for Neuroscience, Radboud Universiteit, Nijmegen, Netherlands; Psychophysiology Laboratory, Institute of Psychology, Jagiellonian University, Krakow, Poland; Smoluchowski Institute of Physics, Jagiellonian University, Krakow, Poland
- Shihab Shamma: Laboratoire des Systèmes Perceptifs, CNRS UMR 8248, Paris, France; Département d'études cognitives, École normale supérieure, PSL Research University, Paris, France; Department of Electrical and Computer Engineering, University of Maryland, College Park, United States; Institute for Systems Research, University of Maryland, College Park, United States
- Bernhard Englitz: Laboratoire des Systèmes Perceptifs, CNRS UMR 8248, Paris, France; Département d'études cognitives, École normale supérieure, PSL Research University, Paris, France; Department of Neurophysiology, Donders Centre for Neuroscience, Radboud Universiteit, Nijmegen, Netherlands
22
Schierholz I, Finke M, Kral A, Büchner A, Rach S, Lenarz T, Dengler R, Sandmann P. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study. Hum Brain Mapp 2017;38:2206-2225. [PMID: 28130910] [DOI: 10.1002/hbm.23515]
Abstract
There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may have important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses.
Affiliation(s)
- Irina Schierholz: Department of Neurology, Hannover Medical School, Hannover, Germany; Cluster of Excellence "Hearing4all", Hannover, Germany; Department of Otolaryngology, Hannover Medical School, Hannover, Germany
- Mareike Finke: Cluster of Excellence "Hearing4all", Hannover, Germany; Department of Otolaryngology, Hannover Medical School, Hannover, Germany
- Andrej Kral: Cluster of Excellence "Hearing4all", Hannover, Germany; Department of Otolaryngology, Hannover Medical School, Hannover, Germany; Institute of AudioNeuroTechnology and Department of Experimental Otology, Hannover Medical School, Hannover, Germany; School of Behavioral and Brain Sciences, The University of Texas at Dallas, Dallas, Texas
- Andreas Büchner: Cluster of Excellence "Hearing4all", Hannover, Germany; Department of Otolaryngology, Hannover Medical School, Hannover, Germany
- Stefan Rach: Department of Epidemiological Methods and Etiological Research, Leibniz Institute for Prevention Research and Epidemiology - BIPS, Bremen, Germany
- Thomas Lenarz: Cluster of Excellence "Hearing4all", Hannover, Germany; Department of Otolaryngology, Hannover Medical School, Hannover, Germany
- Reinhard Dengler: Department of Neurology, Hannover Medical School, Hannover, Germany; Cluster of Excellence "Hearing4all", Hannover, Germany
- Pascale Sandmann: Department of Neurology, Hannover Medical School, Hannover, Germany; Cluster of Excellence "Hearing4all", Hannover, Germany; Department of Otorhinolaryngology, University Hospital Cologne, Cologne, Germany
23
Kawase T, Yahata I, Kanno A, Sakamoto S, Takanashi Y, Takata S, Nakasato N, Kawashima R, Katori Y. Impact of Audio-Visual Asynchrony on Lip-Reading Effects: Neuromagnetic and Psychophysical Study. PLoS One 2016; 11:e0168740. [PMID: 28030631] [PMCID: PMC5193434] [DOI: 10.1371/journal.pone.0168740] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3]
Abstract
The effects of asynchrony between audio and visual (A/V) stimuli on the N100m responses of magnetoencephalography in the left hemisphere were compared with those on psychophysical responses in 11 participants. The latency and amplitude of the N100m in the left hemisphere were significantly shortened and reduced, respectively, by the presentation of visual speech as long as the temporal asynchrony between the A/V stimuli was within 100 ms, but were not significantly affected at audio lags of -500 and +500 ms. However, some small effects were still preserved on average at audio lags of 500 ms, suggesting an asymmetry of the temporal window similar to that observed in psychophysical measurements, which tended to be more robust (wider) for audio lags; i.e., the pattern of visual-speech effects as a function of A/V lag observed in the N100m in the left hemisphere broadly resembled that in the psychophysical measurements on average, although individual responses varied somewhat. The present results suggest that the basic configuration of the temporal window of visual effects on auditory-speech perception is already observable at an early stage of auditory processing.
Affiliation(s)
- Tetsuaki Kawase: Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan; Laboratory of Rehabilitative Auditory Science, Tohoku University Graduate School of Biomedical Engineering, Sendai, Miyagi, Japan; Department of Audiology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Izumi Yahata: Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Akitake Kanno: Department of Functional Brain Imaging, Institute of Development, Aging and Cancer, Tohoku University, Sendai, Miyagi, Japan
- Shuichi Sakamoto: Research Institute of Electrical Communication, Tohoku University, Sendai, Miyagi, Japan
- Yoshitaka Takanashi: Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Shiho Takata: Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Nobukazu Nakasato: Department of Epileptology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Ryuta Kawashima: Department of Functional Brain Imaging, Institute of Development, Aging and Cancer, Tohoku University, Sendai, Miyagi, Japan
- Yukio Katori: Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
24
Haigh SM, Coffman BA, Murphy TK, Butera CD, Salisbury DF. Abnormal auditory pattern perception in schizophrenia. Schizophr Res 2016; 176:473-479. [PMID: 27502427] [PMCID: PMC5026944] [DOI: 10.1016/j.schres.2016.07.007] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5]
Abstract
Mismatch negativity (MMN) in response to deviations from physical sound parameters (e.g., pitch, duration) is reduced in individuals with long-term schizophrenia (Sz), suggesting deficits in deviance detection. However, MMN can appear at several time intervals as part of deviance detection, and understanding which part of the processing stream is abnormal in Sz is crucial for understanding MMN pathophysiology. We measured MMN to complex pattern deviants, which have been shown to produce multiple MMNs in healthy controls (HC). Both simple and complex MMNs were recorded from 27 Sz and 27 matched HC. For simple MMN, pitch- and duration-deviants were presented among frequent standard tones. For complex MMN, patterns of five single tones were repeatedly presented, with the occasional deviant group of tones containing an extra sixth tone. Sz showed smaller pitch MMN (p = 0.009, ~110 ms) and duration MMN (p = 0.030, ~170 ms) than HC. For complex MMN, there were two deviance-related negativities: the first (~150 ms) did not differ significantly between HC and Sz, whereas the second (~400 ms) was significantly reduced in Sz (p = 0.011). The topography of the late complex MMN was consistent with generators in anterior temporal cortex. Worse late MMN in Sz was associated with increased emotional withdrawal, poor attention, lack of spontaneity/conversation, and increased preoccupation. Late MMN blunting in schizophrenia suggests a deficit in later stages of deviance processing. The correlations with negative-symptom measures are preliminary but suggest that abnormal complex auditory perceptual processes may compound higher-order cognitive and social deficits in the disorder.
Affiliation(s)
- Sarah M Haigh: Department of Psychiatry, University of Pittsburgh School of Medicine, 3501 Forbes Avenue, Pittsburgh, PA 15213, United States
- Brian A Coffman: Department of Psychiatry, University of Pittsburgh School of Medicine, 3501 Forbes Avenue, Pittsburgh, PA 15213, United States
- Timothy K Murphy: Department of Psychiatry, University of Pittsburgh School of Medicine, 3501 Forbes Avenue, Pittsburgh, PA 15213, United States
- Christiana D Butera: Department of Psychiatry, University of Pittsburgh School of Medicine, 3501 Forbes Avenue, Pittsburgh, PA 15213, United States
- Dean F Salisbury: Department of Psychiatry, University of Pittsburgh School of Medicine, 3501 Forbes Avenue, Pittsburgh, PA 15213, United States
25
Manca AD, Grimaldi M. Vowels and Consonants in the Brain: Evidence from Magnetoencephalographic Studies on the N1m in Normal-Hearing Listeners. Front Psychol 2016; 7:1413. [PMID: 27713712] [PMCID: PMC5031792] [DOI: 10.3389/fpsyg.2016.01413] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3]
Abstract
Speech sound perception is one of the most fascinating tasks performed by the human brain. It involves mapping continuous acoustic waveforms onto the discrete phonological units used to store words in the mental lexicon. In this article, we review the magnetoencephalographic studies that have explored the timing and morphology of the N1m component to investigate how vowels and consonants are computed and represented within the auditory cortex. The neurons involved in the N1m construct a sensory memory of the stimulus through spatially and temporally distributed activation patterns within the auditory cortex. Indeed, the localization of auditory field maps in animals and humans has suggested two levels of sound coding: a tonotopy dimension for the spectral properties of sounds and a tonochrony dimension for their temporal properties. When the stimulus is a complex speech sound, tonotopy and tonochrony data may help assess whether speech sound parsing and decoding reflect purely bottom-up acoustic differences or are additionally affected by top-down processes related to phonological categories. Hints supporting pure bottom-up processing coexist with hints supporting top-down abstract phoneme representation. At present, N1m data (amplitude, latency, source generators, and hemispheric distribution) are limited and do not disentangle the issue; the nature of these limitations is discussed. Moreover, neurophysiological studies on animals and neuroimaging studies on humans are taken into consideration. We also compare the N1m findings with investigations of the magnetic mismatch negativity (MMNm) component and with the analogous electrical components, the N1 and the MMN. We conclude that the N1 seems more sensitive than the N1m for capturing lateralization and hierarchical processes, although the data are very preliminary.
Finally, we suggest that MEG data should be integrated with EEG data in light of the neural-oscillations framework, and we raise some concerns that future investigations should address if language research is to be closely aligned with issues at the core of functional brain mechanisms.
Affiliation(s)
- Anna Dora Manca: Dipartimento di Studi Umanistici, Centro di Ricerca Interdisciplinare sul Linguaggio, University of Salento, Lecce, Italy; Laboratorio Diffuso di Ricerca Interdisciplinare Applicata alla Medicina, Lecce, Italy
- Mirko Grimaldi: Dipartimento di Studi Umanistici, Centro di Ricerca Interdisciplinare sul Linguaggio, University of Salento, Lecce, Italy; Laboratorio Diffuso di Ricerca Interdisciplinare Applicata alla Medicina, Lecce, Italy
26
Abstract
We assessed neural sensitivity to interaural time differences (ITDs) conveyed in the temporal fine structure (TFS) of low-frequency sounds and ITDs conveyed in the temporal envelope of amplitude-modulated (AM) high-frequency sounds. Using electroencephalography (EEG), we recorded brain activity to sounds in which the interaural phase difference (IPD) of the TFS (or of the modulated temporal envelope) was repeatedly switched between leading in one ear or the other. When the amplitude of the tones is modulated equally in the two ears at 41 Hz, this interaural phase modulation (IPM) evokes an IPM following-response (IPM-FR) in the EEG signal. For low-frequency signals, IPM-FRs were reliably obtained and were largest for an IPM rate of 6.8 Hz and for IPD switches (around 0°) in the range 45-90°. IPDs conveyed in the envelope of high-frequency tones also generated IPM-FRs; response maxima occurred when the IPD was switched between 0° and 180°. This is consistent with the interpretation that distinct binaural mechanisms generate the IPM-FR at low and high frequencies, and with the reported physiological responses of medial superior olive (MSO) and lateral superior olive (LSO) neurons in other mammals. Low-frequency binaural neurons in the MSO are considered maximally activated by IPDs in the range 45-90°, consistent with their reception of excitatory inputs from both ears. High-frequency neurons in the LSO receive excitatory and inhibitory input from the two ears, respectively; as such, maximum activity occurs when the sounds at the two ears are presented out of phase.
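The IPM stimulus described above can be sketched in a few lines. This is a hedged illustration, not the authors' stimulus code: the carrier frequency, sample rate, IPD depth, and the exact switching scheme are assumptions; only the 41-Hz amplitude modulation and the ~6.8-Hz IPM rate come from the abstract.

```python
import math

FS = 16000          # sample rate in Hz (assumed)
CARRIER = 500.0     # low-frequency carrier in Hz (assumed)
AM_RATE = 41.0      # amplitude-modulation rate from the abstract (Hz)
IPM_RATE = 6.8      # approximate IPD-switch rate from the abstract (Hz)

def ipm_stimulus(duration_s, ipd_deg=45.0):
    """Generate left/right waveforms with a common 41-Hz AM envelope and a
    carrier IPD whose sign alternates at roughly the IPM rate. In the actual
    paradigm the switches fall at envelope minima; this sketch only
    approximates that alignment."""
    n = int(duration_s * FS)
    left, right = [], []
    for i in range(n):
        t = i / FS
        # raised-cosine AM envelope, identical at the two ears
        env = 0.5 * (1.0 - math.cos(2.0 * math.pi * AM_RATE * t))
        # square-wave IPD schedule: the sign of the IPD flips periodically
        sign = 1.0 if math.sin(2.0 * math.pi * IPM_RATE * t) >= 0.0 else -1.0
        phi = sign * math.radians(ipd_deg) / 2.0  # split the IPD across ears
        left.append(env * math.sin(2.0 * math.pi * CARRIER * t + phi))
        right.append(env * math.sin(2.0 * math.pi * CARRIER * t - phi))
    return left, right
```

Because the envelope is diotic, only the periodic IPD change can drive a following response at the switch rate, which is what makes the IPM-FR a direct measure of binaural sensitivity.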
27
Temporal Lobe Epilepsy Alters Auditory-Motor Integration for Voice Control. Sci Rep 2016; 6:28909. [PMID: 27356768] [PMCID: PMC4928116] [DOI: 10.1038/srep28909] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8]
Abstract
Temporal lobe epilepsy (TLE) is the most common drug-refractory focal epilepsy in adults. Previous research has shown that patients with TLE exhibit decreased performance in listening to speech sounds and deficits in the cortical processing of auditory information. Whether TLE compromises auditory-motor integration for voice control, however, remains largely unknown. To address this question, event-related potentials (ERPs) and vocal responses to vocal pitch errors (1/2 or 2 semitones upward) heard in auditory feedback were compared across 28 patients with TLE and 28 healthy controls. Patients with TLE produced significantly larger vocal responses but smaller P2 responses than healthy controls. Moreover, patients with TLE exhibited a positive correlation between vocal response magnitude and baseline voice variability and a negative correlation between P2 amplitude and disease duration. Graphical network analyses revealed a disrupted neuronal network for patients with TLE with a significant increase of clustering coefficients and path lengths as compared to healthy controls. These findings provide strong evidence that TLE is associated with an atypical integration of the auditory and motor systems for vocal pitch regulation, and that the functional networks that support the auditory-motor processing of pitch feedback errors differ between patients with TLE and healthy controls.
28
Port RG, Edgar JC, Ku M, Bloy L, Murray R, Blaskey L, Levy SE, Roberts TPL. Maturation of auditory neural processes in autism spectrum disorder - A longitudinal MEG study. Neuroimage Clin 2016; 11:566-577. [PMID: 27158589] [PMCID: PMC4844592] [DOI: 10.1016/j.nicl.2016.03.021] [Citation(s) in RCA: 56] [Impact Index Per Article: 7.0]
Abstract
BACKGROUND Individuals with autism spectrum disorder (ASD) show atypical brain activity, perhaps due to delayed maturation. Previous studies examining the maturation of auditory electrophysiological activity have been limited by their use of cross-sectional designs. The present study took a first step toward examining magnetoencephalography (MEG) evidence of abnormal auditory response maturation in ASD via a longitudinal design. METHODS Initially recruited for a previous study, 27 children with ASD and nine typically developing (TD) children, aged 6 to 11 years, were re-recruited two to five years later. At both timepoints, MEG data were obtained while participants passively listened to sinusoidal pure tones. Bilateral primary/secondary auditory cortex time-domain (100-ms evoked response latency (M100)) and spectrotemporal measures (gamma-band power and inter-trial coherence (ITC)) were examined. MEG measures were also qualitatively examined for five children who exhibited "optimal outcome": participants who were initially on the spectrum but no longer met diagnostic criteria at follow-up. RESULTS M100 latencies were delayed in ASD versus TD at the initial exam (~19 ms) and at follow-up (~18 ms). At both exams, M100 latencies were associated with clinical ASD severity. In addition, gamma-band evoked power and ITC were reduced in ASD versus TD. M100 latency and gamma-band maturation rates did not differ between ASD and TD. Of note, the cohort of five children who demonstrated "optimal outcome" exhibited M100 latency and gamma-band activity mean values in between those of TD and ASD at both timepoints. Though justifying only qualitative interpretation, these "optimal outcome" data are presented here to motivate future studies. CONCLUSIONS Children with ASD showed perturbed auditory cortex neural activity, as evidenced by M100 latency delays and reduced transient gamma-band activity.
Despite evidence for maturation of these responses in ASD, the neural abnormalities persisted across time. Data from the five children who demonstrated "optimal outcome" qualitatively suggest that such clinical improvements may be associated with auditory brain responses intermediate between TD and ASD; these results are not statistically significant, however, likely owing to the small size of this cohort, as expected given the relatively low proportion of "optimal outcome" in the ASD population. Further investigations with larger cohorts are needed to determine whether the above auditory response phenotypes have prognostic utility, predictive of clinical outcome.
Affiliation(s)
- Russell G Port: Lurie Family Foundations MEG Imaging Center, Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- J Christopher Edgar: Lurie Family Foundations MEG Imaging Center, Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Matthew Ku: Lurie Family Foundations MEG Imaging Center, Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Luke Bloy: Lurie Family Foundations MEG Imaging Center, Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Rebecca Murray: Lurie Family Foundations MEG Imaging Center, Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Lisa Blaskey: Lurie Family Foundations MEG Imaging Center, Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Susan E Levy: Department of Pediatrics, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Timothy P L Roberts: Lurie Family Foundations MEG Imaging Center, Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
29
Dale CL, Brown EG, Fisher M, Herman AB, Dowling AF, Hinkley LB, Subramaniam K, Nagarajan SS, Vinogradov S. Auditory Cortical Plasticity Drives Training-Induced Cognitive Changes in Schizophrenia. Schizophr Bull 2016; 42:220-228. [PMID: 26152668] [PMCID: PMC4681549] [DOI: 10.1093/schbul/sbv087] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.0]
Abstract
Schizophrenia is characterized by dysfunction in basic auditory processing, as well as higher-order operations of verbal learning and executive functions. We investigated whether targeted cognitive training of auditory processing improves neural responses to speech stimuli, and how these changes relate to higher-order cognitive functions. Patients with schizophrenia performed an auditory syllable identification task during magnetoencephalography before and after 50 hours of either targeted cognitive training or a computer games control. Healthy comparison subjects were assessed at baseline and after a 10 week no-contact interval. Prior to training, patients (N = 34) showed reduced M100 response in primary auditory cortex relative to healthy participants (N = 13). At reassessment, only the targeted cognitive training patient group (N = 18) exhibited increased M100 responses. Additionally, this group showed increased induced high gamma band activity within left dorsolateral prefrontal cortex immediately after stimulus presentation, and later in bilateral temporal cortices. Training-related changes in neural activity correlated with changes in executive function scores but not verbal learning and memory. These data suggest that computerized cognitive training that targets auditory and verbal learning operations enhances both sensory responses in auditory cortex as well as engagement of prefrontal regions, as indexed during an auditory processing task with low demands on working memory. This neural circuit enhancement is in turn associated with better executive function but not verbal memory.
Affiliation(s)
- Corby L. Dale: Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA; Northern California Institute for Research and Education (NCIRE), San Francisco Veterans' Affairs Medical Center, San Francisco, CA
- Melissa Fisher: Northern California Institute for Research and Education (NCIRE), San Francisco Veterans' Affairs Medical Center, San Francisco, CA; Department of Psychiatry, University of California, San Francisco, San Francisco, CA
- Alexander B. Herman: Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA; UC Berkeley - UC San Francisco Graduate Program in Bioengineering, San Francisco, CA
- Anne F. Dowling: Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA
- Leighton B. Hinkley: Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA
- Karuna Subramaniam: Northern California Institute for Research and Education (NCIRE), San Francisco Veterans' Affairs Medical Center, San Francisco, CA; Department of Psychiatry, University of California, San Francisco, San Francisco, CA
- Srikantan S. Nagarajan: Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA; UC Berkeley - UC San Francisco Graduate Program in Bioengineering, San Francisco, CA
- Sophia Vinogradov: Northern California Institute for Research and Education (NCIRE), San Francisco Veterans' Affairs Medical Center, San Francisco, CA; Department of Psychiatry, University of California, San Francisco, San Francisco, CA
30
Haywood NR, Undurraga JA, Marquardt T, McAlpine D. A Comparison of Two Objective Measures of Binaural Processing: The Interaural Phase Modulation Following Response and the Binaural Interaction Component. Trends Hear 2015; 19:2331216515619039. [PMID: 26721925] [PMCID: PMC4771038] [DOI: 10.1177/2331216515619039] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4]
Abstract
There has been continued interest in clinical objective measures of binaural processing. One commonly proposed measure is the binaural interaction component (BIC), which is obtained typically by recording auditory brainstem responses (ABRs)—the BIC reflects the difference between the binaural ABR and the sum of the monaural ABRs (i.e., binaural − (left + right)). We have recently developed an alternative, direct measure of sensitivity to interaural time differences, namely, a following response to modulations in interaural phase difference (the interaural phase modulation following response; IPM-FR). To obtain this measure, an ongoing diotically amplitude-modulated signal is presented, and the interaural phase difference of the carrier is switched periodically at minima in the modulation cycle. Such periodic modulations to interaural phase difference can evoke a steady state following response. BIC and IPM-FR measurements were compared from 10 normal-hearing subjects using a 16-channel electroencephalographic system. Both ABRs and IPM-FRs were observed most clearly from similar electrode locations—differential recordings taken from electrodes near the ear (e.g., mastoid) in reference to a vertex electrode (Cz). Although all subjects displayed clear ABRs, the BIC was not reliably observed. In contrast, the IPM-FR typically elicited a robust and significant response. In addition, the IPM-FR measure required a considerably shorter recording session. As the IPM-FR magnitude varied with interaural phase difference modulation depth, it could potentially serve as a correlate of perceptual salience. Overall, the IPM-FR appears a more suitable clinical measure than the BIC.
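The BIC defined above is a pointwise difference waveform: binaural − (left + right). A minimal sketch of that computation (the function name and the toy waveform values are illustrative, not from the study's analysis code):

```python
def binaural_interaction_component(binaural, left, right):
    """BIC[t] = binaural[t] - (left[t] + right[t]), computed sample by sample.

    Inputs are epoch-averaged ABR waveforms sampled on a common time base.
    """
    return [b - (l + r) for b, l, r in zip(binaural, left, right)]

# Toy epoch-averaged waveforms in arbitrary units (made-up numbers):
binaural_abr = [0.00, 1.20, 2.00, 1.00]
left_abr     = [0.00, 0.50, 0.90, 0.40]
right_abr    = [0.00, 0.60, 1.00, 0.50]

bic = binaural_interaction_component(binaural_abr, left_abr, right_abr)
```

A nonzero BIC at some latency indicates that the binaural response is not simply the sum of the monaural responses, which is the interaction the measure is meant to capture.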
Affiliation(s)
- Nicholas R Haywood: UCL Ear Institute, UCL School of Life and Medical Sciences, University College London, UK
- Jaime A Undurraga: UCL Ear Institute, UCL School of Life and Medical Sciences, University College London, UK
- Torsten Marquardt: UCL Ear Institute, UCL School of Life and Medical Sciences, University College London, UK
- David McAlpine: UCL Ear Institute, UCL School of Life and Medical Sciences, University College London, UK
31
Edgar JC, Fisk CL IV, Berman JI, Chudnovskaya D, Liu S, Pandey J, Herrington JD, Port RG, Schultz RT, Roberts TPL. Auditory encoding abnormalities in children with autism spectrum disorder suggest delayed development of auditory cortex. Mol Autism 2015; 6:69. [PMID: 26719787] [PMCID: PMC4696177] [DOI: 10.1186/s13229-015-0065-5] [Citation(s) in RCA: 62] [Impact Index Per Article: 6.9]
Abstract
BACKGROUND Findings of auditory abnormalities in children with autism spectrum disorder (ASD) include delayed superior temporal gyrus (STG) auditory responses, pre- and post-stimulus STG auditory oscillatory abnormalities, and atypical hemispheric lateralization. These abnormalities are likely associated with abnormal brain maturation. To better understand changes in brain activity as a function of age, the present study investigated associations between age and STG auditory time-domain and time-frequency neural activity. METHODS While 306-channel magnetoencephalography (MEG) data were recorded, 500- and 1000-Hz tones of 300-ms duration were binaurally presented. Evaluable data were obtained from 63 typically developing children (TDC) (6 to 14 years old) and 52 children with ASD (6 to 14 years old). T1-weighted structural MRI was obtained, and a source model was created using single dipoles anatomically constrained to each participant's left and right STG. Using this source model, left and right 50-ms (M50), 100-ms (M100), and 200-ms (M200) time-domain and time-frequency measures (total power (TP) and inter-trial coherence (ITC)) were obtained. RESULTS Paired t tests showed a right STG M100 latency delay in ASD versus TDC (significant for right 500 Hz and marginally significant for right 1000 Hz). In the left and right STG, time-frequency analyses showed a greater pre- to post-stimulus increase in 4- to 16-Hz TP for both tones in ASD versus TDC after 150 ms. In the right STG, greater post-stimulus 4- to 16-Hz ITC for both tones was observed in TDC versus ASD after 200 ms. Analyses of age effects suggested M200 group differences due to a maturational delay in ASD, with left and right M200 latencies decreasing with age in TDC but significantly less so in ASD.
Additional evidence of delayed maturation of auditory cortex in ASD included atypical hemispheric functional asymmetries: a right- versus left-hemisphere M100 latency advantage in TDC but not ASD, and a stronger left than right M50 response in TDC but not ASD. CONCLUSIONS The present findings indicate maturational abnormalities in the development of primary/secondary auditory areas in children with ASD. It is hypothesized that a longitudinal investigation of the maturation of auditory network activity will indicate delayed development of each component of the auditory processing system in ASD.
Affiliation(s)
- J Christopher Edgar: Lurie Family Foundations MEG Imaging Center, Department of Radiology, Children's Hospital of Philadelphia, 34th and Civic Center Blvd, Wood Building, Suite 2115, Philadelphia, PA 19104, USA
- Charles L Fisk IV: Lurie Family Foundations MEG Imaging Center, Department of Radiology, Children's Hospital of Philadelphia, 34th and Civic Center Blvd, Wood Building, Suite 2115, Philadelphia, PA 19104, USA
- Jeffrey I Berman: Lurie Family Foundations MEG Imaging Center, Department of Radiology, Children's Hospital of Philadelphia, 34th and Civic Center Blvd, Wood Building, Suite 2115, Philadelphia, PA 19104, USA
- Darina Chudnovskaya: Lurie Family Foundations MEG Imaging Center, Department of Radiology, Children's Hospital of Philadelphia, 34th and Civic Center Blvd, Wood Building, Suite 2115, Philadelphia, PA 19104, USA
- Song Liu: Lurie Family Foundations MEG Imaging Center, Department of Radiology, Children's Hospital of Philadelphia, 34th and Civic Center Blvd, Wood Building, Suite 2115, Philadelphia, PA 19104, USA
- Juhi Pandey: Center for Autism Research, Department of Pediatrics, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- John D Herrington: Center for Autism Research, Department of Pediatrics, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Russell G Port: Lurie Family Foundations MEG Imaging Center, Department of Radiology, Children's Hospital of Philadelphia, 34th and Civic Center Blvd, Wood Building, Suite 2115, Philadelphia, PA 19104, USA
- Robert T Schultz: Center for Autism Research, Department of Pediatrics, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Timothy P L Roberts: Lurie Family Foundations MEG Imaging Center, Department of Radiology, Children's Hospital of Philadelphia, 34th and Civic Center Blvd, Wood Building, Suite 2115, Philadelphia, PA 19104, USA
32
Schierholz I, Finke M, Schulte S, Hauthal N, Kantzke C, Rach S, Büchner A, Dengler R, Sandmann P. Enhanced audio-visual interactions in the auditory cortex of elderly cochlear-implant users. Hear Res 2015; 328:133-147. [DOI: 10.1016/j.heares.2015.08.009] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6]
33
Chen Z, Wong FCK, Jones JA, Li W, Liu P, Chen X, Liu H. Transfer Effect of Speech-sound Learning on Auditory-motor Processing of Perceived Vocal Pitch Errors. Sci Rep 2015; 5:13134. [PMID: 26278337] [PMCID: PMC4538572] [DOI: 10.1038/srep13134] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6]
Abstract
Speech perception and production are intimately linked. There is evidence that speech motor learning results in changes to auditory processing of speech; whether speech motor control benefits from perceptual learning in speech, however, remains unclear. This event-related potential study investigated whether speech-sound learning can modulate the processing of feedback errors during vocal pitch regulation. Mandarin speakers were trained to perceive five Thai lexical tones while learning to associate pictures with spoken words over 5 days. Before and after training, participants produced sustained vowel sounds while they heard their vocal pitch feedback unexpectedly perturbed. Compared with the pre-training session, the magnitude of vocal compensation significantly decreased for the control group at the post-training session but remained consistent for the trained group. However, the trained group had smaller and faster N1 responses to pitch perturbations and exhibited enhanced P2 responses that correlated significantly with their learning performance. These findings indicate that the cortical processing of vocal pitch regulation can be shaped by learning new speech-sound associations, suggesting that perceptual learning in speech can transfer to, and facilitate, the neural mechanisms underlying the online monitoring of auditory feedback during vocal production.
Affiliation(s)
- Zhaocong Chen
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China; Department of Rehabilitation Medicine, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510630, China
- Francis C K Wong
- Division of Linguistics and Multilingual Studies, School of Humanities and Social Sciences, Nanyang Technological University, 14 Nanyang Drive, HSS-03-49, 637332, Singapore
- Jeffery A Jones
- Psychology Department and Laurier Centre for Cognitive Neuroscience, Wilfrid Laurier University, Waterloo, Ontario, N2L 3C5, Canada
- Weifeng Li
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
- Peng Liu
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
- Xi Chen
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
- Hanjun Liu
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China

34
Salminen NH, Takanen M, Santala O, Alku P, Pulkki V. Neural realignment of spatially separated sound components. J Acoust Soc Am 2015; 137:3356-3365. [PMID: 26093425 DOI: 10.1121/1.4921605]
Abstract
Natural auditory scenes often consist of several sound sources overlapping in time but separated in space. Yet location is not fully exploited in auditory grouping: spatially separated sounds can be perceptually fused into a single auditory object, and this leads to difficulties in the identification and localization of concurrent sounds. Here, the brain mechanisms responsible for grouping across spatial locations were explored in magnetoencephalography (MEG) recordings. The results show that the cortical representation of a vowel spatially separated into two locations reflects the perceived location of the speech sound rather than the physical locations of the individual components. In other words, the auditory scene is neurally rearranged to bring components into spatial alignment when they are deemed to belong to the same object. This renders the original spatial information unavailable at the level of the auditory cortex and may contribute to difficulties in concurrent sound segregation.
Affiliation(s)
- Nelli H Salminen
- Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science, Aalto University School of Science, P.O. Box 12200, Aalto, FI-00076, Finland
- Marko Takanen
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, Aalto, FI-00076, Finland
- Olli Santala
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, Aalto, FI-00076, Finland
- Paavo Alku
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, Aalto, FI-00076, Finland
- Ville Pulkki
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, Aalto, FI-00076, Finland

35
Horváth J. Action-related auditory ERP attenuation: Paradigms and hypotheses. Brain Res 2015; 1626:54-65. [PMID: 25843932 DOI: 10.1016/j.brainres.2015.03.038]
Abstract
A number of studies have shown that the auditory N1 event-related potential (ERP) is attenuated when elicited by self-induced or self-generated sounds. Because N1 is a correlate of auditory feature- and event-detection, it was generally assumed that N1 attenuation reflected the cancellation of auditory re-afference, enabled by internal forward modeling of the predictable sensory consequences of the given action. Focusing on paradigms utilizing non-speech actions, the present review summarizes recent progress on action-related auditory attenuation. Following a critical analysis of the most widely used contingent paradigm, two further hypotheses on the possible causes of action-related auditory ERP attenuation are presented. The attention hypothesis suggests that auditory ERP attenuation is brought about by a temporary division of attention between the action and the auditory stimulation. The pre-activation hypothesis suggests that the attenuation is caused by the activation of a sensory template during the initiation of the action, which interferes with the incoming stimulation. Although each hypothesis can account for a number of findings, none of them can accommodate the whole spectrum of results. A better understanding of auditory ERP attenuation phenomena could be achieved by systematic investigations of the types of actions, the degree of action-effect contingency, and the temporal characteristics of the buildup and deactivation of action-effect contingency representations. This article is part of a Special Issue entitled SI: Prediction and Attention.
Affiliation(s)
- János Horváth
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, P.O.B. 286, H-1519 Budapest, Hungary

36
Sandmann P, Plotz K, Hauthal N, de Vos M, Schönfeld R, Debener S. Rapid bilateral improvement in auditory cortex activity in postlingually deafened adults following cochlear implantation. Clin Neurophysiol 2015; 126:594-607. [DOI: 10.1016/j.clinph.2014.06.029]
37
Affiliation(s)
- John R. Hughes
- College of Medicine, University of Illinois at Chicago, Chicago, Illinois 60612

38
Shuai L, Elhilali M. Task-dependent neural representations of salient events in dynamic auditory scenes. Front Neurosci 2014; 8:203. [PMID: 25100934 PMCID: PMC4104552 DOI: 10.3389/fnins.2014.00203]
Abstract
Selecting pertinent events in the cacophony of sounds that impinge on our ears every day is regulated by the acoustic salience of sounds in the scene as well as their behavioral relevance as dictated by top-down, task-dependent demands. The current study aims to explore the neural signatures of both facets of attention, as well as their possible interactions, in the context of auditory scenes. Using a paradigm with dynamic auditory streams containing occasional salient events, we recorded neurophysiological responses of human listeners using EEG while manipulating the subjects' attentional state as well as the presence or absence of a competing auditory stream. Our results showed that salient events caused an increase in the auditory steady-state response (ASSR) irrespective of attentional state or complexity of the scene. This increase supplemented the ASSR increases due to task-driven attention. Salient events also evoked a strong N1 peak in the ERP response when listeners were attending to the target sound stream, accompanied by an MMN-like component in some cases, and changes in the P1 and P300 components under all listening conditions. Overall, bottom-up attention induced by a salient change in the auditory stream appears mostly to modulate the amplitude of the steady-state response and certain event-related potentials to salient sound events, though this modulation is affected by top-down attentional processes and the prominence of these events in the auditory scene.
Affiliation(s)
- Mounya Elhilali
- Laboratory of Computational Audio Perception, Department of Electrical and Computer Engineering, Center for Speech and Language Processing, Johns Hopkins University, Baltimore, MD, USA

39
Tomé D, Barbosa F, Nowak K, Marques-Teixeira J. The development of the N1 and N2 components in auditory oddball paradigms: a systematic review with narrative analysis and suggested normative values. J Neural Transm (Vienna) 2014; 122:375-91. [DOI: 10.1007/s00702-014-1258-3]
40
Larson E, Lee AKC. Potential Use of MEG to Understand Abnormalities in Auditory Function in Clinical Populations. Front Hum Neurosci 2014; 8:151. [PMID: 24659963 PMCID: PMC3952190 DOI: 10.3389/fnhum.2014.00151]
Abstract
Magnetoencephalography (MEG) provides a direct, non-invasive view of neural activity with millisecond temporal precision. Recent developments in MEG analysis allow for improved source localization and mapping of connectivity between brain regions, expanding the possibilities for using MEG as a diagnostic tool. In this paper, we first describe inverse imaging methods (e.g., minimum-norm estimation) and functional connectivity measures, and how they can provide insights into cortical processing. We then offer a perspective on how these techniques could be used to understand and evaluate auditory pathologies that often manifest during development. Here we focus specifically on how MEG inverse imaging, by providing anatomically based interpretation of neural activity, may allow us to test which aspects of cortical processing play a role in (central) auditory processing disorder [(C)APD]. Appropriately combining auditory paradigms with MEG analysis could eventually prove useful for a hypothesis-driven understanding and diagnosis of (C)APD or other disorders, as well as the evaluation of the effectiveness of intervention strategies.
Affiliation(s)
- Eric Larson
- Institute for Learning and Brain Sciences, University of Washington, Seattle, WA, USA
- Adrian K C Lee
- Institute for Learning and Brain Sciences, University of Washington, Seattle, WA, USA; Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA

41
Rotschafer SE, Razak KA. Auditory processing in fragile X syndrome. Front Cell Neurosci 2014; 8:19. [PMID: 24550778 PMCID: PMC3912505 DOI: 10.3389/fncel.2014.00019]
Abstract
Fragile X syndrome (FXS) is an inherited form of intellectual disability and autism. Among other symptoms, FXS patients demonstrate abnormalities in sensory processing and communication. Clinical, behavioral, and electrophysiological studies consistently show auditory hypersensitivity in humans with FXS. Consistent with observations in humans, the Fmr1 KO mouse model of FXS also shows evidence of altered auditory processing and communication deficits. A well-known and commonly used phenotype in pre-clinical studies of FXS is audiogenic seizures. In addition, an increased acoustic startle response is seen in Fmr1 KO mice. In vivo electrophysiological recordings indicate hyper-excitable responses, broader frequency tuning, and abnormal spectrotemporal processing in the primary auditory cortex of Fmr1 KO mice. Thus, auditory hyper-excitability is a robust, reliable, and translatable biomarker in Fmr1 KO mice. Abnormal auditory evoked responses have been used as outcome measures to test therapeutics in FXS patients. That similarly abnormal responses are present in Fmr1 KO mice suggests that the underlying cellular mechanisms can be addressed. Sensory cortical deficits are also more tractable from a mechanistic perspective than the more complex social behaviors typically studied in autism and FXS. The focus of this review is to bring together clinical, functional, and structural studies in humans with electrophysiological and behavioral studies in mice to make the case that auditory hypersensitivity provides a unique opportunity to integrate molecular, cellular, and circuit-level studies with behavioral outcomes in the search for therapeutics for FXS and other autism spectrum disorders.
Affiliation(s)
- Sarah E Rotschafer
- Graduate Neuroscience Program, Department of Psychology, University of California, Riverside, CA, USA
- Khaleel A Razak
- Graduate Neuroscience Program, Department of Psychology, University of California, Riverside, CA, USA

42
Alain C, Roye A, Salloum C. Effects of age-related hearing loss and background noise on neuromagnetic activity from auditory cortex. Front Syst Neurosci 2014; 8:8. [PMID: 24550790 PMCID: PMC3907769 DOI: 10.3389/fnsys.2014.00008]
Abstract
Aging is often accompanied by hearing loss, which impacts how sounds are processed and represented along the ascending auditory pathways and within the auditory cortices. Here, we assess the impact of mild binaural hearing loss on older adults' ability both to process complex sounds embedded in noise and to segregate a mistuned harmonic in an otherwise periodic stimulus. We measured auditory evoked fields (AEFs) using magnetoencephalography while participants were presented with complex tones that had either all harmonics in tune or the third harmonic mistuned by 4 or 16% of its original value. The tones (75 dB sound pressure level, SPL) were presented without noise, or with low (45 dBA SPL) or moderate (65 dBA SPL) Gaussian noise. For each participant, we modeled the AEFs with a pair of dipoles in the superior temporal plane and then examined the effects of hearing loss and noise on the amplitude and latency of the resulting source waveforms. Results revealed that similar noise-induced increases in N1m were present in older adults with and without hearing loss. The P1m amplitude, however, was larger in the hearing-impaired than in the normal-hearing adults. In addition, the object-related negativity (ORN) elicited by the mistuned harmonic was larger in hearing-impaired listeners. The enhanced P1m and ORN amplitudes in the hearing-impaired older adults suggest that hearing loss increased neural excitability in auditory cortices, which could be related to deficits in inhibitory control.
Affiliation(s)
- Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada; Institute of Medical Sciences, University of Toronto, Toronto, ON, Canada
- Anja Roye
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON, Canada
- Claire Salloum
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON, Canada

43
Behroozmand R, Ibrahim N, Korzyukov O, Robin DA, Larson CR. Left-hemisphere activation is associated with enhanced vocal pitch error detection in musicians with absolute pitch. Brain Cogn 2013; 84:97-108. [PMID: 24355545 DOI: 10.1016/j.bandc.2013.11.007]
Abstract
The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study combined an auditory feedback pitch-perturbation paradigm with ERP recordings to test the hypothesis that left-hemisphere mechanisms enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right hemisphere for both AP and RP musicians compared with the NM group. However, left-hemisphere P2 activation was greater in AP and RP musicians compared with NMs, and also in AP compared with RP musicians. The NM group was slower than the musicians in generating compensatory vocal reactions to feedback pitch perturbation, and they failed to re-adjust their vocal pitch after the perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left hemisphere is more active during the processing of auditory feedback for vocal motor control, and appears to involve specialized mechanisms that facilitate pitch processing in AP compared with RP musicians. These findings indicate that the left-hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing.
Affiliation(s)
- Roozbeh Behroozmand
- Speech Physiology Lab, Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, United States
- Nadine Ibrahim
- Speech Physiology Lab, Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, United States
- Oleg Korzyukov
- Speech Physiology Lab, Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, United States
- Donald A Robin
- Research Imaging Institute, University of Texas Health Science Center San Antonio, San Antonio, TX 78229, United States
- Charles R Larson
- Speech Physiology Lab, Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, United States

44
Annic A, Bocquillon P, Bourriez JL, Derambure P, Dujardin K. Effects of stimulus-driven and goal-directed attention on prepulse inhibition of the cortical responses to an auditory pulse. Clin Neurophysiol 2013; 125:1576-88. [PMID: 24411526 DOI: 10.1016/j.clinph.2013.12.002]
Abstract
OBJECTIVE Inhibition by a prepulse (prepulse inhibition, PPI) of the response to a startling acoustic pulse is modulated by attention. We sought to determine whether goal-directed and stimulus-driven attention differentially modulate (i) PPI of the N100 and P200 components of the auditory evoked potential (AEP) and (ii) the components' generators. METHODS 128-channel electroencephalograms were recorded in 26 healthy controls performing an active acoustic PPI paradigm. Startling stimuli were presented alone or either 400 or 1000 ms after a visual prepulse. Three types of prepulse were used: to-be-attended (goal-directed attention), unexpected (stimulus-driven attention), or to-be-ignored (non-focused attention). We calculated the percentage PPI for the N100 and P200 components of the AEP and determined cortical generators with standardized weighted low-resolution tomography. RESULTS At 400 ms, the PPI of the N100 was greater after an unexpected prepulse than after a to-be-attended prepulse, whereas the PPI of the P200 was greater after a to-be-attended prepulse than after a to-be-ignored prepulse. At 1000 ms, to-be-attended and unexpected prepulses had similar effects. Cortical sources were modulated in areas involved in both types of attention. CONCLUSIONS Stimulus-driven attention and goal-directed attention each have specific effects on the attentional modulation of PPI. SIGNIFICANCE Using a new PPI paradigm that specifically controls attention, we demonstrated that the early stages of the gating process (as evidenced by the N100) are influenced by stimulus-driven attention and that the late stages (as evidenced by the P200) are influenced by goal-directed attention.
Affiliation(s)
- Agnès Annic
- Université Lille Nord de France, EA1046, Lille, France; Department of Clinical Neurophysiology, Lille University Medical Center, Lille, France
- Perrine Bocquillon
- Université Lille Nord de France, EA1046, Lille, France; Department of Clinical Neurophysiology, Lille University Medical Center, Lille, France
- Jean-Louis Bourriez
- Department of Clinical Neurophysiology, Lille University Medical Center, Lille, France
- Philippe Derambure
- Université Lille Nord de France, EA1046, Lille, France; Department of Clinical Neurophysiology, Lille University Medical Center, Lille, France
- Kathy Dujardin
- Université Lille Nord de France, EA1046, Lille, France; Department of Neurology and Movement Disorders, Lille University Medical Center, Lille, France

45
Getzmann S, Falkenstein M, Gajewski PD. Long-term cardiovascular fitness is associated with auditory attentional control in old adults: neuro-behavioral evidence. PLoS One 2013; 8:e74539. [PMID: 24023949 PMCID: PMC3762815 DOI: 10.1371/journal.pone.0074539]
Abstract
It has been shown that healthy aging affects the ability to focus attention on a given task and to ignore distractors. Here, we asked whether long-term physical activity is associated with lower susceptibility to distraction of auditory attention, and how physically active and inactive seniors may differ regarding subcomponents of auditory attention. An auditory duration-discrimination task was employed, and involuntary attentional shifts to task-irrelevant rare frequency deviations and subsequent reorientation were studied through analysis of behavioral data and event-related potential measures. The frequency deviations impaired performance more in physically inactive than in active seniors. This was accompanied by a stronger frontal positivity (P3a) and increased activation of the anterior cingulate cortex, suggesting a stronger involuntary shift of attention towards task-irrelevant stimulus features in inactive compared with active seniors. These results indicate a positive relationship between physical fitness and attentional control in the elderly, presumably due to more focused attentional resources and enhanced inhibition of irrelevant stimulus features.
Affiliation(s)
- Stephan Getzmann
- Leibniz Research Centre for Working Environment and Human Factors at the Technical University of Dortmund (IfADo), Dortmund, Germany
- Michael Falkenstein
- Leibniz Research Centre for Working Environment and Human Factors at the Technical University of Dortmund (IfADo), Dortmund, Germany
- Patrick D. Gajewski
- Leibniz Research Centre for Working Environment and Human Factors at the Technical University of Dortmund (IfADo), Dortmund, Germany

46
Kawase T, Kanno A, Takata Y, Nakasato N, Kawashima R, Kobayashi T. Positive auditory cortical responses in patients with absent brainstem response. Clin Neurophysiol 2013; 125:148-53. [PMID: 23895952 DOI: 10.1016/j.clinph.2013.06.184]
Abstract
OBJECTIVE To compare the detectability of different auditory evoked responses in patients with a retrocochlear lesion. METHODS The 40-Hz auditory steady-state response (ASSR) and the N1m auditory cortical response were examined by magnetoencephalography in 4 patients with vestibular schwannoma in whom the auditory brainstem response (ABR) was absent. RESULTS Apparent N1m responses were observed in all patients despite the ABR being totally absent, or absent except for a small wave I, although the latency of N1m was delayed in most patients. In contrast, clear ASSRs could be observed in only one patient. Very small 40-Hz ASSRs (amplitude less than 1 fT) could be detected in 2 patients, but no apparent ASSR was observed in one patient, in whom maximum speech intelligibility was extremely low and the latency of N1m was most prolonged. CONCLUSION The N1m response and the 40-Hz ASSR could be detected in patients with absent ABR, but the N1m response appeared to be more detectable than the 40-Hz ASSR. SIGNIFICANCE Combined assessment with several different evoked responses may be useful for evaluating the disease conditions of patients with retrocochlear lesions.
Affiliation(s)
- Tetsuaki Kawase
- Laboratory of Rehabilitative Auditory Science, Tohoku University Graduate School of Biomedical Engineering, 1-1 Seiryo-machi, Aoba-ku, Sendai 980-8574, Japan; Department of Audiology, Tohoku University Graduate School of Medicine, 1-1 Seiryo-machi, Aoba-ku, Sendai 980-8574, Japan; Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, 1-1 Seiryo-machi, Aoba-ku, Sendai 980-8574, Japan
- Akitake Kanno
- Department of Functional Brain Imaging, Institute of Development, Aging and Cancer, Tohoku University, 4-1 Seiryo-cho, Aoba-ku, Sendai 980-8575, Japan; MEG Laboratory, Kohnan Hospital, 4-20-1 Nagamachi-minami, Taihaku-ku, Sendai 982-8523, Japan
- Yusuke Takata
- Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, 1-1 Seiryo-machi, Aoba-ku, Sendai 980-8574, Japan
- Nobukazu Nakasato
- Department of Epileptology, Tohoku University Graduate School of Medicine, 1-1 Seiryo-machi, Aoba-ku, Sendai 980-8575, Japan; Department of Electromagnetic Neurophysiology, Smart Ageing International Research Center, Institute of Development, Aging and Cancer, Tohoku University, 4-1 Seiryo-cho, Aoba-ku, Sendai 980-8575, Japan
- Ryuta Kawashima
- Department of Functional Brain Imaging, Institute of Development, Aging and Cancer, Tohoku University, 4-1 Seiryo-cho, Aoba-ku, Sendai 980-8575, Japan
- Toshimitsu Kobayashi
- Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, 1-1 Seiryo-machi, Aoba-ku, Sendai 980-8574, Japan

47
Yu HY, Chen JT, Wu ZA, Yeh TC, Ho LT, Lin YY. Side of the stimulated ear influences the hemispheric balance in coding tonal stimuli. Neurol Res 2013; 29:517-22. [PMID: 17535555 DOI: 10.1179/016164107x164157]
Abstract
OBJECTIVE To evaluate whether the side of the stimulated ear affects the hemispheric asymmetry of auditory evoked cortical activations. METHODS Using a whole-head neuromagnetometer, we recorded neuromagnetic approximately 100 ms responses (N100m) in 21 healthy right-handers to 100 ms, 1 kHz tones delivered alternately to the left and right ear. RESULTS Although the peak latencies of N100m were shorter in the contralateral than in the ipsilateral hemisphere, the difference was significant only for left ear stimulation. Based on the relative N100m amplitudes across hemispheres, the laterality evaluation showed a rightward predominance of N100m activation to tone stimuli, but the lateralization toward the right hemisphere was more apparent with left than with right ear stimulation (laterality index: -0.27 versus -0.10, p=0.008). Within the right hemisphere, the N100m was 2-4 mm more posterior for left ear than for right ear stimulation. CONCLUSIONS Hemispheric asymmetry in auditory processing depends on the side of the stimulated ear. The more anterior localization of right-hemisphere N100m responses to ipsilateral than to contralateral ear stimulation suggests that differential neuronal populations in the right hemisphere may process spatially different auditory inputs.
Affiliation(s)
- Hsiang-Yu Yu
- Department of Neurology, National Yang-Ming University, Taipei, Taiwan

48
Lee AKC, Larson E, Maddox RK, Shinn-Cunningham BG. Using neuroimaging to understand the cortical mechanisms of auditory selective attention. Hear Res 2013; 307:111-20. [PMID: 23850664 DOI: 10.1016/j.heares.2013.06.010]
Abstract
Over the last four decades, a range of different neuroimaging tools have been used to study human auditory attention, spanning from classic event-related potential studies using electroencephalography to modern multimodal imaging approaches (e.g., combining anatomical information based on magnetic resonance imaging with magneto- and electroencephalography). This review begins by exploring the different strengths and limitations inherent to different neuroimaging methods, and then outlines some common behavioral paradigms that have been adopted to study auditory attention. We argue that in order to design a neuroimaging experiment that produces interpretable, unambiguous results, the experimenter must not only have a deep appreciation of the imaging technique employed, but also a sophisticated understanding of perception and behavior. Only with the proper caveats in mind can one begin to infer how the cortex supports a human in solving the "cocktail party" problem. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Affiliation(s)
- Adrian K C Lee
- Institute for Learning and Brain Sciences, University of Washington, WA 98195, USA; Department of Speech & Hearing Sciences, University of Washington, Seattle, WA 98195, USA

49
Scheerer N, Behich J, Liu H, Jones J. ERP correlates of the magnitude of pitch errors detected in the human voice. Neuroscience 2013; 240:176-85. [DOI: 10.1016/j.neuroscience.2013.02.054]
50
Lateralized auditory brain function in children with normal reading ability and in children with dyslexia. Neuropsychologia 2013; 51:633-41. [DOI: 10.1016/j.neuropsychologia.2012.12.015]