1
Abe M, Tabei KI, Satoh M, Fukuda M, Daikuhara H, Shiga M, Kida H, Tomimoto H. Impairment of the Missing Fundamental Phenomenon in Individuals with Alzheimer’s Disease: A Neuropsychological and Voxel-Based Morphometric Study. Dement Geriatr Cogn Dis Extra 2018. [PMID: 29515620 PMCID: PMC5836147 DOI: 10.1159/000486331]
Abstract
Background/Aims: The missing fundamental phenomenon (MFP) is a universal pitch perception illusion that occurs in animals and humans. In this study, we aimed to determine whether the MFP is impaired in patients with Alzheimer's disease (AD) using an auditory pitch perception experiment. We further examined anatomical correlates of the MFP in patients with AD by measuring gray matter volume (GMV) on magnetic resonance images via voxel-based morphometric analysis.
Methods: We prospectively enrolled 29 patients with AD and 20 healthy older adults. Auditory stimuli included 12 melodies of Japanese nursery songs that were expected to be familiar to participants. We constructed the melodies using pure and missing fundamental tones (MFTs).
Results: Patients with AD exhibited significantly poorer performance on the MFT task than healthy controls. MFT scores were positively correlated with GMV in the bilateral insula and temporal poles, left inferior frontal gyrus, right entorhinal cortex, and right cerebellum.
Conclusions: These results suggest that impairments in the MFP represent a manifestation of the degeneration of auditory-related brain regions in AD. Further studies are required to more fully elucidate the neural mechanisms underlying auditory impairments in patients with AD and related dementia disorders.
Collapse
Affiliation(s)
- Makiko Abe
- Department of Dementia Prevention and Therapeutics, Graduate School of Medicine, Mie University, Mie, Japan
- Ken-ichi Tabei
- Department of Dementia Prevention and Therapeutics, Graduate School of Medicine, Mie University, Mie, Japan
- Department of Neurology, Graduate School of Medicine, Mie University, Mie, Japan
- *Ken-ichi Tabei and Masayuki Satoh, Mie University, 2-174 Edobashi Tsu-shi, Mie 514-8507 (Japan), E-Mail (K.T.) and (M.S.)
- Masayuki Satoh
- Department of Dementia Prevention and Therapeutics, Graduate School of Medicine, Mie University, Mie, Japan
- Mari Fukuda
- Department of Dementia Prevention and Therapeutics, Graduate School of Medicine, Mie University, Mie, Japan
- Mariko Shiga
- Mie Prefectural Dementia-Related Disease Medical Center, Mie, Japan
- Hirotaka Kida
- Department of Dementia Prevention and Therapeutics, Graduate School of Medicine, Mie University, Mie, Japan
- Hidekazu Tomimoto
- Department of Dementia Prevention and Therapeutics, Graduate School of Medicine, Mie University, Mie, Japan
- Department of Neurology, Graduate School of Medicine, Mie University, Mie, Japan
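As a concrete illustration of the stimuli behind the study above: a missing fundamental tone is synthesized by summing higher harmonics of a fundamental while omitting the fundamental itself. The sketch below is illustrative (function names, durations, and the choice of harmonics are not taken from the paper).

```python
import math

def missing_fundamental_tone(f0, harmonics, sr=44100, dur=0.5):
    """Sum the given harmonic numbers of f0 into one waveform.

    With harmonics=[2, 3, 4, 5] the spectrum contains 2*f0..5*f0 but
    not f0 itself; listeners nevertheless perceive the pitch at f0,
    which is the missing fundamental phenomenon (MFP).
    """
    n_samples = int(sr * dur)
    return [
        sum(math.sin(2 * math.pi * h * f0 * t / sr) for h in harmonics)
        for t in range(n_samples)
    ]

# Harmonics 2-5 of a 220 Hz fundamental: energy at 440, 660, 880 and
# 1100 Hz only, yet the perceived pitch remains 220 Hz.
tone = missing_fundamental_tone(220.0, harmonics=[2, 3, 4, 5])
```

Writing `tone` to a WAV file and playing it makes the illusion directly audible: the pitch matches a 220 Hz pure tone even though no energy is present at 220 Hz.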
2
Perrin F, Castro M, Tillmann B, Luauté J. Promoting the use of personally relevant stimuli for investigating patients with disorders of consciousness. Front Psychol 2015; 6:1102. [PMID: 26284020 PMCID: PMC4519656 DOI: 10.3389/fpsyg.2015.01102]
Abstract
Sensory stimuli are used to evaluate and to restore cognitive functions and consciousness in patients with a disorder of consciousness (DOC) following a severe brain injury. Although sophisticated protocols can help assess higher-order cognitive functions and awareness, one major drawback is their lack of sensitivity. The aim of the present review is to show that stimulus selection is crucial for an accurate evaluation of the state of patients with disorders of consciousness, as it determines the level of processing the patient can engage in with stimuli from his/her environment. The probability of observing a behavioral or cerebral response increases when the patient's personal history and/or personal preferences are taken into account. We show that personally relevant stimuli (i.e., with emotional, autobiographical, or self-related characteristics) are associated with clearer signs of perception than are irrelevant stimuli in patients with DOC. Among personally relevant stimuli, music appears to be a promising clinical tool, as it boosts perception and cognition in patients with DOC and could also serve as a prognostic tool. We suggest that the effect of music on cerebral processes in patients might reflect music's capacity to act on both the external and internal neural networks supporting consciousness.
Affiliation(s)
- Fabien Perrin
- Auditory Cognition and Psychoacoustics Team, Lyon Neuroscience Research Center (UCBL, CNRS UMR5292, Inserm U1028), Lyon, France
- Maïté Castro
- Auditory Cognition and Psychoacoustics Team, Lyon Neuroscience Research Center (UCBL, CNRS UMR5292, Inserm U1028), Lyon, France
- Barbara Tillmann
- Auditory Cognition and Psychoacoustics Team, Lyon Neuroscience Research Center (UCBL, CNRS UMR5292, Inserm U1028), Lyon, France
- Jacques Luauté
- Henry Gabrielle Hospital, Hospices Civils de Lyon, Lyon, France
- Neurological Hospital, Hospices Civils de Lyon, Lyon, France
- IMPACT, Lyon Neuroscience Research Center (UCBL, CNRS UMR5292, Inserm U1028), Lyon, France
3
Xu M, Homae F, Hashimoto RI, Hagiwara H. Acoustic cues for the recognition of self-voice and other-voice. Front Psychol 2013; 4:735. [PMID: 24133475 PMCID: PMC3795466 DOI: 10.3389/fpsyg.2013.00735]
Abstract
Self-recognition, being indispensable for successful social communication, has become a major focus in current social neuroscience. The physical aspects of the self are most typically manifested in the face and voice. Compared with the wealth of studies on self-face recognition, self-voice recognition (SVR) has not gained much attention. Converging evidence has suggested that the fundamental frequency (F0) and formant structures serve as the key acoustic cues for other-voice recognition (OVR). However, little is known about which, and how, acoustic cues are utilized for SVR as opposed to OVR. To address this question, we independently manipulated the F0 and formant information of recorded voices and investigated their contributions to SVR and OVR. Japanese participants were presented with recorded vocal stimuli and were asked to identify the speaker: either themselves or one of their peers. Six groups of 5 peers of the same sex participated in the study. Under conditions where the formant information was fully preserved and where only the frequencies lower than the third formant (F3) were retained, accuracies of SVR deteriorated significantly with the modulation of the F0, and the results were comparable for OVR. By contrast, under a condition where only the frequencies higher than F3 were retained, the accuracy of SVR was significantly higher than that of OVR throughout the range of F0 modulations, and the F0 scarcely affected the accuracies of SVR and OVR. Our results indicate that while both F0 and formant information are involved in SVR, as well as in OVR, the advantage of SVR is manifested only when major formant information for speech intelligibility is absent. These findings imply the robustness of self-voice representation, possibly by virtue of auditory familiarity and other factors such as its association with motor/articulatory representation.
Affiliation(s)
- Hiroko Hagiwara
- Department of Language Sciences, Graduate School of Humanities, Tokyo Metropolitan University, Tokyo, Japan
4
Abstract
To enhance weak sounds while compressing the dynamic intensity range, auditory sensory cells amplify sound-induced vibrations in a nonlinear, intensity-dependent manner. In the course of this process, instantaneous waveform distortion is produced, with two conspicuous, interwoven consequences: the introduction of new sound frequencies absent from the original stimuli, which are audible and detectable in the ear canal as otoacoustic emissions; and the possibility for an interfering sound to suppress the response to a probe tone, thereby enhancing contrast among frequency components. We review how the diverse manifestations of auditory nonlinearity originate in the gating principle of the sensory cells' mechanoelectrical transduction channels; how they depend on the coordinated opening of these ion channels, ensured by connecting elements; and how they are linked to the dynamic behavior of auditory sensory cells. This paper also reviews how the complex properties of waves traveling through the cochlea shape the manifestations of auditory nonlinearity. Examination methods based on the detection of distortions open noninvasive windows on the modes of activity of mechanosensitive structures in auditory sensory cells and on the distribution of sites of nonlinearity along the cochlear tonotopic axis, which is helpful for deciphering cochlear molecular physiology in hearing-impaired animal models. Otoacoustic emissions enable fast tests of peripheral sound processing in patients. The study of auditory distortions also contributes to the understanding of the perception of complex sounds.
Affiliation(s)
- Paul Avan
- Laboratory of Neurosensory Biophysics, University of Auvergne, School of Medicine, Clermont-Ferrand, France; Institut National de la Santé et de la Recherche Médicale (INSERM), UMR 1107, Clermont-Ferrand, France; Centre Jean Perrin, Clermont-Ferrand, France; Department of Otolaryngology, County Hospital, Krems an der Donau, Austria; Laboratory of Genetics and Physiology of Hearing, Department of Neuroscience, Institut Pasteur, Paris, France; Collège de France, Genetics and Cell Physiology, Paris, France
- Béla Büki
- Laboratory of Neurosensory Biophysics, University of Auvergne, School of Medicine, Clermont-Ferrand, France; Institut National de la Santé et de la Recherche Médicale (INSERM), UMR 1107, Clermont-Ferrand, France; Centre Jean Perrin, Clermont-Ferrand, France; Department of Otolaryngology, County Hospital, Krems an der Donau, Austria; Laboratory of Genetics and Physiology of Hearing, Department of Neuroscience, Institut Pasteur, Paris, France; Collège de France, Genetics and Cell Physiology, Paris, France
- Christine Petit
- Laboratory of Neurosensory Biophysics, University of Auvergne, School of Medicine, Clermont-Ferrand, France; Institut National de la Santé et de la Recherche Médicale (INSERM), UMR 1107, Clermont-Ferrand, France; Centre Jean Perrin, Clermont-Ferrand, France; Department of Otolaryngology, County Hospital, Krems an der Donau, Austria; Laboratory of Genetics and Physiology of Hearing, Department of Neuroscience, Institut Pasteur, Paris, France; Collège de France, Genetics and Cell Physiology, Paris, France
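The "new sound frequencies absent from the original stimuli" mentioned in the review above follow directly from passing a two-tone stimulus through a nonlinearity. A minimal sketch (the function and the example frequencies are illustrative, not from the paper): expanding the cubic term of a nonlinear transfer function for inputs at f1 and f2 predicts combination tones, of which 2*f1 - f2 is the distortion product most commonly recorded as an otoacoustic emission.

```python
def cubic_distortion_products(f1, f2):
    """New combination frequencies generated when a cubic (x**3) term
    acts on cos(2*pi*f1*t) + cos(2*pi*f2*t), by trigonometric expansion.
    (The expansion also regenerates components at f1 and f2 themselves,
    which are omitted here since they are not new frequencies.)

    2*f1 - f2 is the cubic difference tone measured clinically as a
    distortion-product otoacoustic emission (DPOAE).
    """
    return sorted({3 * f1, 3 * f2,
                   2 * f1 + f2, abs(2 * f1 - f2),
                   2 * f2 + f1, abs(2 * f2 - f1)})

# Two primaries at 1000 and 1200 Hz yield a cubic difference tone at
# 2*1000 - 1200 = 800 Hz.
products = cubic_distortion_products(1000, 1200)
```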
5
Greenlee JDW, Behroozmand R, Larson CR, Jackson AW, Chen F, Hansen DR, Oya H, Kawasaki H, Howard MA. Sensory-motor interactions for vocal pitch monitoring in non-primary human auditory cortex. PLoS One 2013; 8:e60783. [PMID: 23577157 PMCID: PMC3620048 DOI: 10.1371/journal.pone.0060783]
Abstract
The neural mechanisms underlying processing of auditory feedback during self-vocalization are poorly understood. One technique used to study the role of auditory feedback involves shifting the pitch of the feedback that a speaker receives, known as pitch-shifted feedback. We utilized a pitch-shift self-vocalization and playback paradigm to investigate the underlying neural mechanisms of audio-vocal interaction. High-resolution electrocorticography (ECoG) signals were recorded directly from the auditory cortex of 10 human subjects while they vocalized and received brief downward (-100 cents) pitch perturbations in their voice auditory feedback (speaking task). ECoG was also recorded while subjects passively listened to playback of their own pitch-shifted vocalizations. Feedback pitch perturbations elicited average evoked potential (AEP) and event-related band power (ERBP) responses, primarily in the high gamma (70-150 Hz) range, in focal areas of non-primary auditory cortex on the superior temporal gyrus (STG). The AEPs and high gamma responses were both modulated by speaking compared with playback in a subset of STG contacts. Of these contacts, a majority showed significant enhancement of high gamma power and AEP responses during speaking, while the remaining contacts showed attenuated response amplitudes. The speaking-induced enhancement effect suggests that engaging the vocal motor system can modulate auditory cortical processing of self-produced sounds in such a way as to increase neural sensitivity for feedback pitch error detection. It is likely that mechanisms such as efference copies are involved in this process, and the modulation of AEP and high gamma responses implies that such modulatory effects may act on different cortical generators within distinctive functional networks that drive voice production and control.
Affiliation(s)
- Jeremy D W Greenlee
- Human Brain Research Lab, Department of Neurosurgery, University of Iowa, Iowa City, Iowa, USA.
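The size of the pitch perturbation used in the speaking task above is specified in cents, the standard logarithmic unit for pitch shift. A small sketch of the conversion (the 200 Hz example voice fundamental is illustrative, not from the paper):

```python
def cents_to_ratio(cents):
    """A shift of c cents multiplies frequency by 2**(c/1200);
    100 cents equals one equal-tempered semitone."""
    return 2.0 ** (cents / 1200.0)

# The -100 cent perturbation lowers feedback pitch by one semitone:
# a 200 Hz voice fundamental is fed back at about 188.8 Hz.
shifted_f0 = 200.0 * cents_to_ratio(-100)
```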
6
Behroozmand R, Korzyukov O, Larson CR. ERP correlates of pitch error detection in complex tone and voice auditory feedback with missing fundamental. Brain Res 2012; 1448:89-100. [PMID: 22386045 PMCID: PMC3309166 DOI: 10.1016/j.brainres.2012.02.012]
Abstract
Previous studies have shown that the pitch of a sound is perceived even in the absence of its fundamental frequency (F0), suggesting that a distinct mechanism may resolve pitch based on the pattern that exists between harmonic frequencies. The present study investigated whether such a mechanism is active during voice pitch control. ERPs were recorded in response to +200 cents pitch shifts in the auditory feedback of self-vocalizations and complex tones with and without the F0. The absence of the fundamental induced no difference in ERP latencies. However, a right-hemisphere difference was found in the N1 amplitudes, with larger responses to complex tones that included the fundamental compared with when it was missing. The P1 and N1 latencies were shorter in the left hemisphere, and the N1 and P2 amplitudes were larger bilaterally for pitch shifts in voice and complex tones compared with pure tones. These findings suggest hemispheric differences in the neural encoding of pitch in sounds with a missing fundamental. Data from the present study suggest that the right cortical auditory areas, thought to be specialized for spectral processing, may utilize different mechanisms to resolve pitch in sounds with a missing fundamental. The left hemisphere seems to perform faster processing to resolve pitch based on the rate of temporal variations in complex sounds compared with pure tones. These effects indicate that the differential neural processing of pitch in the left and right hemispheres may enable the audio-vocal system to detect temporal and spectral variations in the auditory feedback for vocal pitch control.
Affiliation(s)
- Roozbeh Behroozmand
- Speech Physiology Lab, Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA.
7
Abstract
Background: There is growing interest in the relation between the brain and music. The appealing similarity between brainwaves and the rhythms of music has motivated many scientists to seek a connection between them. A variety of translation rules has been utilized to convert brainwaves into music, most of them based mainly on spectral features of the EEG.
Methodology/Principal Findings: In this study, audibly recognizable scale-free music was deduced from individual electroencephalogram (EEG) waveforms. The translation rules include a direct mapping from the period of an EEG waveform to the duration of a note, a logarithmic mapping of the change in average EEG power to music intensity according to Fechner's law, and a scale-free mapping from EEG amplitude to music pitch according to the power law. To show the actual effect, we applied the deduced sonification rules to EEG segments recorded during rapid-eye-movement (REM) sleep and slow-wave sleep (SWS). The resulting music is vivid and differs between the two mental states: the melody during REM sleep sounds fast and lively, whereas that in SWS sounds slow and tranquil. Sixty volunteers evaluated 25 music pieces (10 from REM, 10 from SWS, and 5 from white noise (WN)); 74.3% reported a happy emotion from the REM pieces and felt bored and drowsy when listening to the SWS pieces, and the average accuracy of identification across all music pieces was 86.8% (κ = 0.800, P < 0.001). We also applied the method to EEG data from eyes-closed, eyes-open, and epileptic states, and the results showed that these mental states can be identified by listeners.
Conclusions/Significance: The sonification rules may identify the mental states of the brain, providing a real-time strategy for monitoring brain activities that is potentially useful for neurofeedback therapy.
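The three mapping rules described above (waveform period to note duration, log-compressed power to intensity, power-law amplitude to pitch) can be sketched as follows. All constants, the MIDI-style pitch range, and the power-law exponent here are illustrative assumptions, not the paper's fitted parameters.

```python
import math

# Illustrative constants (not from the paper): a MIDI-style pitch range
# and a reference power for the logarithmic (Fechner) intensity mapping.
PITCH_LO, PITCH_HI = 36, 96
POWER_REF = 1.0

def note_duration(eeg_period_s):
    """Direct mapping: one EEG waveform period becomes one note duration."""
    return eeg_period_s

def note_intensity(avg_power):
    """Fechner's law: perceived intensity grows with the log of power."""
    return math.log10(avg_power / POWER_REF + 1.0)

def note_pitch(amplitude, amp_max, exponent=2.0):
    """Scale-free mapping: pitch follows a power law of EEG amplitude
    (the exponent stands in for the paper's power-law parameter)."""
    x = min(max(amplitude / amp_max, 0.0), 1.0)
    return PITCH_LO + (PITCH_HI - PITCH_LO) * x ** exponent

# One hypothetical EEG waveform: 0.25 s period, mean power 9, and an
# amplitude at half of the maximum observed.
note = (note_duration(0.25), note_intensity(9.0), note_pitch(50.0, 100.0))
```

Running such rules over successive EEG waveforms yields one note per waveform, which is how segment-level differences (e.g., REM vs. SWS) become audible as different melodies.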
8
Customised cytoarchitectonic probability maps using deformable registration: primary auditory cortex. Med Image Comput Comput Assist Interv 2008. [PMID: 18044637 DOI: 10.1007/978-3-540-75759-7_92]
Abstract
A novel method is presented for creating a probability map from histologically defined cytoarchitectonic data, customised for the anatomy of individual fMRI volunteers. Postmortem structural and cytoarchitectonic information from a published dataset is combined with high resolution structural MR images using deformable registration of a region of interest. In this paper, we have targeted the three sub-areas of the primary auditory cortex (located on Heschl's gyrus); however, the method could be applied to any other cytoarchitectonic region. The resulting probability maps show a significantly higher overlap than previously generated maps using the same cytoarchitectonic data, and more accurately span the macroanatomical structure of the auditory cortex. This improvement indicates a high potential for spatially accurate fMRI analysis, allowing more reliable correlation between anatomical structure and function. We validate the approach using fMRI data from nine individuals, taken from a published dataset. We compare activation for stimuli evoking a pitch percept to activation for acoustically matched noise, and demonstrate that the primary auditory cortex (Te1.0) and the lateral region Te1.2 are sensitive to pitch, whereas Te1.1 is not.
9
The role of frequency, phase and time for processing of amplitude modulated signals by grasshoppers. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2007; 194:221-33. [PMID: 18043922 DOI: 10.1007/s00359-007-0295-x]
Abstract
Acoustic signals consist of pressure changes over time and can thus be analyzed in the frequency domain or in the time domain. With behavioural experiments, we investigated which frequency components (FCs) are necessary for the recognition of the periodic envelope of the conspecific song by females of the grasshopper Chorthippus biguttulus. Further, we determined up to which frequency component phase information is required, which would indicate processing in the time domain. Responses of females revealed that signals composed of FCs between 10 and 50 Hz are sufficient for recognition of the song envelope. A systematic reduction in the number of FCs showed that no single frequency component was required; signals without the fundamental frequency were still highly attractive, and as few as three FCs may be sufficient for song recognition. Phase changes for frequencies up to 40 Hz strongly changed the attractiveness of song signals, but only slightly at 50 Hz. Females were also tested with rectangular signals in which pause duration was varied. Evidently, and despite the high attractiveness of song signals with a "missing fundamental", females evaluated the attractiveness of signals in the time domain, since their selectivity for pause duration predicted the responses to signals composed from FCs well.