1. Sun Y, Ming L, Sun J, Guo F, Li Q, Hu X. Brain mechanism of unfamiliar and familiar voice processing: an activation likelihood estimation meta-analysis. PeerJ 2023; 11:e14976. PMID: 36935917; PMCID: PMC10019337; DOI: 10.7717/peerj.14976
Abstract
Interpersonal communication through vocal information is very important for human society. During verbal interactions, our vocal cord vibrations convey important information regarding voice identity, which allows us to decide how to respond to speakers (e.g., neither greeting a stranger too warmly nor speaking too coldly to a friend). Numerous neural studies have shown that identifying familiar and unfamiliar voices may rely on different neural bases. However, the mechanism underlying voice identification of individuals of varying familiarity has not been determined, owing to vague definitions, confusion of terms, and differences in task design. To address this issue, the present study first categorized three kinds of voice identity processing (perception, recognition and identification) for speakers with different degrees of familiarity. We defined voice identity perception as passively listening to a voice or determining whether the voice was human, voice identity recognition as determining whether the sound heard was acoustically familiar, and voice identity identification as ascertaining whether a voice is associated with a name or face. Of these, voice identity perception involves processing unfamiliar voices, whereas voice identity recognition and identification involve processing familiar voices. According to these three definitions, we performed an activation likelihood estimation (ALE) analysis of 32 studies and revealed different brain mechanisms underlying the processing of unfamiliar and familiar voice identities.
The results were as follows: (1) familiar voice recognition/identification was supported by a network involving most regions in the temporal lobe, some regions in the frontal lobe, subcortical structures and regions around the marginal lobes; (2) the bilateral superior temporal gyrus was recruited for voice identity perception of an unfamiliar voice; (3) voice identity recognition/identification of familiar voices was more likely to activate the right frontal lobe than voice identity perception of unfamiliar voices, while voice identity perception of an unfamiliar voice was more likely to activate the bilateral temporal lobe and left frontal lobe; and (4) the bilateral superior temporal gyrus served as a shared neural basis of unfamiliar voice identity perception and familiar voice identity recognition/identification. In general, the results of the current study address gaps in the literature, provide clear definitions of concepts, and indicate brain mechanisms for subsequent investigations.
2. Rinke P, Schmidt T, Beier K, Kaul R, Scharinger M. Rapid pre-attentive processing of a famous speaker: Electrophysiological effects of Angela Merkel's voice. Neuropsychologia 2022; 173:108312. PMID: 35781011; DOI: 10.1016/j.neuropsychologia.2022.108312
Abstract
The recognition of human speakers by their voices is a remarkable cognitive ability. Previous research has established a voice area in the right temporal cortex involved in the integration of speaker-specific acoustic features. This integration appears to occur rapidly, especially in the case of familiar voices. However, the exact time course of this process is less well understood. To this end, we here investigated the automatic change detection response of the human brain while listening to the famous voice of German Chancellor Angela Merkel, embedded in a context of acoustically matched voices. A classic passive oddball paradigm contrasted short word stimuli uttered by Merkel with word stimuli uttered by two unfamiliar female speakers. Electrophysiological voice processing indices from 21 participants were quantified as mismatch negativities (MMNs) and P3a differences. Cortical sources were approximated by variable resolution electromagnetic tomography. The results showed amplitude and latency effects for both the MMN and P3a: The famous (familiar) voice elicited a smaller but earlier MMN than the unfamiliar voices. The P3a, by contrast, was both larger and later for the familiar than for the unfamiliar voices. Familiar-voice MMNs originated from right-hemispheric regions in temporal cortex, overlapping with the temporal voice area, while unfamiliar-voice MMNs stemmed from the left superior temporal gyrus. These results suggest that the processing of a very famous voice relies on pre-attentive right temporal processing within the first 150 ms of the acoustic signal. The findings further our understanding of the neural dynamics underlying familiar voice processing.
Affiliation(s)
- Paula Rinke
- Research Group Phonetics, Institute of German Linguistics, Philipps-University Marburg, Germany; Center for Mind, Brain & Behavior, Universities of Marburg & Gießen, Germany
- Tatjana Schmidt
- Center for Mind, Brain & Behavior, Universities of Marburg & Gießen, Germany; Faculté de biologie et de médecine, University of Lausanne, Switzerland
- Kjartan Beier
- Research Group Phonetics, Institute of German Linguistics, Philipps-University Marburg, Germany
- Ramona Kaul
- Research Group Phonetics, Institute of German Linguistics, Philipps-University Marburg, Germany
- Mathias Scharinger
- Research Group Phonetics, Institute of German Linguistics, Philipps-University Marburg, Germany; Research Center »Deutscher Sprachatlas«, Philipps-University Marburg, Germany; Center for Mind, Brain & Behavior, Universities of Marburg & Gießen, Germany
3. Di Dona G, Scaltritti M, Sulpizio S. Early differentiation of memory retrieval processes for newly learned voices and phonemes as indexed by the MMN. Brain and Language 2021; 220:104981. PMID: 34166941; DOI: 10.1016/j.bandl.2021.104981
Abstract
Linguistic and vocal information are thought to be processed differentially from the early stages of speech perception, but it remains unclear whether this differentiation also concerns automatic processes of memory retrieval. The aim of this ERP study was to compare the automatic retrieval processes for newly learned voices versus phonemes. In a longitudinal experiment, two groups of participants were trained in learning either a new phoneme or a new voice. The MMN elicited by the presentation of the two stimuli was measured before and after the training. An enhanced MMN was elicited by the presentation of the learned phoneme, reflecting the activation of an automatic memory-retrieval process. In contrast, a reduced MMN was elicited by the learned voice, indicating that the voice was perceived as a typical member of the learned voice identity. This suggests that the automatic processes that retrieve linguistic and vocal information are differently affected by experience.
Affiliation(s)
- Giuseppe Di Dona
- Dipartimento di Psicologia e Scienze Cognitive, Università degli Studi di Trento, Corso Bettini 84, 38068 Rovereto (TN), Italy
- Michele Scaltritti
- Dipartimento di Psicologia e Scienze Cognitive, Università degli Studi di Trento, Corso Bettini 84, 38068 Rovereto (TN), Italy
- Simone Sulpizio
- Dipartimento di Psicologia, Università degli Studi di Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, 20126 Milano (MI), Italy; Milan Center for Neuroscience (NeuroMi), Università degli Studi di Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, 20126 Milano (MI), Italy
4. Holmes E, Johnsrude IS. Speech-evoked brain activity is more robust to competing speech when it is spoken by someone familiar. Neuroimage 2021; 237:118107. PMID: 33933598; DOI: 10.1016/j.neuroimage.2021.118107
Abstract
When speech is masked by competing sound, people are better at understanding what is said if the talker is familiar rather than unfamiliar. This benefit is robust, but how does the processing of familiar voices facilitate intelligibility? We combined high-resolution fMRI with representational similarity analysis to quantify the difference in distributed activity between clear and masked speech. We demonstrate that brain representations of spoken sentences are less affected by a competing sentence when they are spoken by a friend or partner than by someone unfamiliar; in effect, familiar voices show a cortical signal-to-noise ratio (SNR) enhancement. This effect correlated with the familiar-voice intelligibility benefit. We functionally parcellated auditory cortex and found that the most prominent familiar-voice advantage was manifest along the posterior superior and middle temporal gyri. Overall, our results demonstrate that experience-driven improvements in intelligibility are associated with enhanced multivariate pattern activity in posterior temporal cortex.
Affiliation(s)
- Emma Holmes
- The Brain and Mind Institute, University of Western Ontario, London, Ontario, N6A 3K7, Canada
- Ingrid S Johnsrude
- The Brain and Mind Institute, University of Western Ontario, London, Ontario, N6A 3K7, Canada; School of Communication Sciences and Disorders, University of Western Ontario, London, Ontario, N6G 1H1, Canada
5. Zimmermann J, Ross B, Moscovitch M, Alain C. Neural dynamics supporting auditory long-term memory effects on target detection. Neuroimage 2020; 218:116979. PMID: 32447014; DOI: 10.1016/j.neuroimage.2020.116979
Abstract
Auditory long-term memory has been shown to facilitate signal detection. However, the nature and timing of the cognitive processes supporting such benefits remain equivocal. We measured neuroelectric brain activity while young adults were presented with a contextual memory cue designed to assist with the detection of a faint pure-tone target embedded in an audio clip of an everyday environmental scene (e.g., the soundtrack of a restaurant). During an initial familiarization task, participants heard such audio clips, half of which included a target sound (memory cue trials) at a specific time and location (left or right ear), as well as audio clips without a target (neutral trials). Following a 1-h or 24-h retention interval, the same audio clips were presented, but now all included a target. Participants were asked to press a button as soon as they heard the pure-tone target. Overall, participants were faster and more accurate during memory cue trials than during neutral cue trials. The auditory contextual memory effects on performance coincided with three temporally and spatially distinct neural modulations, which encompassed changes in the amplitude of event-related potentials as well as changes in theta, alpha, beta and gamma power. Brain electrical source analyses revealed greater source activity in memory cue trials than in neutral cue trials in the right superior temporal gyrus and left parietal cortex. Conversely, neutral trials were associated with greater source activity than memory cue trials in the left posterior medial temporal lobe. Target detection was associated with an increased negativity (N2) and a late positive wave (P3b) at frontal and parietal sites, respectively. The effect of auditory contextual memory on brain activity preceding target onset showed little lateralization. Together, these results are consistent with contextual memory facilitating the retrieval of target-context associations and the deployment of auditory attentional resources to the moment when the target occurred. The results also suggest that the auditory cortices, parietal cortex, and medial temporal lobe form parts of a neural network enabling memory-guided attention during auditory scene analysis.
Affiliation(s)
- Jacqueline Zimmermann
- Rotman Research Institute, Psychology, University of Toronto, Ontario, Canada; Department of Psychology, University of Toronto, Ontario, Canada
- Bernhard Ross
- Rotman Research Institute, Psychology, University of Toronto, Ontario, Canada; Department of Medical Biophysics, University of Toronto, Ontario, Canada; Institute of Medical Sciences, University of Toronto, Ontario, Canada
- Morris Moscovitch
- Rotman Research Institute, Psychology, University of Toronto, Ontario, Canada; Department of Psychology, University of Toronto, Ontario, Canada
- Claude Alain
- Rotman Research Institute, Psychology, University of Toronto, Ontario, Canada; Department of Psychology, University of Toronto, Ontario, Canada; Institute of Medical Sciences, University of Toronto, Ontario, Canada; Faculty of Music, University of Toronto, Ontario, Canada
6. Liu P, Cole PM, Gilmore RO, Pérez-Edgar KE, Vigeant MC, Moriarty P, Scherf KS. Young children's neural processing of their mother's voice: An fMRI study. Neuropsychologia 2019; 122:11-19. PMID: 30528586; PMCID: PMC6334756; DOI: 10.1016/j.neuropsychologia.2018.12.003
Abstract
In addition to semantic content, human speech carries paralinguistic information that conveys important social cues, such as a speaker's identity. For young children, their own mother's voice is one of the most salient vocal inputs in their daily environment. Indeed, qualities of mothers' voices have been shown to contribute to children's social development. Our knowledge of how the mother's voice is processed at the neural level, however, is limited. This study investigated whether a child's own mother's voice modulates activation in the voice-sensitive network of young children differently than the voice of an unfamiliar mother. We collected fMRI data from 32 typically developing 7- and 8-year-olds as they listened to natural speech produced by their own mother and by another child's mother. We used emotionally varied natural speech stimuli to approximate the range of children's day-to-day experience. We individually defined functional ROIs in children's voice-sensitive neural network and then independently investigated the extent to which activation in these regions is modulated by speaker identity. The bilateral posterior auditory cortex, superior temporal gyrus (STG), and inferior frontal gyrus (IFG) exhibited enhanced activation in response to the voice of one's own mother versus that of an unfamiliar mother. The findings indicate that children process the voice of their own mother uniquely and pave the way for future studies of how social information processing contributes to the trajectory of child social development.
Affiliation(s)
- Pan Liu
- Department of Psychology, Child Study Center, The Pennsylvania State University, University Park, PA, USA
- Pamela M Cole
- Department of Psychology, Child Study Center, The Pennsylvania State University, University Park, PA, USA
- Rick O Gilmore
- Department of Psychology, Child Study Center, The Pennsylvania State University, University Park, PA, USA
- Koraly E Pérez-Edgar
- Department of Psychology, Child Study Center, The Pennsylvania State University, University Park, PA, USA
- Michelle C Vigeant
- Graduate Program in Acoustics, The Pennsylvania State University, University Park, PA, USA
- Peter Moriarty
- Graduate Program in Acoustics, The Pennsylvania State University, University Park, PA, USA
- K Suzanne Scherf
- Department of Psychology, Child Study Center, The Pennsylvania State University, University Park, PA, USA
7. Cervantes Constantino F, Simon JZ. Restoration and Efficiency of the Neural Processing of Continuous Speech Are Promoted by Prior Knowledge. Front Syst Neurosci 2018; 12:56. PMID: 30429778; PMCID: PMC6220042; DOI: 10.3389/fnsys.2018.00056
Abstract
Sufficiently noisy listening conditions can completely mask the acoustic signal of significant parts of a sentence, and yet listeners may still report perceiving the masked speech. This occurs even when the speech signal is removed entirely, provided the gap is filled with stationary noise, a phenomenon known as perceptual restoration. At the neural level, however, it is unclear to what extent the neural representation of missing extended speech sequences resembles the dynamic neural representation of ordinary continuous speech. Using auditory magnetoencephalography (MEG), we show that stimulus reconstruction, a technique developed for use with neural representations of ordinary speech, also works for missing speech segments replaced by noise, even when spanning several phonemes and words. The reconstruction fidelity of the missing speech, up to 25% of what would be attained were the speech present, depends, however, on listeners' familiarity with the missing segment. This same familiarity also speeds up the most prominent stage of the cortical processing of ordinary speech by approximately 5 ms. Both effects disappear when listeners have little or no prior experience with the speech segment. The results are consistent with adaptive expectation mechanisms that consolidate detailed representations of speech sounds as identifiable factors assisting automatic restoration over ecologically relevant timescales.
Affiliation(s)
- Jonathan Z. Simon
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, College Park, MD, United States
- Department of Electrical and Computer Engineering, University of Maryland, College Park, College Park, MD, United States
- Department of Biology, University of Maryland, College Park, College Park, MD, United States
- Institute for Systems Research, University of Maryland, College Park, College Park, MD, United States
8. Maguinness C, Roswandowitz C, von Kriegstein K. Understanding the mechanisms of familiar voice-identity recognition in the human brain. Neuropsychologia 2018; 116:179-193. DOI: 10.1016/j.neuropsychologia.2018.03.039
9. Roswandowitz C, Kappes C, Obrig H, von Kriegstein K. Obligatory and facultative brain regions for voice-identity recognition. Brain 2018; 141:234-247. PMID: 29228111; PMCID: PMC5837691; DOI: 10.1093/brain/awx313
Abstract
Recognizing the identity of others by their voice is an important skill for social interactions. To date, it remains controversial which parts of the brain are critical structures for this skill. Based on neuroimaging findings, standard models of person-identity recognition suggest that the right temporal lobe is the hub for voice-identity recognition. Neuropsychological case studies, however, reported selective deficits of voice-identity recognition in patients predominantly with right inferior parietal lobe lesions. Here, our aim was to work towards resolving the discrepancy between neuroimaging studies and neuropsychological case studies to find out which brain structures are critical for voice-identity recognition in humans. We performed a voxel-based lesion-behaviour mapping study in a cohort of patients (n = 58) with unilateral focal brain lesions. The study included a comprehensive behavioural test battery on voice-identity recognition of newly learned (voice-name, voice-face association learning) and familiar voices (famous voice recognition) as well as visual (face-identity recognition) and acoustic control tests (vocal-pitch and vocal-timbre discrimination). The study also comprised clinically established tests (neuropsychological assessment, audiometry) and high-resolution structural brain images. The three key findings were: (i) a strong association between voice-identity recognition performance and right posterior/mid temporal and right inferior parietal lobe lesions; (ii) a selective association between right posterior/mid temporal lobe lesions and voice-identity recognition performance when face-identity recognition performance was factored out; and (iii) an association of right inferior parietal lobe lesions with tasks requiring the association between voices and faces but not voices and names. 
The results imply that the right posterior/mid temporal lobe is an obligatory structure for voice-identity recognition, while the inferior parietal lobe is only a facultative component of voice-identity recognition in situations where additional face-identity processing is required.
Affiliation(s)
- Claudia Roswandowitz
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany
- International Max Planck Research School on Neuroscience of Communication, Stephanstraße 1a, 04103 Leipzig, Germany
- Claudia Kappes
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany
- Hellmuth Obrig
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany
- Clinic for Cognitive Neurology, University Hospital Leipzig, Germany
- Katharina von Kriegstein
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany
- Humboldt-Universität zu Berlin, Rudower Chaussee 18, 12489 Berlin, Germany
- Technische Universität Dresden, Faculty of Psychology, Bamberger Str. 7, 01187 Dresden, Germany
10. Bethmann A, Brechmann A. On the definition and interpretation of voice selective activation in the temporal cortex. Front Hum Neurosci 2014; 8:499. PMID: 25071527; PMCID: PMC4086026; DOI: 10.3389/fnhum.2014.00499
Abstract
Regions along the superior temporal sulci and in the anterior temporal lobes have been found to be involved in voice processing. It has even been argued that parts of the temporal cortices serve as voice-selective areas. Yet evidence for voice-selective activation in the strict sense is still missing. The current fMRI study aimed at assessing the degree of voice-specific processing in different parts of the superior and middle temporal cortices. To this end, voices of famous persons were contrasted with widely different categories: sounds of animals and of musical instruments. The rationale was that only brain regions with a statistically proven absence of activation by the control stimuli may be considered candidates for voice-selective areas. Neural activity was stronger in response to human voices in all analyzed parts of the temporal lobes except for the middle and posterior STG. More importantly, the activation differences between voices and the other environmental sounds increased continuously from the mid-posterior STG to the anterior MTG. Here, only voices, but not the control stimuli, elicited an increase in the BOLD response above resting baseline. The findings are discussed with reference to the function of the anterior temporal lobes in person recognition and the general question of how to define the selectivity of brain regions for a specific class of stimuli or tasks. In addition, our results corroborate recent assumptions about the hierarchical organization of auditory processing, building on a processing stream from the primary auditory cortices to anterior portions of the temporal lobes.
Affiliation(s)
- Anja Bethmann
- Special Lab Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology, Magdeburg, Germany
- André Brechmann
- Special Lab Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology, Magdeburg, Germany
11. Graux J, Gomot M, Roux S, Bonnet-Brilhault F, Bruneau N. Is my voice just a familiar voice? An electrophysiological study. Soc Cogn Affect Neurosci 2014; 10:101-5. PMID: 24625786; DOI: 10.1093/scan/nsu031
Abstract
It is not clear whether self-stimuli are processed by the brain as highly familiar, overlearned stimuli or as self-specific stimuli. This study examined the neural processes underlying the discrimination of one's own voice (OV) compared with a familiar voice (FV) using electrophysiological methods. Event-related potentials were recorded while healthy subjects (n = 15) listened passively to oddball sequences composed of recordings of the French vowel /a/ pronounced either by the participants themselves, by a familiar person, or by an unknown person. The results indicated that, although the mismatch negativity displayed similar peak latency and amplitude in both conditions, the amplitude of the subsequent P3a was significantly smaller in response to the OV than to an FV. This study therefore indicates that fewer pre-attentional processes are involved in the discrimination of one's OV than in the discrimination of FVs.
Affiliation(s)
- Jérôme Graux
- UMR 930 Imagerie et Cerveau, Inserm, Université François Rabelais de Tours, 37000 Tours, France; CHRU de Tours, 37000 Tours, France
- Marie Gomot
- UMR 930 Imagerie et Cerveau, Inserm, Université François Rabelais de Tours, 37000 Tours, France; CHRU de Tours, 37000 Tours, France
- Sylvie Roux
- UMR 930 Imagerie et Cerveau, Inserm, Université François Rabelais de Tours, 37000 Tours, France; CHRU de Tours, 37000 Tours, France
- Frédérique Bonnet-Brilhault
- UMR 930 Imagerie et Cerveau, Inserm, Université François Rabelais de Tours, 37000 Tours, France; CHRU de Tours, 37000 Tours, France
- Nicole Bruneau
- UMR 930 Imagerie et Cerveau, Inserm, Université François Rabelais de Tours, 37000 Tours, France; CHRU de Tours, 37000 Tours, France
12. The temporal lobes differentiate between the voices of famous and unknown people: an event-related fMRI study on speaker recognition. PLoS One 2012; 7:e47626. PMID: 23112826; PMCID: PMC3480405; DOI: 10.1371/journal.pone.0047626
Abstract
It is widely accepted that the perception of human voices is supported by neural structures located along the superior temporal sulci. However, there is an ongoing discussion as to what extent the activations found in fMRI studies are evoked by the vocal features themselves or are the result of phonetic processing. To show that the temporal lobes are indeed engaged in voice processing, short utterances spoken by famous and unknown people were presented to healthy young participants whose task was to identify the familiar speakers. In two event-related fMRI experiments, the temporal lobes were found to differentiate between familiar and unfamiliar voices, such that named voices elicited higher BOLD signal intensities than unfamiliar voices. Yet the temporal cortices did not only discriminate between familiar and unfamiliar voices. Experiment 2, which required overtly spoken responses and allowed us to distinguish between four familiarity grades, revealed a fine-grained differentiation between all of these familiarity levels, with higher familiarity associated with larger BOLD signal amplitudes. Finally, we observed a gradual response change such that the BOLD signal differences between unfamiliar and highly familiar voices increased with the distance of an area from the transverse temporal gyri, especially towards the anterior temporal cortex and the middle temporal gyri. The results therefore suggest that (the anterior and non-superior portions of) the temporal lobes participate in voice-specific processing independent of the phonetic components also involved in spoken speech material.
13. Auditory hallucinations: expectation-perception model. Med Hypotheses 2012; 78:802-10. PMID: 22520337; DOI: 10.1016/j.mehy.2012.03.014
Abstract
In this paper, we present a hypothesis that would explain the mechanism of auditory hallucinations, one of the main symptoms of schizophrenia. We propose that auditory hallucinations arise from abnormalities in the predictive coding which underlies normal perception, specifically from the absence or attenuation of prediction error. The suggested deficiencies in processing prediction error could arise from (1) abnormal modulation of the thalamus by the prefrontal cortex, (2) absent or impaired transmission of external input, (3) dysfunction of the auditory and association cortex, (4) neurotransmitter dysfunction and abnormal connectivity, and (5) hyperactivity in the auditory cortex and a broad prior probability. If there is no prediction error, the initially vague prior probability develops into an explicit percept in the absence of external input, as a result of a recursive pathological exchange between the auditory and prefrontal cortex. Unlike existing explanations of auditory hallucinations, we propose concrete mechanisms which underlie the imbalance between perceptual expectation and external input. Impaired processing of prediction error is reflected in reduced mismatch negativity and an increased tendency to report non-existent meaningful language stimuli in white noise, shown by those suffering from auditory hallucinations. We believe that the expectation-perception model of auditory hallucinations offers a comprehensive explanation of the underpinnings of auditory hallucinations in both patients and those not diagnosed with mental illness. Our hypothesis therefore has the potential to fill gaps in the existing knowledge about this distressing phenomenon and to contribute to improved effectiveness of treatments targeting specific mechanisms.
14. Nucleus lentiformis--a new model for psychiatry? Med Hypotheses 2011; 76:720-2. PMID: 21367533; DOI: 10.1016/j.mehy.2011.02.005
Abstract
In a regions-of-interest (ROI) survey of publications between 1990 and 2010 on the most frequent psychiatric disorders (schizophrenia, depression, anxiety, addiction), we found the nucleus lentiformis to be the topographical brain region most frequently cited in connection with these disorders. This structure, which is primarily involved in motor control, appears to play a much greater role in the control and modulation of psychiatric disorders than has thus far been assumed. Further studies should address the extent to which this region exerts its own control function in these disorders and clarify possible factors influencing its activity.