1
Weiss SM, Aydin E, Lloyd-Fox S, Johnson MH. Trajectories of brain and behaviour development in the womb, at birth and through infancy. Nat Hum Behav 2024; 8:1251-1262. [PMID: 38886534] [DOI: 10.1038/s41562-024-01896-7]
Abstract
Birth is often seen as the starting point for studying effects of the environment on human development, with much research focused on the capacities of young infants. However, recent imaging advances have revealed the complex behaviours of the fetus and the influence exerted by the uterine environment. Birth is now viewed as a punctuated event along a developmental pathway of increasing autonomy of the child from their mother. Here we highlight (1) increasing physiological autonomy and perceptual sensitivity in the fetus, (2) physiological and neurochemical processes associated with birth that influence future behaviour, (3) the recalibration of motor and sensory systems in the newborn to adapt to the world outside the womb and (4) the effect of the prenatal environment on later infant behaviours and brain function. Taken together, these lines of evidence move us beyond nature-nurture debates to a developmental human lifespan view beginning within the womb.
Affiliation(s)
- Staci Meredith Weiss
- University of Cambridge, Department of Psychology, Cambridge, UK.
- University of Roehampton, School of Psychology, London, UK.
- Ezra Aydin
- University of Cambridge, Department of Psychology, Cambridge, UK
- Vagelos College of Physicians and Surgeons, Columbia University, New York, NY, USA
- Sarah Lloyd-Fox
- University of Cambridge, Department of Psychology, Cambridge, UK
- Mark H Johnson
- University of Cambridge, Department of Psychology, Cambridge, UK
- Centre for Brain and Cognitive Development, Birkbeck, University of London, London, UK
2
Endevelt-Shapira Y, Feldman R. Mother-Infant Brain-to-Brain Synchrony Patterns Reflect Caregiving Profiles. Biology (Basel) 2023; 12:284. [PMID: 36829560] [PMCID: PMC9953313] [DOI: 10.3390/biology12020284]
Abstract
Biobehavioral synchrony, the coordination of physiological and behavioral signals between mother and infant during social contact, tunes the child's brain to the social world. Probing this mechanism from a two-brain perspective, we examine the associations between patterns of mother-infant inter-brain synchrony and the two well-studied maternal behavioral orientations - sensitivity and intrusiveness - which have repeatedly been shown to predict positive and negative socio-emotional outcomes, respectively. Using dual-electroencephalogram (EEG) recordings, we measure inter-brain connectivity between 60 mothers and their 5- to 12-month-old infants during face-to-face interaction. Thirty inter-brain connections show significantly higher correlations during the real mother-infant face-to-face interaction compared to surrogate data. Brain-behavior correlations indicate that higher maternal sensitivity is linked with greater mother-infant neural synchrony, whereas higher maternal intrusiveness is associated with lower inter-brain coordination. Post hoc analysis reveals that the mother-right-frontal-infant-left-temporal connection is particularly sensitive to the mother's sensitive style, while the mother-left-frontal-infant-right-temporal connection indexes the intrusive style. Our results support the perspective that inter-brain synchrony is a mechanism by which mature brains externally regulate immature brains and attune them to social living, and suggest that one pathway by which sensitivity and intrusiveness exert their long-term effect may relate to the provision of coordinated inputs to the social brain during its sensitive period of maturation.
Affiliation(s)
- Yaara Endevelt-Shapira
- Center for Developmental Social Neuroscience, Reichman University, Herzliya 4610101, Israel
- Correspondence: (Y.E.-S.); (R.F.)
- Ruth Feldman
- Center for Developmental Social Neuroscience, Reichman University, Herzliya 4610101, Israel
- Child Study Center, Yale University, New Haven, CT 06520, USA
- Correspondence: (Y.E.-S.); (R.F.)
3
Setti W, Cuturi LF, Cocchi E, Gori M. Spatial Memory and Blindness: The Role of Visual Loss on the Exploration and Memorization of Spatialized Sounds. Front Psychol 2022; 13:784188. [PMID: 35686077] [PMCID: PMC9171105] [DOI: 10.3389/fpsyg.2022.784188]
Abstract
Spatial memory relies on the encoding, storage, and retrieval of knowledge about objects’ positions in their surrounding environment. Blind people have to rely on sensory modalities other than vision to memorize items that are spatially displaced; however, to date, very little is known about the influence of early visual deprivation on a person’s ability to remember and process sound locations. To fill this gap, we tested sighted and congenitally blind adults and adolescents in an audio-spatial memory task inspired by the classical card game “Memory.” In this research, subjects (blind, n = 12; sighted, n = 12) had to find pairs among sounds (i.e., animal calls) displaced on an audio-tactile device composed of loudspeakers covered by tactile sensors. To accomplish this task, participants had to remember the spatialized sounds’ positions and develop a proper mental spatial representation of their locations. The test was divided into two experimental conditions of increasing difficulty dependent on the number of sounds to be remembered (8 vs. 24). Results showed that sighted participants outperformed blind participants in both conditions. Findings are discussed considering the crucial role of visual experience in properly manipulating auditory spatial representations, particularly in relation to the ability to explore complex acoustic configurations.
Affiliation(s)
- Walter Setti
- Unit for Visually Impaired People (U-VIP), Italian Institute of Technology, Genoa, Italy
- Correspondence: Walter Setti
- Luigi F. Cuturi
- Unit for Visually Impaired People (U-VIP), Italian Institute of Technology, Genoa, Italy
- Monica Gori
- Unit for Visually Impaired People (U-VIP), Italian Institute of Technology, Genoa, Italy
4
Staib M, Frühholz S. Distinct functional levels of human voice processing in the auditory cortex. Cereb Cortex 2022; 33:1170-1185. [PMID: 35348635] [PMCID: PMC9930621] [DOI: 10.1093/cercor/bhac128]
Abstract
Voice signaling is integral to human communication, and a cortical voice area has been proposed to support the discrimination of voices from other auditory objects. This large cortical voice area in the auditory cortex (AC) was suggested to process voices selectively, but its functional differentiation has remained elusive. We used neuroimaging while humans processed voices, nonvoice sounds, and artificial sounds that mimicked certain voice sound features. First, and surprisingly, specific auditory cortical voice processing beyond basic acoustic sound analyses is only supported by a very small portion of the originally described voice area in higher-order AC located centrally in superior Te3. Second, besides this core voice processing area, large parts of the remaining voice area in low- and higher-order AC only accessorily process voices and might primarily pick up nonspecific psychoacoustic differences between voices and nonvoices. Third, a specific subfield of low-order AC seems to specifically decode acoustic sound features that are relevant but not exclusive for voice detection. Taken together, the previously defined voice area might have been overestimated, since cortical support for human voice processing seems rather restricted. Cortical voice processing also seems to be functionally more diverse and embedded in broader functional principles of the human auditory system.
Affiliation(s)
- Matthias Staib
- Cognitive and Affective Neuroscience Unit, University of Zurich, 8050 Zurich, Switzerland
- Sascha Frühholz
- Corresponding author: Department of Psychology, University of Zürich, Binzmuhlestrasse 14/18, 8050 Zürich, Switzerland.
5
Charpentier J, Latinus M, Andersson F, Saby A, Cottier JP, Bonnet-Brilhault F, Houy-Durand E, Gomot M. Brain correlates of emotional prosodic change detection in autism spectrum disorder. Neuroimage Clin 2020; 28:102512. [PMID: 33395999] [PMCID: PMC8481911] [DOI: 10.1016/j.nicl.2020.102512]
Highlights
- We used an oddball paradigm with vocal stimuli to record hemodynamic responses.
- Brain processing of vocal change relies on the STG, insula and lingual area.
- Activity of the change processing network can be modulated by saliency and emotion.
- Brain processing of vocal deviancy/novelty appears typical in adults with autism.
Abstract
Autism Spectrum Disorder (ASD) is currently diagnosed by the joint presence of social impairments and restrictive, repetitive patterns of behaviors. While the co-occurrence of these two categories of symptoms is at the core of the pathology, most studies investigated only one dimension to understand the underlying pathophysiology. In this study, we analyzed brain hemodynamic responses in neurotypical adults (CTRL) and adults with autism spectrum disorder during an oddball paradigm allowing us to explore brain responses to vocal changes with different levels of saliency (deviancy or novelty) and different emotional content (neutral, angry). Change detection relies on activation of the superior temporal gyrus and insula and on deactivation of the lingual area. The activity of these brain areas involved in the processing of deviancy with vocal stimuli was modulated by saliency and emotion. No group difference between CTRL and ASD was observed for vocal stimuli processing or for deviancy/novelty processing, regardless of emotional content. Findings highlight that brain processing of voices and of neutral/emotional vocal changes is typical in adults with ASD. Yet, at the behavioral level, persons with ASD still experience difficulties with those cues. This might indicate impairments at later processing stages or simply show that alterations present in childhood might have repercussions at adult age.
Affiliation(s)
- Agathe Saby
- Centre universitaire de pédopsychiatrie, CHRU de Tours, Tours, France
- Emmanuelle Houy-Durand
- UMR 1253 iBrain, Inserm, Université de Tours, Tours, France; Centre universitaire de pédopsychiatrie, CHRU de Tours, Tours, France
- Marie Gomot
- UMR 1253 iBrain, Inserm, Université de Tours, Tours, France.
6
Processing communicative facial and vocal cues in the superior temporal sulcus. Neuroimage 2020; 221:117191. [PMID: 32711066] [DOI: 10.1016/j.neuroimage.2020.117191]
Abstract
Facial and vocal cues provide critical social information about other humans, including their emotional and attentional states and the content of their speech. Recent work has shown that the face-responsive region of posterior superior temporal sulcus ("fSTS") also responds strongly to vocal sounds. Here, we investigate the functional role of this region and the broader STS by measuring responses to a range of face movements, vocal sounds, and hand movements using fMRI. We find that the fSTS responds broadly to different types of audio and visual face action, including both richly social communicative actions, as well as minimally social noncommunicative actions, ruling out hypotheses of specialization for processing speech signals, or communicative signals more generally. Strikingly, however, responses to hand movements were very low, whether communicative or not, indicating a specific role in the analysis of face actions (facial and vocal), not a general role in the perception of any human action. Furthermore, spatial patterns of response in this region were able to decode communicative from noncommunicative face actions, both within and across modality (facial/vocal cues), indicating sensitivity to an abstract social dimension. These functional properties of the fSTS contrast with a region of middle STS that has a selective, largely unimodal auditory response to speech sounds over both communicative and noncommunicative vocal nonspeech sounds, and nonvocal sounds. Region of interest analyses were corroborated by a data-driven independent component analysis, identifying face-voice and auditory speech responses as dominant sources of voxelwise variance across the STS. These results suggest that the STS contains separate processing streams for the audiovisual analysis of face actions and auditory speech processing.
7
Guldner S, Nees F, McGettigan C. Vocomotor and Social Brain Networks Work Together to Express Social Traits in Voices. Cereb Cortex 2020; 30:6004-6020. [PMID: 32577719] [DOI: 10.1093/cercor/bhaa175]
Abstract
Voice modulation is important when navigating social interactions-tone of voice in a business negotiation is very different from that used to comfort an upset child. While voluntary vocal behavior relies on a cortical vocomotor network, social voice modulation may require additional social cognitive processing. Using functional magnetic resonance imaging, we investigated the neural basis for social vocal control and whether it involves an interplay of vocal control and social processing networks. Twenty-four healthy adult participants modulated their voice to express social traits along the dimensions of the social trait space (affiliation and competence) or to express body size (control for vocal flexibility). Naïve listener ratings showed that vocal modulations were effective in evoking social trait ratings along the two primary dimensions of the social trait space. Whereas basic vocal modulation engaged the vocomotor network, social voice modulation specifically engaged social processing regions including the medial prefrontal cortex, superior temporal sulcus, and precuneus. Moreover, these regions showed task-relevant modulations in functional connectivity to the left inferior frontal gyrus, a core vocomotor control network area. These findings highlight the impact of the integration of vocal motor control and social information processing for socially meaningful voice modulation.
Affiliation(s)
- Stella Guldner
- Department of Cognitive and Clinical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim 68159, Germany; Graduate School of Economic and Social Sciences, University of Mannheim, Mannheim 68159, Germany; Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Frauke Nees
- Department of Cognitive and Clinical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim 68159, Germany; Institute of Medical Psychology and Medical Sociology, University Medical Center Schleswig Holstein, Kiel University, Kiel 24105, Germany
- Carolyn McGettigan
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK; Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
8
Zachlod D, Rüttgers B, Bludau S, Mohlberg H, Langner R, Zilles K, Amunts K. Four new cytoarchitectonic areas surrounding the primary and early auditory cortex in human brains. Cortex 2020; 128:1-21. [PMID: 32298845] [DOI: 10.1016/j.cortex.2020.02.021]
Abstract
The architectonic organization of putatively higher auditory areas in the human superior temporal gyrus and sulcus is not yet well understood. To provide a coherent map of this part of the brain, which is involved in language and other functions, we examined the cytoarchitecture and cortical parcellation of this region in histological sections of ten human postmortem brains using an observer-independent mapping approach. Two new areas were identified in the temporo-insular region (areas TeI, TI). TeI is medially adjacent to the primary auditory cortex (area Te1). TI is located between TeI and the insular cortex. Laterally adjacent to previously mapped areas Te2 and Te3, two new areas (STS1, STS2) were identified in the superior temporal sulcus. All four areas were mapped over their whole extent in serial, cell-body-stained sections, and their cytoarchitecture was analyzed using quantitative image analysis and multivariate statistics. Interestingly, area TeI, which is located between area Te1 and area TI at the transition to the insula, was more similar in cytoarchitecture to lateral area Te2.1 than to the directly adjacent areas TI and Te1. Such structural similarity of areas medially and laterally to Te1 would be in line with the core-belt-parabelt concept in macaques. The cytoarchitectonic probabilistic maps of all areas show the localization of the areas and their interindividual variability. The new maps are publicly available and provide a basis to further explore the structure-function relationships of the language network in the temporal cortex.
Affiliation(s)
- Daniel Zachlod
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany.
- Britta Rüttgers
- C. & O. Vogt Institute for Brain Research, University Hospital Düsseldorf, Heinrich-Heine University Düsseldorf, Düsseldorf, Germany
- Sebastian Bludau
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Hartmut Mohlberg
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Robert Langner
- Institute of Systems Neuroscience, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany; Institute of Neuroscience and Medicine (INM-7), Research Centre Jülich, Jülich, Germany
- Karl Zilles
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany; JARA-BRAIN, Jülich-Aachen Research Alliance, Jülich, Germany
- Katrin Amunts
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany; C. & O. Vogt Institute for Brain Research, University Hospital Düsseldorf, Heinrich-Heine University Düsseldorf, Düsseldorf, Germany; JARA-BRAIN, Jülich-Aachen Research Alliance, Jülich, Germany
9
McDonald NM, Perdue KL, Eilbott J, Loyal J, Shic F, Pelphrey KA. Infant brain responses to social sounds: A longitudinal functional near-infrared spectroscopy study. Dev Cogn Neurosci 2019; 36:100638. [PMID: 30889544] [PMCID: PMC7033285] [DOI: 10.1016/j.dcn.2019.100638]
Abstract
Infants are responsive to and show a preference for human vocalizations from very early in development. While previous studies have provided a strong foundation of understanding regarding areas of the infant brain that respond preferentially to social vs. non-social sounds, how the infant brain responds to sounds of varying social significance over time, and how this relates to behavior, is less well understood. The current study uniquely examined longitudinal brain responses to social sounds of differing social-communicative value in infants at 3 and 6 months of age using functional near-infrared spectroscopy (fNIRS). At 3 months, infants showed similar patterns of widespread activation in bilateral temporal cortices to communicative and non-communicative human non-speech vocalizations, while by 6 months infants showed more similar, and more focal, responses to social sounds that carried increased social value (infant-directed speech and human non-speech communicative sounds). In addition, we found that brain activity at 3 months of age related to later brain activity and to receptive language abilities as measured at 6 months. These findings suggest areas of consistency and change in auditory social perception between 3 and 6 months of age.
Affiliation(s)
- Nicole M McDonald
- Yale Child Study Center, 230 S. Frontage Rd., New Haven, CT, 06520, USA.
- Katherine L Perdue
- Division of Developmental Medicine, Boston Children's Hospital, 1 Autumn St., 6th Floor, Boston, MA, USA.
- Jeffrey Eilbott
- Yale Child Study Center, 230 S. Frontage Rd., New Haven, CT, 06520, USA.
- Jaspreet Loyal
- Children's Hospital, Yale New Haven Hospital, 20 York St., New Haven, CT, 06510, USA.
- Frederick Shic
- Yale Child Study Center, 230 S. Frontage Rd., New Haven, CT, 06520, USA.
- Kevin A Pelphrey
- Yale Child Study Center, 230 S. Frontage Rd., New Haven, CT, 06520, USA.
10
Jesse A, Bartoli M. Learning to recognize unfamiliar talkers: Listeners rapidly form representations of facial dynamic signatures. Cognition 2018; 176:195-208. [DOI: 10.1016/j.cognition.2018.03.018]
11
Fritz T, Mueller K, Guha A, Gouws A, Levita L, Andrews TJ, Slocombe KE. Human behavioural discrimination of human, chimpanzee and macaque affective vocalisations is reflected by the neural response in the superior temporal sulcus. Neuropsychologia 2018; 111:145-150. [PMID: 29366950] [DOI: 10.1016/j.neuropsychologia.2018.01.026]
Abstract
Accurate perception of the emotional content of vocalisations is essential for successful social communication and interaction. However, it is not clear whether our ability to perceive emotional cues from vocal signals is specific to human signals, or can be applied to other species' vocalisations. Here, we address this issue by evaluating the perception of and neural response to affective vocalisations from different primate species (humans, chimpanzees and macaques). We found that the ability of human participants to discriminate emotional valence varied as a function of phylogenetic distance between species. Participants were most accurate at discriminating the emotional valence of human vocalisations, followed by chimpanzee vocalisations. They were, however, unable to accurately discriminate the valence of macaque vocalisations. Next, we used fMRI to compare human brain responses to human, chimpanzee and macaque vocalisations. We found that regions in the superior temporal lobe that are closely associated with the perception of complex auditory signals showed a graded response to affective vocalisations from different species, with the largest response to human vocalisations, an intermediate response to chimpanzees, and the smallest response to macaques. Together, these results suggest that neural correlates of differences in the perception of different primate affective vocalisations are found in auditory regions of the human brain and correspond to the phylogenetic distances between the species.
Affiliation(s)
- Thomas Fritz
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Science, Leipzig, Germany; Institute for Psychoacoustics and Electronic Music (IPEM), University of Ghent, Belgium
- Karsten Mueller
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Science, Leipzig, Germany
- Anika Guha
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Science, Leipzig, Germany
- Andre Gouws
- Department of Psychology, University of York, UK
- Liat Levita
- Department of Psychology, University of Sheffield, UK
12
McGettigan C, Jasmin K, Eisner F, Agnew ZK, Josephs OJ, Calder AJ, Jessop R, Lawson RP, Spielmann M, Scott SK. You talkin' to me? Communicative talker gaze activates left-lateralized superior temporal cortex during perception of degraded speech. Neuropsychologia 2017; 100:51-63. [PMID: 28400328] [PMCID: PMC5446325] [DOI: 10.1016/j.neuropsychologia.2017.04.013]
Abstract
Neuroimaging studies of speech perception have consistently indicated a left-hemisphere dominance in the temporal lobes’ responses to intelligible auditory speech signals (McGettigan and Scott, 2012). However, there are important communicative cues that cannot be extracted from auditory signals alone, including the direction of the talker's gaze. Previous work has implicated the superior temporal cortices in processing gaze direction, with evidence for predominantly right-lateralized responses (Carlin and Calder, 2013). The aim of the current study was to investigate whether the lateralization of responses to talker gaze differs in an auditory communicative context. Participants in a functional MRI experiment watched and listened to videos of spoken sentences in which the auditory intelligibility and talker gaze direction were manipulated factorially. We observed a left-dominant temporal lobe sensitivity to the talker's gaze direction, in which the left anterior superior temporal sulcus/gyrus and temporal pole showed an enhanced response to direct gaze; further investigation revealed that this pattern of lateralization was modulated by auditory intelligibility. Our results suggest flexibility in the distribution of neural responses to social cues in the face within the context of a challenging speech perception task.
Highlights
- Talker gaze is an important social cue during speech comprehension.
- Neural responses to gaze were measured during perception of degraded sentences.
- Gaze direction modulated activation in left-lateralized superior temporal cortex.
- Left lateralization became stronger when speech was less intelligible.
- Results suggest task-dependent flexibility in cortical responses to gaze.
Affiliation(s)
- Carolyn McGettigan
- Department of Psychology, Royal Holloway University of London, Egham Hill, Egham TW20 0EX, UK; Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK.
- Kyle Jasmin
- Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK
- Frank Eisner
- Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK; Donders Institute, Radboud University, Montessorilaan 3, 6525 HR Nijmegen, Netherlands
- Zarinah K Agnew
- Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK; Department of Otolaryngology, University of California, San Francisco, 513 Parnassus Avenue, San Francisco, CA, USA
- Oliver J Josephs
- Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK; Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK
- Andrew J Calder
- MRC Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge CB2 7EF, UK
- Rosemary Jessop
- Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK
- Rebecca P Lawson
- Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK; Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK
- Mona Spielmann
- Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK
- Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK
13
Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus. J Neurosci 2017; 37:2697-2708. [PMID: 28179553] [DOI: 10.1523/jneurosci.2914-16.2017]
Abstract
Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS.
SIGNIFICANCE STATEMENT: Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of the pSTS. Different pSTS regions prefer visually presented faces containing either a moving mouth or moving eyes, but only mouth-preferring regions respond strongly to voices.
14
Neural correlates of the affective properties of spontaneous and volitional laughter types. Neuropsychologia 2016; 95:30-39. [PMID: 27940151] [DOI: 10.1016/j.neuropsychologia.2016.12.012]
Abstract
Previous investigations of vocal expressions of emotion have identified acoustic and perceptual distinctions between expressions of different emotion categories, and between spontaneous and volitional (or acted) variants of a given category. Recent work on laughter has identified relationships between acoustic properties of laughs and their perceived affective properties (arousal and valence) that are similar across spontaneous and volitional types (Bryant & Aktipis, 2014; Lavan et al., 2016). In the current study, we explored the neural correlates of such relationships by measuring modulations of the BOLD response in the presence of itemwise variability in the subjective affective properties of spontaneous and volitional laughter. Across all laughs, and within spontaneous and volitional sets, we consistently observed linear increases in the response of bilateral auditory cortices (including Heschl's gyrus and superior temporal gyrus [STG]) associated with higher ratings of perceived arousal, valence and authenticity. Areas in the anterior medial prefrontal cortex (amPFC) showed negative linear correlations with valence and authenticity ratings across the full set of spontaneous and volitional laughs; in line with previous research (McGettigan et al., 2015; Szameitat et al., 2010), we suggest that this reflects increased engagement of these regions in response to laughter of greater social ambiguity. Strikingly, an investigation of higher-order relationships between the entire laughter set and the neural response revealed a positive quadratic profile of the BOLD response in right-dominant STG (extending onto the dorsal bank of the STS), where this region responded most strongly to laughs rated at the extremes of the authenticity scale. While previous studies claimed a role for the right STG in bipolar representation of emotional valence, we instead argue that this region may in fact exhibit a relatively categorical response to emotional signals, whether positive or negative.
15
Axelrod V. On the domain-specificity of the visual and non-visual face-selective regions. Eur J Neurosci 2016; 44:2049-2063. [PMID: 27255921] [DOI: 10.1111/ejn.13290]
Abstract
What happens in our brains when we see a face? The neural mechanisms of face processing - namely, the face-selective regions - have been extensively explored. Research has traditionally focused on visual cortex face-regions; more recently, the role of face-regions outside the visual cortex (i.e., non-visual-cortex face-regions) has been acknowledged as well. The major quest today is to reveal the functional role of each of these regions in face processing. To make progress in this direction, it is essential to understand the extent to which the face-regions, and particularly the non-visual-cortex face-regions, process only faces (i.e., face-specific, domain-specific processing) or rather are involved in more domain-general cognitive processing. In the current functional MRI study, we systematically examined the activity of the whole face-network during a face-unrelated reading task (i.e., written meaningful sentences with content unrelated to faces/people and non-words). We found that the non-visual-cortex face-regions (i.e., right lateral prefrontal cortex and posterior superior temporal sulcus), but not the visual cortex face-regions, responded significantly more strongly to sentences than to non-words. In general, some degree of sentence selectivity was found in all non-visual-cortex face-regions. The present results highlight the possibility that processing in the non-visual-cortex face-selective regions might not be exclusively face-specific, but rather partly or even fully domain-general. In this paper, we illustrate how knowledge about domain-general processing in face-regions can help to advance our general understanding of face processing mechanisms. Our results therefore suggest that the problem of face processing should be approached in the broader scope of cognition in general.
Affiliation(s)
- Vadim Axelrod
- UCL Institute of Cognitive Neuroscience, University College London, London, UK; The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, 52900, Israel
16
Wang AT, Lim T, Jamison J, Bush L, Soorya LV, Tavassoli T, Siper PM, Buxbaum JD, Kolevzon A. Neural selectivity for communicative auditory signals in Phelan-McDermid syndrome. J Neurodev Disord 2016; 8:5. [PMID: 26909118] [PMCID: PMC4763436] [DOI: 10.1186/s11689-016-9138-9]
Abstract
BACKGROUND: Phelan-McDermid syndrome (PMS), a neurodevelopmental disorder caused by deletion or mutation in the SHANK3 gene, is one of the more common single-locus causes of autism spectrum disorder (ASD). PMS is characterized by global developmental delay, hypotonia, delayed or absent speech, increased risk of seizures, and minor dysmorphic features. Impairments in language and communication are one of the most consistent characteristics of PMS. Although there is considerable overlap in the social communicative deficits associated with PMS and ASD, there is a dearth of data on underlying abnormalities at the level of neural systems in PMS. No controlled neuroimaging studies of PMS have been reported to date. The goal of this study was to examine the neural circuitry supporting the perception of auditory communicative signals in children with PMS as compared to idiopathic ASD (iASD).
METHODS: Eleven children with PMS and nine comparison children with iASD were scanned using functional magnetic resonance imaging (fMRI) under light sedation. The fMRI paradigm was a previously validated passive auditory task, which presented communicative (e.g., speech, sounds of agreement, disgust) and non-communicative vocalizations (e.g., sneezing, coughing, yawning).
RESULTS: Previous research has shown that the superior temporal gyrus (STG) responds selectively to communicative vocal signals in typically developing children and adults. Here, selective activity for communicative relative to non-communicative vocalizations was detected in the right STG in the PMS group, but not in the iASD group. The PMS group also showed preferential activity for communicative vocalizations in a range of other brain regions associated with social cognition, such as the medial prefrontal cortex (MPFC), insula, and inferior frontal gyrus. Interestingly, better orienting toward social sounds was positively correlated with selective activity in the STG and other "social brain" regions, including the MPFC, in the PMS group. Finally, selective MPFC activity for communicative sounds was associated with receptive language level in the PMS group and expressive language in the iASD group.
CONCLUSIONS: Despite shared behavioral features, children with PMS differed from children with iASD in their neural response to communicative vocal sounds and showed relative strengths in this area. Furthermore, the relationship between clinical characteristics and neural selectivity also differed between the two groups, suggesting that shared ASD features may partially reflect different neurofunctional abnormalities due to differing etiologies.
Affiliation(s)
- A Ting Wang
- Seaver Autism Center for Research and Treatment, Icahn School of Medicine at Mount Sinai, One Gustave L. Levy Place, Box 1230, New York, NY 10029, USA; Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA; Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA; Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Teresa Lim
- Department of Psychiatry, Rouge Valley Health System, Toronto, Canada
- Jesslyn Jamison
- Seaver Autism Center for Research and Treatment, Icahn School of Medicine at Mount Sinai, One Gustave L. Levy Place, Box 1230, New York, NY 10029, USA; Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Lauren Bush
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Teresa Tavassoli
- Seaver Autism Center for Research and Treatment, Icahn School of Medicine at Mount Sinai, One Gustave L. Levy Place, Box 1230, New York, NY 10029, USA; Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Paige M Siper
- Seaver Autism Center for Research and Treatment, Icahn School of Medicine at Mount Sinai, One Gustave L. Levy Place, Box 1230, New York, NY 10029, USA; Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Joseph D Buxbaum
- Seaver Autism Center for Research and Treatment, Icahn School of Medicine at Mount Sinai, One Gustave L. Levy Place, Box 1230, New York, NY 10029, USA; Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA; Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA; Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA; Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA; Mindich Child Health and Development Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Alexander Kolevzon
- Seaver Autism Center for Research and Treatment, Icahn School of Medicine at Mount Sinai, One Gustave L. Levy Place, Box 1230, New York, NY 10029, USA; Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA; Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA; Mindich Child Health and Development Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA; Department of Pediatrics, Icahn School of Medicine at Mount Sinai, New York, NY, USA
17
Tona R, Naito Y, Moroto S, Yamamoto R, Fujiwara K, Yamazaki H, Shinohara S, Kikuchi M. Audio-visual integration during speech perception in prelingually deafened Japanese children revealed by the McGurk effect. Int J Pediatr Otorhinolaryngol 2015; 79:2072-2078. [PMID: 26455920] [DOI: 10.1016/j.ijporl.2015.09.016]
Abstract
OBJECTIVE: To investigate the McGurk effect in profoundly deafened Japanese children with cochlear implants (CI) and in normal-hearing children. This was done to identify how children with profound deafness using CI established audiovisual integration during the speech acquisition period.
METHODS: Twenty-four prelingually deafened children with CI and 12 age-matched normal-hearing children participated in this study. Responses to audiovisual stimuli were compared between deafened children and normal-hearing controls. Additionally, responses of the children with CI younger than 6 years of age were compared with those of the children with CI at least 6 years of age at the time of the test.
RESULTS: Responses to stimuli combining auditory labials and visual non-labials were significantly different between deafened children with CI and normal-hearing controls (p<0.05). Additionally, the McGurk effect tended to be more strongly induced in deafened children older than 6 years of age than in their younger counterparts.
CONCLUSIONS: The McGurk effect was more significantly induced in prelingually deafened Japanese children with CI than in normal-hearing, age-matched Japanese children. Despite having good speech-perception skills and auditory input through their CI from early childhood, deafened children may use more visual information in speech perception than normal-hearing children. As children using CI need to communicate based on insufficient speech signals coded by CI, additional activities of higher-order brain function may be necessary to compensate for the incomplete auditory input. This study provided information on the influence of deafness on the development of audiovisual integration related to speech, which could contribute to our further understanding of the strategies used in spoken language communication by prelingually deafened children.
Affiliation(s)
- Risa Tona
- Department of Otolaryngology, Kobe City Medical Center General Hospital, Kobe, Japan; Department of Otolaryngology, Institute of Biomedical Research and Innovation, Kobe, Japan; Department of Otolaryngology, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Yasushi Naito
- Department of Otolaryngology, Kobe City Medical Center General Hospital, Kobe, Japan; Department of Otolaryngology, Institute of Biomedical Research and Innovation, Kobe, Japan.
- Saburo Moroto
- Department of Otolaryngology, Kobe City Medical Center General Hospital, Kobe, Japan; Department of Otolaryngology, Institute of Biomedical Research and Innovation, Kobe, Japan
- Rinko Yamamoto
- Department of Otolaryngology, Kobe City Medical Center General Hospital, Kobe, Japan
- Keizo Fujiwara
- Department of Otolaryngology, Kobe City Medical Center General Hospital, Kobe, Japan; Department of Otolaryngology, Institute of Biomedical Research and Innovation, Kobe, Japan
- Hiroshi Yamazaki
- Department of Otolaryngology, Kobe City Medical Center General Hospital, Kobe, Japan; Department of Otolaryngology, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Shogo Shinohara
- Department of Otolaryngology, Kobe City Medical Center General Hospital, Kobe, Japan
- Masahiro Kikuchi
- Department of Otolaryngology, Kobe City Medical Center General Hospital, Kobe, Japan
18
Poliva O. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans. F1000Res 2015; 4:67. [PMID: 28928931] [PMCID: PMC5600004] [DOI: 10.12688/f1000research.6175.3]
Abstract
In the brain of primates, the auditory cortex connects with the frontal lobe via the temporal pole (auditory ventral stream; AVS) and via the inferior parietal lobe (auditory dorsal stream; ADS). The AVS is responsible for sound recognition, and the ADS for sound-localization, voice detection and integration of calls with faces. I propose that the primary role of the ADS in non-human primates is the detection and response to contact calls. These calls are exchanged between tribe members (e.g., mother-offspring) and are used for monitoring location. Detection of contact calls occurs by the ADS identifying a voice, localizing it, and verifying that the corresponding face is out of sight. Once a contact call is detected, the primate produces a contact call in return via descending connections from the frontal lobe to a network of limbic and brainstem regions. Because the ADS of present-day humans also performs speech production, I further propose an evolutionary course for the transition from contact call exchange to an early form of speech. In accordance with this model, structural changes to the ADS endowed early members of the genus Homo with partial vocal control. This development was beneficial as it enabled offspring to modify their contact calls with intonations for signaling high or low levels of distress to their mother. Eventually, individuals were capable of participating in yes-no question-answer conversations. In these conversations the offspring emitted a low-level distress call for inquiring about the safety of objects (e.g., food), and his/her mother responded with a high- or low-level distress call to signal approval or disapproval of the interaction. Gradually, the ADS and its connections with brainstem motor regions became more robust and vocal control became more volitional. Speech emerged once vocal control was sufficient for inventing novel calls.
19
Raschle NM, Smith SA, Zuk J, Dauvermann MR, Figuccio MJ, Gaab N. Investigating the neural correlates of voice versus speech-sound directed information in pre-school children. PLoS One 2014; 9:e115549. [PMID: 25532132] [PMCID: PMC4274095] [DOI: 10.1371/journal.pone.0115549]
Abstract
Studies in sleeping newborns and infants propose that the superior temporal sulcus is involved in speech processing soon after birth. Speech processing also implicitly requires the analysis of the human voice, which conveys both linguistic and extra-linguistic information. However, due to technical and practical challenges when neuroimaging young children, evidence of neural correlates of speech and/or voice processing in toddlers and young children remains scarce. In the current study, we used functional magnetic resonance imaging (fMRI) in 20 typically developing preschool children (average age = 5.8 y; range 5.2-6.8 y) to investigate brain activation during judgments about vocal identity versus the initial speech sound of spoken object words. FMRI results reveal common brain regions responsible for voice-specific and speech-sound-specific processing of spoken object words, including bilateral primary and secondary language areas of the brain. Contrasting voice-specific with speech-sound-specific processing predominantly activates the anterior part of the right-hemispheric superior temporal sulcus. Furthermore, the right STS is functionally correlated with left-hemispheric temporal and right-hemispheric prefrontal regions. This finding underlines the importance of the right superior temporal sulcus as a temporal voice area and indicates that this brain region is specialized and functions similarly to that of adults by the age of five. We thus extend previous knowledge of voice-specific regions and their functional connections to the young brain, which may further our understanding of the neuronal mechanisms of speech-specific processing in children with developmental disorders, such as autism or specific language impairments.
Collapse
Affiliation(s)
- Nora Maria Raschle
- Laboratories of Cognitive Neuroscience, Division of Developmental Medicine, Department of Developmental Medicine, Boston Children's Hospital, Boston, Massachusetts, United States of America
- Harvard Medical School, Boston, Massachusetts, United States of America
- Psychiatric University Clinics Basel, Department of Child and Adolescent Psychiatry, Basel, Switzerland
| | - Sara Ashley Smith
- Laboratories of Cognitive Neuroscience, Division of Developmental Medicine, Department of Developmental Medicine, Boston Children's Hospital, Boston, Massachusetts, United States of America
| | - Jennifer Zuk
- Laboratories of Cognitive Neuroscience, Division of Developmental Medicine, Department of Developmental Medicine, Boston Children's Hospital, Boston, Massachusetts, United States of America
- Harvard Medical School, Boston, Massachusetts, United States of America
| | - Maria Regina Dauvermann
- Laboratories of Cognitive Neuroscience, Division of Developmental Medicine, Department of Developmental Medicine, Boston Children's Hospital, Boston, Massachusetts, United States of America
- Harvard Medical School, Boston, Massachusetts, United States of America
| | - Michael Joseph Figuccio
- Laboratories of Cognitive Neuroscience, Division of Developmental Medicine, Department of Developmental Medicine, Boston Children's Hospital, Boston, Massachusetts, United States of America
| | - Nadine Gaab
- Laboratories of Cognitive Neuroscience, Division of Developmental Medicine, Department of Developmental Medicine, Boston Children's Hospital, Boston, Massachusetts, United States of America
- Harvard Medical School, Boston, Massachusetts, United States of America
- Harvard Graduate School of Education, Cambridge, Massachusetts, United States of America
| |
Collapse
|
22
|
Chi RP, Snyder AW. Treating autism by targeting the temporal lobes. Med Hypotheses 2014; 83:614-8. [PMID: 25227333 DOI: 10.1016/j.mehy.2014.08.002] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2014] [Revised: 07/19/2014] [Accepted: 08/05/2014] [Indexed: 12/21/2022]
Abstract
Compelling new findings suggest that an early core signature of autism is a deficient left anterior temporal lobe response to language together with atypical over-activation of the right anterior temporal lobe. Intriguingly, our recent results, from an entirely different line of reasoning and experiments, show that applying cathodal (suppressive) stimulation to the left anterior temporal lobe together with anodal (facilitative) stimulation to the right anterior temporal lobe, via transcranial direct current stimulation (tDCS), can induce some autistic-like cognitive abilities in otherwise normal adults. If we can briefly induce autistic-like cognitive abilities in healthy individuals, it follows that we might be able to mitigate some autistic traits by reversing the above stimulation protocol, in an attempt to restore the typical dominance of the left anterior temporal lobe. Accordingly, we hypothesize that at least some autistic traits can be mitigated by applying anodal (facilitative) stimulation to the left anterior temporal lobe together with cathodal (suppressive) stimulation to the right anterior temporal lobe. Our hypothesis is supported by strong convergent evidence that autistic symptoms can emerge, and later reverse, with the onset and subsequent recovery of various temporal lobe pathologies (predominantly left-sided). It is also consistent with evidence that the temporal lobes (especially the left) act as a conceptual hub, critical for extracting meaning from lower-level sensory information to form a coherent representation, and that a temporal lobe deficit underlies autistic traits.
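The core of the hypothesis is that the mitigating protocol is simply the polarity reversal of the inducing montage. A minimal sketch of that relationship, with hypothetical names (the paper specifies electrode sites, not code):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TDCSMontage:
    anode: str    # facilitated site
    cathode: str  # suppressed site

    def reversed(self) -> "TDCSMontage":
        # Swapping polarities yields the opposite intervention.
        return TDCSMontage(anode=self.cathode, cathode=self.anode)

# Montage reported to induce autistic-like abilities in healthy adults:
induce = TDCSMontage(anode="right anterior temporal lobe",
                     cathode="left anterior temporal lobe")

# The hypothesized mitigating protocol is its reversal:
mitigate = induce.reversed()
assert mitigate.anode == "left anterior temporal lobe"
```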
Collapse
Affiliation(s)
| | - Allan W Snyder
- Sydney Medical School, Medical Foundation Building (K25), The University of Sydney, NSW 2006, Australia.
| |
Collapse
|
23
|
Varvatsoulias G. Voice-Sensitive Areas in the Brain: A Single Participant Study Coupled With Brief Evolutionary Psychological Considerations. PSYCHOLOGICAL THOUGHT 2014. [DOI: 10.5964/psyct.v7i1.98] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
|
24
|
Bennett RH, Bolling DZ, Anderson LC, Pelphrey KA, Kaiser MD. fNIRS detects temporal lobe response to affective touch. Soc Cogn Affect Neurosci 2014; 9:470-6. [PMID: 23327935 PMCID: PMC3989128 DOI: 10.1093/scan/nst008] [Citation(s) in RCA: 50] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2012] [Accepted: 01/09/2013] [Indexed: 11/14/2022] Open
Abstract
Touch plays a crucial role in social-emotional development. Slow, gentle touch applied to hairy skin is processed by C-tactile (CT) nerve fibers, and 'social brain' regions, such as the posterior superior temporal sulcus (pSTS), have been shown to process CT-targeted touch. Research on the development of these neural mechanisms is scant, yet such knowledge may inform our understanding of the critical role of touch in development and of its dysfunction in disorders involving sensory issues, such as autism. The aim of this study was to validate the ability of functional near-infrared spectroscopy (fNIRS), an imaging technique well suited for use with infants, to measure temporal lobe responses to CT-targeted touch. Healthy adults received brushing to the right forearm (CT) and palm (non-CT) separately, in a block-design procedure. We found significant activation in the right pSTS and dorsolateral prefrontal cortex for arm > palm touch. In addition, individual differences in autistic traits were related to the magnitude of peak activation within the pSTS. These findings demonstrate that fNIRS can detect brain responses to CT-targeted touch and lay the foundation for future work characterizing the development of brain mechanisms for processing CT-targeted touch in typical and atypical infant populations.
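To make the arm > palm block-design contrast concrete, here is a minimal Python sketch on synthetic data; the participant count, block structure, and effect sizes are assumptions for illustration, not the authors' analysis pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative assumptions: 15 participants, 6 blocks per condition,
# one mean oxy-Hb estimate per block from a right-pSTS channel.
n_subj, n_blocks = 15, 6
arm  = rng.normal(loc=0.8, scale=0.5, size=(n_subj, n_blocks))   # CT-targeted (forearm)
palm = rng.normal(loc=0.3, scale=0.5, size=(n_subj, n_blocks))   # non-CT (palm)

# Average block responses within participant, then contrast arm > palm
# with a paired t-test across participants.
arm_mean, palm_mean = arm.mean(axis=1), palm.mean(axis=1)
t, p = stats.ttest_rel(arm_mean, palm_mean)
print(f"arm > palm: t({n_subj - 1}) = {t:.2f}, p = {p:.4f}")
```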
Collapse
Affiliation(s)
- Randi H Bennett
- Yale University, 230 South Frontage Road, New Haven, CT 06520, USA.
| | | | | | | | | |
Collapse
|
25
|
Shultz S, Vouloumanos A, Bennett RH, Pelphrey K. Neural specialization for speech in the first months of life. Dev Sci 2014; 17:766-74. [PMID: 24576182 PMCID: PMC4232861 DOI: 10.1111/desc.12151] [Citation(s) in RCA: 62] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2013] [Accepted: 10/08/2013] [Indexed: 11/29/2022]
Abstract
How does the brain's response to speech change over the first months of life? Although behavioral findings indicate that neonates' listening biases are sharpened over the first months of life, with a species-specific preference for speech emerging by 3 months, the neural substrates underlying this developmental change are unknown. We examined neural responses to speech compared with biological non-speech sounds in 1- to 4-month-old infants using fMRI. Infants heard speech and biological non-speech sounds, including heterospecific vocalizations and human non-speech sounds. We observed a left-lateralized response in temporal cortex for speech compared with biological non-speech sounds, indicating that this region is highly selective for speech by the first month of life. Moreover, this brain region becomes increasingly selective for speech over the next 3 months as its neural substrates become less responsive to non-speech sounds. These results reveal specific changes in neural responses during a developmental period characterized by rapid behavioral change.
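The left-lateralized response described here is commonly quantified with a lateralization index, LI = (L - R) / (L + R), where LI > 0 indicates left dominance. The sketch below computes it from hypothetical per-infant response estimates; all numbers are synthetic, and the calculation assumes positive responses in both hemispheres.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical speech > non-speech response estimates from left and
# right temporal ROIs for 20 infants; values are synthetic.
left  = rng.normal(1.0, 0.4, size=20)
right = rng.normal(0.6, 0.4, size=20)

# Lateralization index per infant, then a one-sample test against 0
# (no lateralization).
li = (left - right) / (left + right)
t, p = stats.ttest_1samp(li, 0.0)
print(f"mean LI = {li.mean():.2f}, t(19) = {t:.2f}, p = {p:.4f}")
```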
Collapse
|
26
|
Farroni T, Chiarelli AM, Lloyd-Fox S, Massaccesi S, Merla A, Di Gangi V, Mattarello T, Faraguna D, Johnson MH. Infant cortex responds to other humans from shortly after birth. Sci Rep 2013; 3:2851. [PMID: 24092239 PMCID: PMC3790196 DOI: 10.1038/srep02851] [Citation(s) in RCA: 57] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2013] [Accepted: 09/05/2013] [Indexed: 11/09/2022] Open
Abstract
A significant feature of the adult human brain is its ability to selectively process information about conspecifics. Much debate has centred on whether this specialization is primarily a result of phylogenetic adaptation, or whether the brain acquires expertise in processing social stimuli as a result of being born into an intensely social environment. Here we studied the haemodynamic response in cortical areas of newborns (1-5 days old) while they passively viewed dynamic human or mechanical action videos. We observed activation selective to a dynamic face stimulus over bilateral posterior temporal cortex, but no activation in response to a moving human arm. This selective activation to the social stimulus correlated with age in hours over the first few days post partum. Thus, even very limited experience of face-to-face interaction with other humans may be sufficient to elicit social-stimulus activation of the relevant cortical regions.
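The key individual-differences result is a correlation between response magnitude and age in hours. As a rough illustration only, with entirely synthetic data and an assumed sample size:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical data: each newborn's age in hours at test and the size of
# its face-selective haemodynamic response (arbitrary units).
age_hours = rng.uniform(24, 120, size=18)
response  = 0.01 * age_hours + rng.normal(0, 0.3, size=18)

r, p = stats.pearsonr(age_hours, response)
print(f"response vs. age in hours: r = {r:.2f}, p = {p:.4f}")
```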
Collapse
Affiliation(s)
- Teresa Farroni
- Dipartimento di Psicologia dello Sviluppo e della Socializzazione, Università di Padova, Padova, Italy
- Centre for Brain and Cognitive Development, Birkbeck, University of London, United Kingdom
| | | | | | | | | | | | | | | | | |
Collapse
|
27
|
Alho K, Rinne T, Herron TJ, Woods DL. Stimulus-dependent activations and attention-related modulations in the auditory cortex: a meta-analysis of fMRI studies. Hear Res 2013; 307:29-41. [PMID: 23938208 DOI: 10.1016/j.heares.2013.08.001] [Citation(s) in RCA: 99] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/17/2013] [Revised: 07/22/2013] [Accepted: 08/01/2013] [Indexed: 11/28/2022]
Abstract
We meta-analyzed 115 functional magnetic resonance imaging (fMRI) studies reporting auditory-cortex (AC) coordinates for activations related to active and passive processing of the pitch and spatial location of non-speech sounds, as well as to active and passive speech and voice processing. We aimed to reveal any systematic differences between the AC surface locations of these activations by statistically analyzing the activation loci using the open-source Matlab toolbox VAMCA (Visualization and Meta-analysis on Cortical Anatomy). AC activations associated with pitch processing (e.g., active or passive listening to tones with a varying vs. fixed pitch) had median loci in the middle superior temporal gyrus (STG), lateral to Heschl's gyrus. However, median loci of activations due to the processing of infrequent pitch changes in a tone stream were centered in the STG or planum temporale (PT), significantly posterior to the median loci for other types of pitch processing. Median loci of attention-related modulations due to focused attention to pitch (e.g., attending selectively to low or high tones delivered in concurrent sequences) were, in turn, centered in the STG or superior temporal sulcus (STS), posterior to the median loci for passive pitch processing. Activations due to spatial processing were centered in the posterior STG or PT, significantly posterior to the pitch-processing loci (processing of infrequent pitch changes excluded). In the right-hemisphere AC, the median locus of spatial attention-related modulations was in the STS, significantly inferior to the median locus for passive spatial processing. Activations associated with speech processing and those associated with voice processing had indistinguishable median loci at the border of the mid-STG and mid-STS, and median loci of attention-related modulations due to attention to speech fell in the same mid-STG/STS region. Thus, while attention to the pitch or location of non-speech sounds appears to recruit AC areas less involved in passive pitch or location processing, focused attention to speech predominantly enhances activations in regions that already respond to human vocalizations during passive listening. This suggests that distinct attention mechanisms are engaged by attention to speech and by attention to more elemental auditory features such as tone pitch or location. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
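VAMCA itself is a Matlab toolbox; as a rough illustration of the median-locus logic it applies, the Python sketch below computes the median surface coordinate for two categories of activation peaks and tests their posterior-anterior separation with a label-permutation test. Coordinates, category sizes, and offsets are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 2-D cortical-surface coordinates (posterior-anterior,
# inferior-superior) for activation peaks from two study categories.
pitch   = rng.normal([0.0, 0.0], 0.8, size=(40, 2))   # e.g., passive pitch loci
spatial = rng.normal([-1.0, 0.2], 0.8, size=(35, 2))  # e.g., spatial-processing loci

def median_locus(xy):
    return np.median(xy, axis=0)

# Observed posterior-anterior separation of the two median loci.
obs = median_locus(pitch)[0] - median_locus(spatial)[0]

# Permutation test: shuffle category labels and recompute the separation.
pooled = np.vstack([pitch, spatial])
n1 = len(pitch)
null = np.empty(5000)
for i in range(5000):
    perm = rng.permutation(len(pooled))
    null[i] = (median_locus(pooled[perm[:n1]])[0]
               - median_locus(pooled[perm[n1:]])[0])
p = np.mean(np.abs(null) >= abs(obs))
print(f"median separation = {obs:.2f}, permutation p = {p:.4f}")
```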
Collapse
Affiliation(s)
- Kimmo Alho
- Helsinki Collegium for Advanced Studies, University of Helsinki, PO Box 4, FI 00014 Helsinki, Finland; Institute of Behavioural Sciences, University of Helsinki, PO Box 9, FI 00014 Helsinki, Finland.
| | | | | | | |
Collapse
|
28
|
Abrams DA, Lynch CJ, Cheng KM, Phillips J, Supekar K, Ryali S, Uddin LQ, Menon V. Underconnectivity between voice-selective cortex and reward circuitry in children with autism. Proc Natl Acad Sci U S A 2013; 110:12060-5. [PMID: 23776244 PMCID: PMC3718181 DOI: 10.1073/pnas.1302982110] [Citation(s) in RCA: 166] [Impact Index Per Article: 15.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022] Open
Abstract
Individuals with autism spectrum disorders (ASDs) often show insensitivity to the human voice, an impairment thought to play a key role in the communication deficits of this population. The social motivation theory of ASD predicts that impaired function of reward and emotional systems impedes children with ASD from actively engaging with speech. Here we explore this theory by investigating the distributed brain systems underlying human voice perception in children with ASD. Using resting-state functional MRI data acquired from 20 children with ASD and 19 age- and intelligence-quotient-matched typically developing children, we examined the intrinsic functional connectivity of the voice-selective bilateral posterior superior temporal sulcus (pSTS). Children with ASD showed a striking pattern of underconnectivity between the left-hemisphere pSTS and distributed nodes of the dopaminergic reward pathway, including the bilateral ventral tegmental areas and nucleus accumbens, left-hemisphere insula, orbitofrontal cortex, and ventromedial prefrontal cortex. Children with ASD also showed underconnectivity between the right-hemisphere pSTS, a region known for processing speech prosody, and the orbitofrontal cortex and amygdala, brain regions critical for emotion-related associative learning. The degree of underconnectivity between voice-selective cortex and reward pathways predicted symptom severity for communication deficits in children with ASD. Our results suggest that weak connectivity between voice-selective cortex and brain structures involved in reward and emotion may impair the ability of children with ASD to experience speech as a pleasurable stimulus, thereby affecting language and social skill development in this population. Our study provides support for the social motivation theory of ASD.
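Seed-based intrinsic connectivity of the kind described can be sketched as: correlate the seed time series with a target-region time series, Fisher z-transform, and compare groups. The data below are simulated and the coupling values, scan length, and group sizes are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def seed_connectivity(seed_ts, target_ts):
    """Fisher z of the Pearson correlation between two time series."""
    r = np.corrcoef(seed_ts, target_ts)[0, 1]
    return np.arctanh(r)

def simulate_group(n, coupling):
    # One pSTS seed and one reward-region (e.g., nucleus accumbens)
    # time series per child, 180 volumes each; target is built so its
    # correlation with the seed is approximately `coupling`.
    z = []
    for _ in range(n):
        seed = rng.standard_normal(180)
        target = coupling * seed + np.sqrt(1 - coupling**2) * rng.standard_normal(180)
        z.append(seed_connectivity(seed, target))
    return np.array(z)

td  = simulate_group(19, coupling=0.45)   # typically developing
asd = simulate_group(20, coupling=0.15)   # ASD: weaker seed-target coupling

t, p = stats.ttest_ind(td, asd)
print(f"TD > ASD connectivity: t = {t:.2f}, p = {p:.4f}")
```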
Collapse
Affiliation(s)
| | | | | | | | | | | | | | - Vinod Menon
- Departments of Psychiatry and Behavioral Sciences and
- Neurology and Neurological Sciences
- Program in Neuroscience, and
- Stanford Institute for Neuro-Innovation and Translational Neurosciences, Stanford University School of Medicine, Palo Alto, CA 94304
| |
Collapse
|
29
|
Action-verb processing in Parkinson’s disease: new pathways for motor–language coupling. Brain Struct Funct 2013; 218:1355-73. [DOI: 10.1007/s00429-013-0510-1] [Citation(s) in RCA: 84] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2012] [Accepted: 01/23/2013] [Indexed: 10/27/2022]
|