51
Fisher JM, Dick FK, Levy DF, Wilson SM. Neural representation of vowel formants in tonotopic auditory cortex. Neuroimage 2018; 178:574-582. [PMID: 29860083] [DOI: 10.1016/j.neuroimage.2018.05.072]
Abstract
Speech sounds are encoded by distributed patterns of activity in bilateral superior temporal cortex. However, it is unclear whether speech sounds are topographically represented in cortex, or which acoustic or phonetic dimensions might be spatially mapped. Here, using functional MRI, we investigated the potential spatial representation of vowels, which are largely distinguished from one another by the frequencies of their first and second formants, i.e. peaks in their frequency spectra. This allowed us to generate clear hypotheses about the representation of specific vowels in tonotopic regions of auditory cortex. We scanned participants as they listened to multiple natural tokens of the vowels [ɑ] and [i], which we selected because their first and second formants overlap minimally. Formant-based regions of interest were defined for each vowel based on spectral analysis of the vowel stimuli and independently acquired tonotopic maps for each participant. We found that perception of [ɑ] and [i] yielded differential activation of tonotopic regions corresponding to formants of [ɑ] and [i], such that each vowel was associated with increased signal in tonotopic regions corresponding to its own formants. This pattern was observed in Heschl's gyrus and the superior temporal gyrus, in both hemispheres, and for both the first and second formants. Using linear discriminant analysis of mean signal change in formant-based regions of interest, the identity of untrained vowels was predicted with ∼73% accuracy. Our findings show that cortical encoding of vowels is scaffolded on tonotopy, a fundamental organizing principle of auditory cortex that is not language-specific.
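The classification step this abstract describes (linear discriminant analysis on mean signal change in formant-based regions of interest, evaluated on untrained vowels) can be illustrated with a toy sketch. Everything below is synthetic and hypothetical: the four ROI features, trial counts, and effect sizes are invented for illustration and are not the study's data or pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the study's features: mean signal change in four
# formant-based ROIs (F1 and F2 regions for [a] and [i]), one row per trial.
n = 40
X_a = rng.normal([1.0, 0.8, 0.2, 0.1], 0.2, size=(n, 4))  # [a] trials
X_i = rng.normal([0.2, 0.1, 1.0, 0.9], 0.2, size=(n, 4))  # [i] trials

def fit_lda(X0, X1):
    """Fisher's two-class LDA: w = pooled_covariance^-1 (mu1 - mu0)."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    pooled = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(pooled, mu1 - mu0)
    threshold = w @ (mu0 + mu1) / 2  # decision boundary at the midpoint
    return w, threshold

# Train on even-numbered trials; classify the held-out ("untrained") odd trials.
w, c = fit_lda(X_a[::2], X_i[::2])
correct_a = X_a[1::2] @ w < c   # projected [a] trials fall below the boundary
correct_i = X_i[1::2] @ w >= c  # projected [i] trials fall at or above it
accuracy = np.concatenate([correct_a, correct_i]).mean()
```

With cleanly separated synthetic classes the held-out accuracy is near ceiling; the ∼73% reported in the study reflects far noisier real fMRI responses.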
Affiliation(s)
- Julia M Fisher
- Department of Linguistics, University of Arizona, Tucson, AZ, USA; Statistics Consulting Laboratory, BIO5 Institute, University of Arizona, Tucson, AZ, USA
- Frederic K Dick
- Department of Psychological Sciences, Birkbeck College, University of London, UK; Birkbeck-UCL Center for Neuroimaging, London, UK; Department of Experimental Psychology, University College London, UK
- Deborah F Levy
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Stephen M Wilson
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
52
Mukherjee D, Ignatowska-Jankowska BM, Itskovits E, Gonzales BJ, Turm H, Izakson L, Haritan D, Bleistein N, Cohen C, Amit I, Shay T, Grueter B, Zaslaver A, Citri A. Salient experiences are represented by unique transcriptional signatures in the mouse brain. eLife 2018; 7:e31220. [PMID: 29412137] [PMCID: PMC5862526] [DOI: 10.7554/elife.31220]
Abstract
It is well established that inducible transcription is essential for the consolidation of salient experiences into long-term memory. However, whether inducible transcription relays information about the identity and affective attributes of the experience being encoded has not been explored. To this end, we analyzed transcription induced by a variety of rewarding and aversive experiences, across multiple brain regions. Our results describe the existence of robust transcriptional signatures uniquely representing distinct experiences, enabling near-perfect decoding of recent experiences. Furthermore, experiences with shared attributes display commonalities in their transcriptional signatures, exemplified in the representation of valence, habituation and reinforcement. This study introduces the concept of a neural transcriptional code, which represents the encoding of experiences in the mouse brain. This code comprises distinct transcriptional signatures that correlate with attributes of the experiences that are being committed to long-term memory.
Affiliation(s)
- Diptendu Mukherjee
- Department of Biological Chemistry, Silberman Institute of Life Sciences, The Hebrew University, Jerusalem, Israel
- Eyal Itskovits
- Department of Genetics, Silberman Institute of Life Sciences, The Hebrew University, Jerusalem, Israel
- School of Computer Science and Engineering, The Hebrew University, Jerusalem, Israel
- Ben Jerry Gonzales
- Department of Biological Chemistry, Silberman Institute of Life Sciences, The Hebrew University, Jerusalem, Israel
- Hagit Turm
- Department of Biological Chemistry, Silberman Institute of Life Sciences, The Hebrew University, Jerusalem, Israel
- Liz Izakson
- Department of Biological Chemistry, Silberman Institute of Life Sciences, The Hebrew University, Jerusalem, Israel
- Doron Haritan
- Department of Biological Chemistry, Silberman Institute of Life Sciences, The Hebrew University, Jerusalem, Israel
- Noa Bleistein
- Department of Biological Chemistry, Silberman Institute of Life Sciences, The Hebrew University, Jerusalem, Israel
- Chen Cohen
- Department of Biological Chemistry, Silberman Institute of Life Sciences, The Hebrew University, Jerusalem, Israel
- Ido Amit
- Department of Immunology, Weizmann Institute of Science, Rehovot, Israel
- Tal Shay
- Department of Life Sciences, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Brad Grueter
- Department of Psychiatry, Vanderbilt University School of Medicine, Nashville, United States
- Alon Zaslaver
- Department of Genetics, Silberman Institute of Life Sciences, The Hebrew University, Jerusalem, Israel
- Ami Citri
- Department of Biological Chemistry, Silberman Institute of Life Sciences, The Hebrew University, Jerusalem, Israel
- The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University, Jerusalem, Israel
- Child and Brain Development Program, Canadian Institute for Advanced Research, Toronto, Canada
53
Focal versus distributed temporal cortex activity for speech sound category assignment. Proc Natl Acad Sci U S A 2018; 115:E1299-E1308. [PMID: 29363598] [PMCID: PMC5819402] [DOI: 10.1073/pnas.1714279115]
Abstract
When listening to speech, phonemes are represented in a distributed fashion in our temporal and prefrontal cortices. How these representations are selected in a phonemic decision context, and in particular whether distributed or focal neural information is required for explicit phoneme recognition, is unclear. We hypothesized that focal and early neural encoding of acoustic signals is sufficiently informative to access speech sound representations and permit phoneme recognition. We tested this hypothesis by combining a simple speech-phoneme categorization task with univariate and multivariate analyses of fMRI, magnetoencephalography, intracortical, and clinical data. We show that neural information available focally in the temporal cortex prior to decision-related neural activity is specific enough to account for human phonemic identification. Percepts and words can be decoded from distributed neural activity measures. However, the existence of widespread representations might conflict with the more classical notions of hierarchical processing and efficient coding, which are especially relevant in speech processing. Using fMRI and magnetoencephalography during syllable identification, we show that sensory and decisional activity colocalize to a restricted part of the posterior superior temporal gyrus (pSTG). Next, using intracortical recordings, we demonstrate that early and focal neural activity in this region distinguishes correct from incorrect decisions and can be machine-decoded to classify syllables. Crucially, significant machine decoding was possible from neuronal activity sampled across different regions of the temporal and frontal lobes, despite weak or absent sensory or decision-related responses. These findings show that speech-sound categorization relies on an efficient readout of focal pSTG neural activity, while more distributed activity patterns, although classifiable by machine learning, instead reflect collateral processes of sensory perception and decision.
54
van Atteveldt N, van Kesteren MT, Braams B, Krabbendam L. Neuroimaging of learning and development: improving ecological validity. Frontline Learning Research 2018; 6:186-203. [PMID: 31799220] [PMCID: PMC6887532] [DOI: 10.14786/flr.v6i3.366]
Abstract
Modern neuroscience research, including neuroimaging techniques such as functional magnetic resonance imaging (fMRI), has provided valuable insights that have significantly advanced our understanding of brain development and learning processes. However, there is a lively discussion about whether and how these insights can be meaningful to educational practice. One of the main challenges is the low ecological validity of neuroimaging studies, which makes it hard to translate neuroimaging findings to real-life learning situations. Here, we describe four approaches that increase the ecological validity of neuroimaging experiments: using more naturalistic stimuli and tasks; moving the research to more naturalistic settings by using portable neuroimaging devices; combining tightly controlled lab-based neuroimaging measurements with real-life variables and follow-up field studies; and including stakeholders from educational practice at all stages of the research. We illustrate these approaches with examples and explain how these directions of research optimize the benefits of neuroimaging techniques for studying learning and development. This paper provides a frontline overview of methodological approaches that can be used in future neuroimaging studies to increase their ecological validity and thereby their relevance and applicability to educational practice.
Affiliation(s)
- Nienke van Atteveldt
- Vrije Universiteit Amsterdam, The Netherlands; Institute Learn!, Vrije Universiteit Amsterdam, The Netherlands; Institute for Brain and Behavior Amsterdam (IBBA), The Netherlands
- Marlieke T.R. van Kesteren
- Vrije Universiteit Amsterdam, The Netherlands; Institute Learn!, Vrije Universiteit Amsterdam, The Netherlands; Institute for Brain and Behavior Amsterdam (IBBA), The Netherlands
- Barbara Braams
- Vrije Universiteit Amsterdam, The Netherlands; Institute Learn!, Vrije Universiteit Amsterdam, The Netherlands; Institute for Brain and Behavior Amsterdam (IBBA), The Netherlands
- Lydia Krabbendam
- Vrije Universiteit Amsterdam, The Netherlands; Institute Learn!, Vrije Universiteit Amsterdam, The Netherlands; Institute for Brain and Behavior Amsterdam (IBBA), The Netherlands
55
Rampinini AC, Handjaras G, Leo A, Cecchetti L, Ricciardi E, Marotta G, Pietrini P. Functional and spatial segregation within the inferior frontal and superior temporal cortices during listening, articulation imagery, and production of vowels. Sci Rep 2017; 7:17029. [PMID: 29208951] [PMCID: PMC5717247] [DOI: 10.1038/s41598-017-17314-0]
Abstract
Classical models of language localize speech perception in the left superior temporal cortex and production in the inferior frontal cortex. Nonetheless, neuropsychological, structural and functional studies have questioned such a subdivision, suggesting an interwoven organization of the speech function within these cortices. We tested whether sub-regions within frontal and temporal speech-related areas retain specific phonological representations during both perception and production. Using functional magnetic resonance imaging and multivoxel pattern analysis, we showed functional and spatial segregation across the left fronto-temporal cortex during listening, imagery and production of vowels. In accordance with classical models of language and evidence from functional studies, the inferior frontal and superior temporal cortices discriminated among perceived and produced vowels, respectively, while also engaging in the non-classical, alternative function, i.e. perception in the inferior frontal and production in the superior temporal cortex. Crucially, though, contiguous and non-overlapping sub-regions within these hubs performed either the classical or the non-classical function, the latter also representing non-linguistic sounds (i.e., pure tones). Extending previous results and in line with integration theories, our findings not only demonstrate that sensitivity to speech listening exists in production-related regions and vice versa, but also suggest that this interwoven organization is built upon low-level perception.
Affiliation(s)
- Andrea Leo
- IMT School for Advanced Studies, Lucca, 55100, Italy
- Giovanna Marotta
- Department of Philology, Literature and Linguistics, University of Pisa, Pisa, 56100, Italy
56
Chang KH, Thomas JM, Boynton GM, Fine I. Reconstructing Tone Sequences from Functional Magnetic Resonance Imaging Blood-Oxygen Level Dependent Responses within Human Primary Auditory Cortex. Front Psychol 2017; 8:1983. [PMID: 29184522] [PMCID: PMC5694557] [DOI: 10.3389/fpsyg.2017.01983]
Abstract
Here we show that, using functional magnetic resonance imaging (fMRI) blood-oxygen level dependent (BOLD) responses in human primary auditory cortex, it is possible to reconstruct the sequence of tones that a person has been listening to over time. First, we characterized the tonotopic organization of each subject’s auditory cortex by measuring auditory responses to randomized pure tone stimuli and modeling the frequency tuning of each fMRI voxel as a Gaussian in log frequency space. Then, we tested our model by examining its ability to work in reverse. Auditory responses were re-collected in the same subjects, except this time they listened to sequences of frequencies taken from simple songs (e.g., “Somewhere Over the Rainbow”). By finding the frequency that minimized the difference between the model’s prediction of BOLD responses and actual BOLD responses, we were able to reconstruct tone sequences, with mean frequency estimation errors of half an octave or less, and little evidence of systematic biases.
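The two-step procedure in this abstract (fit a Gaussian tuning curve in log-frequency space per voxel, then run the model "in reverse" to recover the presented tone) can be sketched with a toy simulation. The voxel counts, tuning parameters, tone sequence, and noise level below are all invented for illustration, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tonotopic "voxels": each has Gaussian frequency tuning in
# log2 space, the same functional form as the forward model in the abstract.
n_vox = 50
centers = rng.uniform(np.log2(200), np.log2(3200), n_vox)  # preferred log2(Hz)
widths = rng.uniform(0.5, 1.5, n_vox)                      # tuning width (octaves)

def predict(freq_hz):
    """Predicted response of every voxel to a pure tone of `freq_hz`."""
    return np.exp(-((np.log2(freq_hz) - centers) ** 2) / (2 * widths ** 2))

# Simulate noisy responses to a short tone sequence, then invert the model:
# for each tone, grid-search the frequency whose predicted response pattern
# best matches the observed pattern in the least-squares sense.
sequence_hz = np.array([440.0, 587.0, 880.0])
grid = np.logspace(np.log10(200), np.log10(3200), 400)

reconstructed = []
for f in sequence_hz:
    observed = predict(f) + rng.normal(0, 0.05, n_vox)
    errors = [np.sum((predict(g) - observed) ** 2) for g in grid]
    reconstructed.append(grid[int(np.argmin(errors))])

# Frequency estimation error in octaves, one value per tone.
err_octaves = np.abs(np.log2(np.array(reconstructed) / sequence_hz))
```

With this much signal the toy errors are far smaller than the half-octave bound reported in the study; real BOLD data are much noisier, but the inversion logic is the same.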
Affiliation(s)
- Kelly H Chang
- Department of Psychology, University of Washington, Seattle, WA, United States
- Jessica M Thomas
- Department of Psychology, University of Washington, Seattle, WA, United States
- Geoffrey M Boynton
- Department of Psychology, University of Washington, Seattle, WA, United States
- Ione Fine
- Department of Psychology, University of Washington, Seattle, WA, United States
57
De Angelis V, De Martino F, Moerel M, Santoro R, Hausfeld L, Formisano E. Cortical processing of pitch: Model-based encoding and decoding of auditory fMRI responses to real-life sounds. Neuroimage 2017; 180:291-300. [PMID: 29146377] [DOI: 10.1016/j.neuroimage.2017.11.020]
Abstract
Pitch is a perceptual attribute related to the fundamental frequency (or periodicity) of a sound. So far, the cortical processing of pitch has been investigated mostly using synthetic sounds. However, the complex harmonic structure of natural sounds may require different mechanisms for the extraction and analysis of pitch. This study investigated the neural representation of pitch in human auditory cortex using model-based encoding and decoding analyses of high field (7 T) functional magnetic resonance imaging (fMRI) data collected while participants listened to a wide range of real-life sounds. Specifically, we modeled the fMRI responses as a function of the sounds' perceived pitch height and salience (related to the fundamental frequency and the harmonic structure respectively), which we estimated with a computational algorithm of pitch extraction (de Cheveigné and Kawahara, 2002). First, using single-voxel fMRI encoding, we identified a pitch-coding region in the antero-lateral Heschl's gyrus (HG) and adjacent superior temporal gyrus (STG). In these regions, the pitch representation model combining height and salience predicted the fMRI responses comparatively better than other models of acoustic processing and, in the right hemisphere, better than pitch representations based on height/salience alone. Second, we assessed with model-based decoding that multi-voxel response patterns of the identified regions are more informative of perceived pitch than the remainder of the auditory cortex. Further multivariate analyses showed that complementing a multi-resolution spectro-temporal sound representation with pitch produces a small but significant improvement to the decoding of complex sounds from fMRI response patterns. In sum, this work extends model-based fMRI encoding and decoding methods, previously employed to examine the representation and processing of acoustic sound features in the human auditory system, to the representation and processing of a relevant perceptual attribute such as pitch. Taken together, the results of our model-based encoding and decoding analyses indicated that the pitch of complex real-life sounds is extracted and processed in lateral HG/STG regions, at locations consistent with those indicated in several previous fMRI studies using synthetic sounds. Within these regions, pitch-related sound representations reflect the modulatory combination of height and the salience of the pitch percept.
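The encoding-model comparison at the heart of this abstract (does a height-plus-salience model predict a voxel's responses better than simpler feature sets?) can be sketched in a few lines. The simulated voxel, feature distributions, and weights below are hypothetical; the study's actual models and estimation procedure differ.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical single-voxel encoding model: predict a voxel's response to
# natural sounds from pitch height and pitch salience via ordinary least
# squares, then compare explained variance (R^2) across feature sets.
n_sounds = 100
height = rng.uniform(np.log2(100), np.log2(1000), n_sounds)  # log2(F0) in Hz
salience = rng.uniform(0.0, 1.0, n_sounds)                   # harmonic salience

# Simulated "pitch-coding voxel": driven by a combination of both features.
response = 0.7 * height + 1.5 * salience + rng.normal(0.0, 0.3, n_sounds)

def r_squared(features):
    """In-sample R^2 of an OLS fit of the voxel response on `features`."""
    X = np.column_stack([features, np.ones(n_sounds)])  # add an intercept
    beta, *_ = np.linalg.lstsq(X, response, rcond=None)
    resid = response - X @ beta
    return 1.0 - resid.var() / response.var()

r2_both = r_squared(np.column_stack([height, salience]))
r2_height = r_squared(height.reshape(-1, 1))
# The combined height + salience model explains more variance than height alone.
```

In practice one would compare cross-validated rather than in-sample prediction accuracy, as encoding studies typically do, but the model-comparison logic is the same.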
Affiliation(s)
- Vittoria De Angelis
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, The Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht University, Maastricht, The Netherlands
- Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, The Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht University, Maastricht, The Netherlands; Center for Magnetic Resonance Research, University of Minnesota Medical School, 2021 Sixth Street SE, Minneapolis, MN 55455, United States
- Michelle Moerel
- Maastricht Centre for Systems Biology (MaCSBio), Maastricht University, The Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, The Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht University, Maastricht, The Netherlands
- Roberta Santoro
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, The Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht University, Maastricht, The Netherlands
- Lars Hausfeld
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, The Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht University, Maastricht, The Netherlands
- Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, The Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht University, Maastricht, The Netherlands; Maastricht Centre for Systems Biology (MaCSBio), Maastricht University, The Netherlands
58
Evidence for cue-independent spatial representation in the human auditory cortex during active listening. Proc Natl Acad Sci U S A 2017; 114:E7602-E7611. [PMID: 28827357] [DOI: 10.1073/pnas.1707522114]
Abstract
Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues, particularly interaural time and level differences (ITD and ILD), that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and, critically, for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.
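The cross-cue classification result described above (a decoder trained on trials of one cue type transfers to the other) can be sketched with a toy simulation of the integrated-code hypothesis. The voxel patterns, noise level, and the nearest-centroid decoder below are hypothetical stand-ins for the study's data and multivoxel pattern classifier.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy simulation of the integrated-code hypothesis: left vs. right sources
# evoke the same multivoxel pattern whether the location is signaled by ITD
# or by ILD, so a decoder trained on one cue should transfer to the other.
n_vox, n_trials = 30, 20
side_pattern = {"L": rng.normal(0, 1, n_vox), "R": rng.normal(0, 1, n_vox)}

def simulate_trials(noise_sd):
    """Noisy single-trial voxel patterns for left and right sources."""
    X, y = [], []
    for side in ("L", "R"):
        X.append(side_pattern[side] + rng.normal(0, noise_sd, (n_trials, n_vox)))
        y += [side] * n_trials
    return np.vstack(X), np.array(y)

X_itd, y_itd = simulate_trials(0.5)  # trials where location was cued by ITD
X_ild, y_ild = simulate_trials(0.5)  # held-out trials cued by ILD

# Cross-cue decoding with a nearest-centroid classifier: fit side centroids
# on the ITD trials, then classify the ILD trials against those centroids.
centroids = {s: X_itd[y_itd == s].mean(axis=0) for s in ("L", "R")}
pred = np.array([min(centroids, key=lambda s: np.linalg.norm(x - centroids[s]))
                 for x in X_ild])
accuracy = float((pred == y_ild).mean())
```

If the two cue types instead evoked unrelated patterns (cue-specific coding), this cross-cue accuracy would fall to chance, which is exactly the contrast the study exploits.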
59
Reading-induced shifts of perceptual speech representations in auditory cortex. Sci Rep 2017; 7:5143. [PMID: 28698606] [PMCID: PMC5506038] [DOI: 10.1038/s41598-017-05356-3]
Abstract
Learning to read requires the formation of efficient neural associations between written and spoken language. Whether these associations influence the auditory cortical representation of speech remains unknown. Here we address this question by combining multivariate functional MRI analysis and a newly-developed ‘text-based recalibration’ paradigm. In this paradigm, the pairing of visual text and ambiguous speech sounds shifts (i.e. recalibrates) the perceptual interpretation of the ambiguous sounds in subsequent auditory-only trials. We show that it is possible to retrieve the text-induced perceptual interpretation from fMRI activity patterns in the posterior superior temporal cortex. Furthermore, this auditory cortical region showed significant functional connectivity with the inferior parietal lobe (IPL) during the pairing of text with ambiguous speech. Our findings indicate that reading-related audiovisual mappings can adjust the auditory cortical representation of speech in typically reading adults. Additionally, they suggest the involvement of the IPL in audiovisual and/or higher-order perceptual processes leading to this adjustment. When applied in typical and dyslexic readers of different ages, our text-based recalibration paradigm may reveal relevant aspects of perceptual learning and plasticity during successful and failing reading development.