1
Kuchinsky SE, Razeghi N, Pandža NB. Auditory, Lexical, and Multitasking Demands Interactively Impact Listening Effort. J Speech Lang Hear Res 2023; 66:4066-4082. PMID: 37672797; PMCID: PMC10713022; DOI: 10.1044/2023_jslhr-22-00548.
Abstract
PURPOSE This study examined the extent to which acoustic, linguistic, and cognitive task demands interactively impact listening effort. METHOD Using a dual-task paradigm, on each trial, participants were instructed to perform either a single task or two tasks. In the primary word recognition task, participants repeated Northwestern University Auditory Test No. 6 words presented in speech-shaped noise at either an easier or a harder signal-to-noise ratio (SNR). The words varied in how commonly they occur in the English language (lexical frequency). In the secondary visual task, participants were instructed to press a specific key as soon as a number appeared on screen (simpler task) or one of two keys to indicate whether the visualized number was even or odd (more complex task). RESULTS Manipulation checks revealed that key assumptions of the dual-task design were met. A significant three-way interaction was observed, such that the expected effect of SNR on effort was only observable for words with lower lexical frequency and only when multitasking demands were relatively simpler. CONCLUSIONS This work reveals that variability across speech stimuli can influence the sensitivity of the dual-task paradigm for detecting changes in listening effort. In line with previous work, the results of this study also suggest that higher cognitive demands may limit the ability to detect expected effects of SNR on measures of effort. With implications for real-world listening, these findings highlight that even relatively minor changes in lexical and multitasking demands can alter the effort devoted to listening in noise.
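The easier and harder SNR conditions above are imposed by rescaling the speech-shaped noise masker relative to each target word. The study does not publish its mixing code, so the following is only a minimal sketch of how a target SNR is typically set; the function name and sampling details are illustrative, not the authors' implementation.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Return speech plus noise, with the noise rescaled so that the
    long-term speech-to-noise power ratio equals snr_db (in dB)."""
    noise = noise[: len(speech)]  # match masker length to the target word
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Solve 10*log10(p_speech / (gain**2 * p_noise)) = snr_db for the gain.
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise
```

Lowering `snr_db` (the study's harder condition) raises the masker level while the speech level stays fixed.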
Affiliation(s)
- Stefanie E. Kuchinsky
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD
- Applied Research Laboratory for Intelligence and Security, University of Maryland, College Park
- Department of Hearing and Speech Sciences, University of Maryland, College Park
- Niki Razeghi
- Department of Hearing and Speech Sciences, University of Maryland, College Park
- Nick B. Pandža
- Applied Research Laboratory for Intelligence and Security, University of Maryland, College Park
- Program in Second Language Acquisition, University of Maryland, College Park
- Maryland Language Science Center, University of Maryland, College Park
2
Abstract
Human speech perception results from neural computations that transform external acoustic speech signals into internal representations of words. The superior temporal gyrus (STG) contains the nonprimary auditory cortex and is a critical locus for phonological processing. Here, we describe how speech sound representation in the STG relies on fundamentally nonlinear and dynamical processes, such as categorization, normalization, contextual restoration, and the extraction of temporal structure. A spatial mosaic of local cortical sites on the STG exhibits complex auditory encoding for distinct acoustic-phonetic and prosodic features. We propose that as a population ensemble, these distributed patterns of neural activity give rise to abstract, higher-order phonemic and syllabic representations that support speech perception. This review presents a multi-scale, recurrent model of phonological processing in the STG, highlighting the critical interface between auditory and language systems. Expected final online publication date for the Annual Review of Psychology, Volume 73 is January 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Affiliation(s)
- Ilina Bhaya-Grossman
- Department of Neurological Surgery, University of California, San Francisco, California 94143, USA; Joint Graduate Program in Bioengineering, University of California, Berkeley and San Francisco, California 94720, USA
- Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, California 94143, USA
3
Blank IA, Fedorenko E. No evidence for differences among language regions in their temporal receptive windows. Neuroimage 2020; 219:116925. PMID: 32407994; DOI: 10.1016/j.neuroimage.2020.116925.
Abstract
The "core language network" consists of left frontal and temporal regions that are selectively engaged in linguistic processing. Whereas functional differences among these regions have long been debated, many accounts propose distinctions in terms of representational grain size (e.g., words vs. phrases/sentences) or processing time scale, i.e., operating on local linguistic features vs. larger spans of input. Indeed, the topography of language regions appears to overlap with a cortical hierarchy reported by Lerner et al. (2011), wherein mid-posterior temporal regions are sensitive to low-level features of speech, surrounding areas to word-level information, and inferior frontal areas to sentence-level information and beyond. However, the correspondence between the language network and this hierarchy of "temporal receptive windows" (TRWs) is difficult to establish because the precise anatomical locations of language regions vary across individuals. To directly test this correspondence, we first identified language regions in each participant with a well-validated task-based localizer, which confers high functional resolution to the study of TRWs (traditionally based on stereotactic coordinates); then, we characterized regional TRWs with the naturalistic story-listening paradigm of Lerner et al. (2011), which augments task-based characterizations of the language network by more closely resembling comprehension "in the wild". We find no region-by-TRW interactions across temporal and inferior frontal regions, which are all sensitive to both word-level and sentence-level information. Therefore, the language network as a whole constitutes a unique stage of information integration within a broader cortical hierarchy.
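Temporal receptive windows in the Lerner-style paradigm are typically quantified by how reliably a region's time course is shared across listeners under progressively scrambled stories. The paper's own analysis is more involved; this is only a sketch of the leave-one-out inter-subject correlation (ISC) at the core of such characterizations, with illustrative names.

```python
import numpy as np

def isc(timecourses):
    """Leave-one-out inter-subject correlation.

    timecourses: array of shape (n_subjects, n_timepoints) holding one
    region's BOLD time course per subject. Each subject is correlated
    with the average time course of the remaining subjects, and the
    per-subject correlations are averaged."""
    n = timecourses.shape[0]
    rs = []
    for i in range(n):
        others = timecourses[np.arange(n) != i].mean(axis=0)
        rs.append(np.corrcoef(timecourses[i], others)[0, 1])
    return float(np.mean(rs))
```

On this view, a long-TRW region should show high ISC only for intact stories, while a short-TRW region remains reliable even for word-scrambled input.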
Affiliation(s)
- Idan A Blank
- Department of Brain and Cognitive Sciences and McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA.
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences and McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
4
Shain C, Blank IA, van Schijndel M, Schuler W, Fedorenko E. fMRI reveals language-specific predictive coding during naturalistic sentence comprehension. Neuropsychologia 2020; 138:107307. PMID: 31874149; PMCID: PMC7140726; DOI: 10.1016/j.neuropsychologia.2019.107307.
Abstract
Much research in cognitive neuroscience supports prediction as a canonical computation of cognition across domains. Is such predictive coding implemented by feedback from higher-order domain-general circuits, or is it locally implemented in domain-specific circuits? What information sources are used to generate these predictions? This study addresses these two questions in the context of language processing. We present fMRI evidence from a naturalistic comprehension paradigm (1) that predictive coding in the brain's response to language is domain-specific, and (2) that these predictions are sensitive both to local word co-occurrence patterns and to hierarchical structure. Using a recently developed continuous-time deconvolutional regression technique that supports data-driven hemodynamic response function discovery from continuous BOLD signal fluctuations in response to naturalistic stimuli, we found effects of prediction measures in the language network but not in the domain-general multiple-demand network, which supports executive control processes and has been previously implicated in language comprehension. Moreover, within the language network, surface-level and structural prediction effects were separable. The predictability effects in the language network were substantial, with the model capturing over 37% of explainable variance on held-out data. These findings indicate that human sentence processing mechanisms generate predictions about upcoming words using cognitive processes that are sensitive to hierarchical structure and specialized for language processing, rather than via feedback from high-level executive control mechanisms.
Affiliation(s)
- Idan Asher Blank
- University of California Los Angeles, 90024, USA; Massachusetts Institute of Technology, 02139, USA.
- William Schuler
- The Ohio State University, 43210, USA; Massachusetts General Hospital, Program in Speech and Hearing Bioscience and Technology, 02115, USA.
- Evelina Fedorenko
- Massachusetts General Hospital, Program in Speech and Hearing Bioscience and Technology, 02115, USA.
5
Basirat A, Allart É, Brunellière A, Martin Y. Audiovisual speech segmentation in post-stroke aphasia: a pilot study. Top Stroke Rehabil 2019; 26:588-594. PMID: 31369358; DOI: 10.1080/10749357.2019.1643566.
Abstract
Background: Stroke may cause sentence comprehension disorders. Speech segmentation, i.e. the ability to detect word boundaries while listening to continuous speech, is an initial step allowing the successful identification of words and the accurate understanding of meaning within sentences. It has received little attention in people with post-stroke aphasia (PWA). Objectives: Our goal was to study speech segmentation in PWA and to examine the potential benefit of seeing the speaker's articulatory gestures while segmenting sentences. Methods: Fourteen PWA and twelve healthy controls participated in this pilot study. Performance was measured with a word-monitoring task. In the auditory-only modality, participants were presented with auditory-only stimuli, while in the audiovisual modality, visual speech cues (i.e. the speaker's articulatory gestures) accompanied the auditory input. The proportion of correct responses was calculated for each participant and each modality. Visual enhancement was then calculated in order to estimate the potential benefit of seeing the speaker's articulatory gestures. Results: In both the auditory-only and audiovisual modalities, PWA performed significantly less well than controls, who had 100% correct performance in both modalities. The performance of PWA was correlated with their phonological ability. Six PWA used the visual cues. Group-level analysis of PWA did not show any reliable difference between the auditory-only and audiovisual modalities (median visual enhancement = 7% [Q1-Q3: -5 to 39]). Conclusion: Our findings show that a speech segmentation disorder may exist in PWA. This points to the importance of assessing and training speech segmentation after stroke. Further studies should investigate the characteristics of PWA who use visual speech cues during sentence processing.
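The abstract reports visual enhancement as a percentage but does not spell out the formula. A common convention, assumed here rather than taken from the paper, normalizes the audiovisual gain by the headroom left above auditory-only performance:

```python
def visual_enhancement(auditory_only, audiovisual):
    """(AV - A) / (1 - A): the share of the available headroom recovered
    by adding visual speech cues. Inputs are proportions correct in [0, 1].
    This normalization is an assumption; the study may have computed the
    measure differently."""
    if auditory_only >= 1.0:
        return 0.0  # at ceiling there is no room left for visual benefit
    return (audiovisual - auditory_only) / (1.0 - auditory_only)
```

For example, a participant moving from 60% correct auditory-only to 70% audiovisual recovers a quarter of the remaining headroom: (0.70 - 0.60) / 0.40 = 0.25.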
Affiliation(s)
- Anahita Basirat
- UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, Univ. Lille, CNRS, CHU Lille, Lille, France
- Étienne Allart
- Neurorehabilitation Unit, Lille University Medical Center, Lille, France; Inserm U1171, University Lille, Degenerative and Vascular Cognitive Disorders, Lille, France
- Angèle Brunellière
- UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, Univ. Lille, CNRS, CHU Lille, Lille, France
6
Zhang M, Pratt SR, Doyle PJ, McNeil MR, Durrant JD, Roxberg J, Ortmann A. Audiological Assessment of Word Recognition Skills in Persons With Aphasia. Am J Audiol 2018; 27:1-18. PMID: 29222555; DOI: 10.1044/2017_aja-17-0041.
Abstract
PURPOSE The purpose of this study was to evaluate the ability of persons with aphasia, with and without hearing loss, to complete a commonly used open-set word recognition test that requires a verbal response. Furthermore, phonotactic probabilities and neighborhood densities of word recognition errors were assessed to explore potential underlying linguistic complexities that might differentially influence performance among groups. METHOD Four groups of adult participants were tested: participants with no brain injury and normal hearing, participants with no brain injury and hearing loss, participants with aphasia and normal hearing, and participants with aphasia and hearing loss. The Northwestern University Auditory Test No. 6 (NU-6; Tillman & Carhart, 1966) was administered. Participants who were unable to respond orally (repeating words as heard) were assessed with the Picture Identification Task (Wilson & Antablin, 1980), which permits a picture-pointing response instead. Error patterns from the NU-6 were assessed to determine whether phonotactic probability influenced performance. RESULTS All participants with no brain injury and 72.7% of the participants with aphasia (24 out of 33) completed the NU-6. Furthermore, all participants who were unable to complete the NU-6 were able to complete the Picture Identification Task. There were significant group differences in NU-6 performance. The two groups with normal hearing scored significantly higher than the two groups with hearing loss, but the groups did not differ within either hearing-status pair, implying that performance was largely determined by hearing loss rather than by brain injury or aphasia. The neighborhood density, but not the phonotactic probability, of the participants' errors differed between the groups with and without aphasia.
CONCLUSIONS Because the vast majority of the participants with aphasia could be tested readily with an instrument such as the NU-6, clinicians should not be reluctant to use this test with patients who are able to repeat single words, but routine use of alternative tests is encouraged for people with brain injuries.
Affiliation(s)
- Min Zhang
- Geriatric Research, Education, and Clinical Center, VA Pittsburgh Healthcare System, PA
- Department of Communication Science and Disorders, University of Pittsburgh, PA
- Sheila R. Pratt
- Geriatric Research, Education, and Clinical Center, VA Pittsburgh Healthcare System, PA
- Department of Communication Science and Disorders, University of Pittsburgh, PA
- Patrick J. Doyle
- Geriatric Research, Education, and Clinical Center, VA Pittsburgh Healthcare System, PA
- Department of Communication Science and Disorders, University of Pittsburgh, PA
- Malcolm R. McNeil
- Geriatric Research, Education, and Clinical Center, VA Pittsburgh Healthcare System, PA
- Department of Communication Science and Disorders, University of Pittsburgh, PA
- John D. Durrant
- Geriatric Research, Education, and Clinical Center, VA Pittsburgh Healthcare System, PA
- Department of Communication Science and Disorders, University of Pittsburgh, PA
- Jillyn Roxberg
- Geriatric Research, Education, and Clinical Center, VA Pittsburgh Healthcare System, PA
- Amanda Ortmann
- Geriatric Research, Education, and Clinical Center, VA Pittsburgh Healthcare System, PA
- Department of Communication Science and Disorders, University of Pittsburgh, PA
7
Söderström P, Horne M, Mannfolk P, van Westen D, Roll M. Tone-grammar association within words: Concurrent ERP and fMRI show rapid neural pre-activation and involvement of left inferior frontal gyrus in pseudoword processing. Brain Lang 2017; 174:119-126. PMID: 28850882; DOI: 10.1016/j.bandl.2017.08.004.
Abstract
Using a concurrent ERP/fMRI paradigm, we investigated how listeners take advantage of morphologically relevant tonal information at the beginning of words to predict and pre-activate likely word endings. More predictive, low-tone word stems gave rise to a 'pre-activation negativity' (PrAN) in the ERPs, a brain potential previously found to increase with the degree of certainty about how a word is going to end. We suggest that more predictive, low-tone stems lead to rapid access to word endings, with processing subserved by the left primary auditory cortex and the supramarginal gyrus, whereas less predictive, high-tone stems decrease predictive certainty and increase competition between activated word endings, which must be resolved by the left inferior frontal gyrus.
Affiliation(s)
- Pelle Söderström
- Department of Linguistics, Centre for Languages and Literature, Lund University, Box 201, 221 00 Lund, Sweden.
- Merle Horne
- Department of Linguistics, Centre for Languages and Literature, Lund University, Box 201, 221 00 Lund, Sweden.
- Peter Mannfolk
- Skane University Hospital, Department of Medical Imaging and Physiology, Lund, Sweden.
- Danielle van Westen
- Lund University, Skane University Hospital, Department of Clinical Sciences Lund, Diagnostic Radiology, Lund, Sweden.
- Mikael Roll
- Department of Linguistics, Centre for Languages and Literature, Lund University, Box 201, 221 00 Lund, Sweden.
8
Rogers JC, Davis MH. Inferior Frontal Cortex Contributions to the Recognition of Spoken Words and Their Constituent Speech Sounds. J Cogn Neurosci 2017; 29:919-936. PMID: 28129061; DOI: 10.1162/jocn_a_01096.
Abstract
Speech perception and comprehension are often challenged by the need to recognize speech sounds that are degraded or ambiguous. Here, we explore the cognitive and neural mechanisms involved in resolving ambiguity in the identity of speech sounds using syllables that contain ambiguous phonetic segments (e.g., intermediate sounds between /b/ and /g/ as in "blade" and "glade"). We used an audio-morphing procedure to create a large set of natural sounding minimal pairs that contain phonetically ambiguous onset or offset consonants (differing in place, manner, or voicing). These ambiguous segments occurred in different lexical contexts (i.e., in words or pseudowords, such as blade-glade or blem-glem) and in different phonological environments (i.e., with neighboring syllables that differed in lexical status, such as blouse-glouse). These stimuli allowed us to explore the impact of phonetic ambiguity on the speed and accuracy of lexical decision responses (Experiment 1), semantic categorization responses (Experiment 2), and the magnitude of BOLD fMRI responses during attentive comprehension (Experiment 3). For both behavioral and neural measures, observed effects of phonetic ambiguity were influenced by lexical context leading to slower responses and increased activity in the left inferior frontal gyrus for high-ambiguity syllables that distinguish pairs of words, but not for equivalent pseudowords. These findings suggest lexical involvement in the resolution of phonetic ambiguity. Implications for speech perception and the role of inferior frontal regions are discussed.
Affiliation(s)
- Jack C Rogers
- MRC Cognition & Brain Sciences Unit, Cambridge, UK; University of Birmingham
9
Language Mapping Using fMRI and Direct Cortical Stimulation for Brain Tumor Surgery: The Good, the Bad, and the Questionable. Top Magn Reson Imaging 2016; 25:1-10. PMID: 26848555; DOI: 10.1097/rmr.0000000000000074.
Abstract
Language functional magnetic resonance imaging (fMRI) for neurosurgical planning is a useful but nuanced technique: primary and secondary language anatomy, task selection, and data analysis choices all impact interpretation. In this chapter, we discuss practical considerations and nuances of language fMRI in support of, and in comparison with, the neurosurgical gold standard, direct cortical stimulation. Pitfalls and limitations are discussed.
10
Blumstein SE, Amso D. Dynamic Functional Organization of Language: Insights From Functional Neuroimaging. Perspect Psychol Sci 2015; 8:44-48. PMID: 25414726; DOI: 10.1177/1745691612469021.
Abstract
One of the oldest questions in cognitive science is whether cognitive operations are modular or distributed across domains. We propose that fMRI has made a unique contribution to this question by elucidating the nature of structure-function relations. We focus our discussion on language, which is the classic domain for arguments in favor of domain specificity and a fixed neural architecture. We argue that fMRI has provided evidence for the idea that there is a dynamic functional architecture, rather than a fixed neural architecture, that emerges across the lifespan, pursuant to injury and in response to language experience. We use empirical examples to highlight how fMRI has helped restructure theory by shedding light on how functionally distinct modular components of the grammar can recruit some of the same neural regions, how areas considered to be domain-specific may be recruited in a domain-general fashion, and how language network specialization and left lateralization dynamically emerge in response to experience. fMRI provides a window into neural plasticity and dynamic functional organization not easily afforded by behavior alone.
Affiliation(s)
- Sheila E Blumstein
- Department of Cognitive, Linguistic, & Psychological Sciences, Brown University
- Dima Amso
- Department of Cognitive, Linguistic, & Psychological Sciences, Brown University
11
Dien J, Brian ES, Molfese DL, Gold BT. Combined ERP/fMRI evidence for early word recognition effects in the posterior inferior temporal gyrus. Cortex 2013; 49:2307-2321. PMID: 23701693; DOI: 10.1016/j.cortex.2013.03.008.
Abstract
Two brain regions with established roles in reading are the posterior middle temporal gyrus and the posterior fusiform gyrus (FG). Lesion studies have also suggested that the region located between them, the posterior inferior temporal gyrus (pITG), plays a central role in word recognition. However, these lesion results could reflect disconnection effects, since neuroimaging studies have not reported consistent lexicality effects in the pITG. Here, we used parallel event-related potential (ERP) and functional magnetic resonance imaging (fMRI) studies to test whether these reported pITG lesion effects are due to disconnection. We predicted that the Recognition Potential (RP), a left-lateralized ERP negativity that peaks at about 200-250 msec, might be the electrophysiological correlate of pITG activity, and that conditions that evoke the RP (perceptual degradation) might therefore also evoke pITG activity. In Experiment 1, twenty-three participants performed a lexical decision task (temporally flanked by supraliminal masks) while high-density 129-channel ERP data were collected. In Experiment 2, a separate group of fifteen participants underwent the same task while fMRI data were collected in a 3T scanner. Examination of the ERP data suggested that a canonical RP effect was produced. The strongest corresponding effect in the fMRI data was in the vicinity of the pITG. In addition, results indicated stimulus-dependent functional connectivity between the pITG and a region of the posterior FG near the Visual Word Form Area (VWFA) during word compared to nonword processing. These results provide convergent spatiotemporal evidence that the pITG contributes to early lexical access through interaction with the VWFA.
Affiliation(s)
- Joseph Dien
- Center for Advanced Study of Language, University of Maryland, College Park, MD, USA; Department of Psychological & Brain Sciences, University of Louisville, Louisville, KY, USA.
12
Vannest J, Szaflarski JP, Eaton KP, Henkel DM, Morita D, Glauser TA, Byars AW, Patel K, Holland SK. Functional magnetic resonance imaging reveals changes in language localization in children with benign childhood epilepsy with centrotemporal spikes. J Child Neurol 2013; 28:435-445. PMID: 22761402; DOI: 10.1177/0883073812447682.
Abstract
In children with benign childhood epilepsy with centrotemporal spikes, centrotemporal spikes may cause language dysfunction via disruption of underlying functional neuroanatomy. Fifteen patients with benign childhood epilepsy with centrotemporal spikes and 15 healthy controls completed 3 functional magnetic resonance imaging (MRI) language paradigms; standardized cognitive and language assessments were also performed. For all paradigms, children with benign childhood epilepsy with centrotemporal spikes showed specific regional differences in activation compared to controls. Children with benign childhood epilepsy with centrotemporal spikes also differed from controls on neuropsychological testing. They did not differ in general intelligence, but children with benign childhood epilepsy with centrotemporal spikes scored significantly lower than controls on tests of language, visuomotor integration, and processing speed. These results extend previous findings of lower language and cognitive skills in patients with benign childhood epilepsy with centrotemporal spikes, and suggest epilepsy-related remodeling of language networks that may underlie these observed differences.
Affiliation(s)
- Jennifer Vannest
- Pediatric Neuroimaging Research Consortium, Cincinnati Children's Hospital Medical Center, Cincinnati, OH 45229, USA.
13
Neural substrates for semantic memory of familiar songs: is there an interface between lyrics and melodies? PLoS One 2012; 7:e46354. PMID: 23029492; PMCID: PMC3460812; DOI: 10.1371/journal.pone.0046354.
Abstract
Findings on song perception and song production have increasingly suggested that common but partially distinct neural networks exist for processing lyrics and melody. However, the neural substrates of song recognition remain to be investigated. The purpose of this study was to use positron emission tomography (PET) to examine the neural substrates involved in accessing the “song lexicon”, a representational system that might provide links between the musical and phonological lexicons. We exposed participants to auditory stimuli consisting of familiar and unfamiliar songs presented in three ways: sung lyrics (song), sung lyrics on a single pitch (lyrics), and the sung syllable ‘la’ on the original pitches (melody). The auditory stimuli were designed to be equivalently familiar to participants, and they were recorded at exactly the same tempo. Eleven right-handed nonmusicians participated in four conditions: three familiarity decision tasks using song, lyrics, and melody, and a sound type decision task (control) that was designed to engage perceptual and prelexical processing but not lexical processing. The contrasts (familiarity decision tasks versus control) showed no common areas of activation between lyrics and melody. This result indicates that essentially separate neural networks exist in semantic memory for the verbal and melodic processing of familiar songs. Verbal lexical processing recruited the left fusiform gyrus and the left inferior occipital gyrus, whereas melodic lexical processing engaged the right middle temporal sulcus and the bilateral temporo-occipital cortices. Moreover, we found that song specifically activated the left posterior inferior temporal cortex, which may serve as an interface between verbal and musical representations in order to facilitate song recognition.
14
Möttönen R, Watkins KE. Using TMS to study the role of the articulatory motor system in speech perception. Aphasiology 2012; 26:1103-1118. PMID: 22942513; PMCID: PMC3431548; DOI: 10.1080/02687038.2011.619515.
Abstract
Background: The ability to communicate using speech is a remarkable skill, which requires precise coordination of articulatory movements and decoding of complex acoustic signals. According to the traditional view, speech production and perception rely on motor and auditory brain areas, respectively. However, there is growing evidence that auditory-motor circuits support both speech production and perception. Aims: In this article we provide a review of how transcranial magnetic stimulation (TMS) has been used to investigate the excitability of the motor system during listening to speech and the contribution of the motor system to performance in various speech perception tasks. We also discuss how TMS can be used in combination with brain-imaging techniques to study interactions between motor and auditory systems during speech perception. Main contribution: TMS has proven to be a powerful tool for investigating the role of the articulatory motor system in speech perception. Conclusions: TMS studies have provided support for the view that the motor structures that control the movements of the articulators contribute not only to speech production but also to speech perception.
Affiliation(s)
- Riikka Möttönen
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Kate E. Watkins
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Centre for Functional Magnetic Resonance Imaging of the Brain (FMRIB), University of Oxford, Oxford, UK
15
Tacikowski P, Brechmann A, Nowicka A. Cross-modal pattern of brain activations associated with the processing of self- and significant other's name. Hum Brain Mapp 2012; 34:2069-2077. PMID: 22431327; DOI: 10.1002/hbm.22048.
Abstract
Previous neuroimaging studies have shown that the patterns of brain activity during the processing of personally relevant names (e.g., own name, friend's name, partner's name, etc.) and the names of famous people (e.g., celebrities) are different. However, it is not known how the activity in this network is influenced by the modality of the presented stimuli. In this fMRI study, we investigated the pattern of brain activations during the recognition of aurally and visually presented full names of the subject, a significant other, a famous person, and unknown individuals. In both modalities, we found that the processing of the self-name and the significant other's name was associated with increased activation in the medial prefrontal cortex (MPFC). Acoustic presentations of these names also activated the bilateral inferior frontal gyri (IFG). This pattern of results supports the role of the MPFC in the processing of personally relevant information, irrespective of stimulus modality.
Affiliation(s)
- Pawel Tacikowski
- Nencki Institute of Experimental Biology, Department of Neurophysiology, Laboratory of Psychophysiology, 3 Pasteur St., Warsaw, Poland.
16
Tacikowski P, Brechmann A, Marchewka A, Jednoróg K, Dobrowolny M, Nowicka A. Is it about the self or the significance? An fMRI study of self-name recognition. Soc Neurosci 2010; 6:98-107. PMID: 20602286; DOI: 10.1080/17470919.2010.490665.
Abstract
Our own name, due to its high social relevance, is supposed to have a unique status in our information processing. However, demonstrating this phenomenon empirically proves difficult, as the famous and unknown names to which the self-name is often compared in studies may differ from it not only in terms of the 'me vs. not-me' distinction, but also in their emotional content and frequency of occurrence in everyday life. In this fMRI study, in addition to famous and unknown names, we used the names of the most important persons in our subjects' lives. When compared to recognition of famous or unknown names, self-name recognition was associated with robust activations in a widely distributed bilateral network including fronto-temporal, limbic, and subcortical structures; however, when compared to the significant other's name, activations were present specifically in the right inferior frontal gyrus. In addition, the significant other's name produced a pattern of activations similar to that produced by the self-name. These results suggest that the differences between own- and other-name processing may be quantitative rather than qualitative in nature.
Affiliation(s)
- P Tacikowski
- Nencki Institute of Experimental Biology, Department of Neurophysiology, Laboratory of Psychophysiology, Warsaw, Poland.