1
Meng Y, Liang C, Chen W, Liu Z, Yang C, Hu J, Gao Z, Gao S. Neural basis of language familiarity effects on voice recognition: An fNIRS study. Cortex 2024; 176:1-10. [PMID: 38723449] [DOI: 10.1016/j.cortex.2024.04.007]
Abstract
Recognizing talkers' identities from speech is an important social skill in interpersonal interaction. Behavioral evidence has shown that listeners identify voices better in their native language than in a non-native language, a phenomenon known as the language familiarity effect (LFE). However, its underlying neural mechanisms remain unclear. This study therefore investigated how the LFE arises at the neural level using functional near-infrared spectroscopy (fNIRS). Late unbalanced bilinguals first learned to associate strangers' voices with their identities and were then tested on recognizing the talkers' identities from voices speaking a language that was highly familiar (the native language, Chinese), moderately familiar (the second language, English), or completely unfamiliar (Ewe) to participants. Participants identified talkers most accurately in Chinese and least accurately in Ewe. Talker identification was quicker in Chinese than in English and Ewe, but reaction time did not differ between the two non-native languages. At the neural level, recognizing voices speaking Chinese relative to English or Ewe produced less activity in the inferior frontal gyrus, precentral/postcentral gyrus, supramarginal gyrus, and superior temporal sulcus/gyrus, whereas no difference was found between English and Ewe, indicating that automatic phonological encoding in the native language facilitates voice identification. These findings shed new light on the interrelations between language ability and voice recognition, revealing that the brain activation pattern of the LFE depends on the automaticity of language processing.
Affiliation(s)
- Yuan Meng
- School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China
- Chunyan Liang
- School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China; Zhuojin Branch of Yandaojie Primary School, Chengdu, China
- Wenjing Chen
- School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China
- Zhaoning Liu
- School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China
- Chaoqing Yang
- School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China
- Jiehui Hu
- School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China; The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu, China
- Zhao Gao
- School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China; The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu, China
- Shan Gao
- School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China; The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu, China
2
Flaherty MM, Price R, Murgia S, Manukian E. Can Playing a Game Improve Children's Speech Recognition? A Preliminary Study of Implicit Talker Familiarity Effects. Am J Audiol 2023:1-16. [PMID: 38056473] [DOI: 10.1044/2023_aja-23-00156]
Abstract
PURPOSE The goal was to evaluate whether implicit talker familiarization via an interactive computer game, designed for this study, could improve children's word recognition in classroom noise. It was hypothesized that, regardless of age, children would perform better when recognizing words spoken by the talker who was heard during the game they played. METHOD Using a one-group pretest-posttest experimental design, this study examined the impact of short-term implicit voice exposure on children's word recognition in classroom noise. Implicit voice familiarization occurred via an interactive computer game, played at home for 10 min a day for 5 days. In the game, children (8-12 years) heard one voice, intended to become the "familiar talker." Pre- and postfamiliarization, children identified words in prerecorded classroom noise. Four conditions were tested to evaluate talker familiarity and generalization effects. RESULTS Results demonstrated an 11% improvement when recognizing words spoken by the voice heard in the game ("familiar talker"). This was observed only for words that were heard in the game and did not generalize to unfamiliarized words. Before familiarization, younger children had poorer recognition than older children in all conditions; however, after familiarization, there was no effect of age on performance for familiarized stimuli. CONCLUSIONS Implicit short-term exposure to a talker has the potential to improve children's speech recognition. Therefore, leveraging talker familiarity through gameplay shows promise as a viable method for improving children's speech-in-noise recognition. However, given that improvements did not generalize to unfamiliarized words, careful consideration of exposure stimuli is necessary to optimize this approach.
Affiliation(s)
- Mary M Flaherty
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, Champaign
- Rachael Price
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, Champaign
- Department of Audiology, Children's Hospital of Philadelphia, PA
- Silvia Murgia
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, Champaign
- Emma Manukian
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, Champaign
3
Baese-Berk MM, Levi SV, Van Engen KJ. Intelligibility as a measure of speech perception: Current approaches, challenges, and recommendations. J Acoust Soc Am 2023; 153:68. [PMID: 36732227] [DOI: 10.1121/10.0016806]
Abstract
Intelligibility measures, which assess the number of words or phonemes a listener correctly transcribes or repeats, are commonly used metrics for speech perception research. While these measures have many benefits for researchers, they also come with a number of limitations. By pointing out the strengths and limitations of this approach, including how it fails to capture aspects of perception such as listening effort, this article argues that the role of intelligibility measures must be reconsidered in fields such as linguistics, communication disorders, and psychology. Recommendations for future work in this area are presented.
Affiliation(s)
- Susannah V Levi
- Department of Communicative Sciences and Disorders, New York University, New York, New York 10012, USA
- Kristin J Van Engen
- Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, Missouri 63130, USA
4
Lee JJ, Perrachione TK. Implicit and explicit learning in talker identification. Atten Percept Psychophys 2022; 84:2002-2015. [PMID: 35534783] [PMCID: PMC10081569] [DOI: 10.3758/s13414-022-02500-8]
Abstract
In the real world, listeners seem to implicitly learn talkers' vocal identities during interactions that prioritize attending to the content of talkers' speech. In contrast, most laboratory experiments of talker identification employ training paradigms that require listeners to explicitly practice identifying voices. Here, we investigated whether listeners become familiar with talkers' vocal identities during initial exposures that do not involve explicit talker identification. Participants were assigned to one of three exposure tasks, in which they heard identical stimuli but were differentially required to attend to the talkers' vocal identity or to the verbal content of their speech: (1) matching the talker to a concurrent visual cue (talker-matching); (2) discriminating whether the talker was the same as the prior trial (talker 1-back); or (3) discriminating whether speech content matched the previous trial (verbal 1-back). All participants were then tested on their ability to learn to identify talkers from novel speech content. Critically, we manipulated whether the talkers during this post-test differed from those heard during training. Compared to learning to identify novel talkers, listeners were significantly more accurate learning to identify the talkers they had previously been exposed to in the talker-matching and verbal 1-back tasks, but not the talker 1-back task. The correlation between talker identification test performance and exposure task performance was also greater when the talkers were the same in both tasks. These results suggest that listeners learn talkers' vocal identity implicitly during speech perception, even if they are not explicitly attending to the talkers' identity.
Affiliation(s)
- Jayden J Lee
- Department of Speech, Language, & Hearing Sciences, Boston University, 635 Commonwealth Ave, Boston, MA, 02215, USA
- Tyler K Perrachione
- Department of Speech, Language, & Hearing Sciences, Boston University, 635 Commonwealth Ave, Boston, MA, 02215, USA
5
Stoop TB, Moriarty PM, Wolf R, Gilmore RO, Perez-Edgar K, Scherf KS, Vigeant MC, Cole PM. I know that voice! Mothers' voices influence children's perceptions of emotional intensity. J Exp Child Psychol 2020; 199:104907. [PMID: 32682101] [PMCID: PMC9401094] [DOI: 10.1016/j.jecp.2020.104907]
Abstract
The ability to interpret others' emotions is a critical skill for children's socioemotional functioning. Although research has emphasized facial emotion expressions, children must also constantly interpret vocal emotion expressed at or around them by individuals who are familiar or unfamiliar to them. The current study examined how speaker familiarity, specific emotions, and the acoustic properties that comprise affective prosody influenced children's interpretations of emotional intensity. Participants were 51 children aged 7 and 8 years, who heard speech stimuli spoken in happy, angry, sad, and nonemotional prosodies by each child's own mother and by another child's mother who was unfamiliar to the target child. Analyses indicated that children rated their own mothers as more intensely emotional than the unfamiliar mothers and that this effect was specific to angry and happy prosodies. Furthermore, the acoustic properties predicted children's emotional intensity ratings in different patterns for each emotion. The results are discussed in terms of the significance of the mother's voice in children's development of emotional understanding.
Affiliation(s)
- Tawni B Stoop
- Department of Psychology, The Pennsylvania State University, State College, PA 16803, USA.
- Peter M Moriarty
- Acoustics Program, College of Engineering, The Pennsylvania State University, State College, PA 16803, USA
- Rachel Wolf
- Department of Psychology, The Pennsylvania State University, State College, PA 16803, USA
- Rick O Gilmore
- Department of Psychology, The Pennsylvania State University, State College, PA 16803, USA
- Koraly Perez-Edgar
- Department of Psychology, The Pennsylvania State University, State College, PA 16803, USA
- K Suzanne Scherf
- Department of Psychology, The Pennsylvania State University, State College, PA 16803, USA
- Michelle C Vigeant
- Acoustics Program, College of Engineering, The Pennsylvania State University, State College, PA 16803, USA
- Pamela M Cole
- Department of Psychology, The Pennsylvania State University, State College, PA 16803, USA
6
Levi SV, Harel D, Schwartz RG. Language Ability and the Familiar Talker Advantage: Generalizing to Unfamiliar Talkers Is What Matters. J Speech Lang Hear Res 2019; 62:1427-1436. [PMID: 31021674] [PMCID: PMC6808318] [DOI: 10.1044/2019_jslhr-l-18-0160]
Abstract
Purpose Previous studies with children and adults have demonstrated a familiar talker advantage: better word recognition for familiar talkers. The goal of the current study was to test whether this phenomenon is modulated by a child's language ability. Method Sixty children with a range of language ability were trained to learn the voices of 3 foreign-accented, German-English bilingual talkers and received feedback about their performance. Both before and after this talker voice training, children completed a spoken word recognition task in which they heard consonant-vowel-consonant words mixed with noise that were spoken by the 3 familiarized talkers and by 3 unfamiliar German-English bilinguals. Results Two findings emerged from this study: First, children with both higher and lower language ability performed similarly on the familiarized talkers. Second, children with higher language scores performed similarly on both the familiarized and unfamiliar talkers, whereas children with lower language scores performed worse on the unfamiliar talkers compared to familiar talkers, suggesting an inability to generalize to novel, unfamiliar talkers who spoke with a similar accent. Discussion Together, these findings indicate that children with higher language scores are able to generalize knowledge about foreign-accented talkers to help spoken word recognition for novel talkers with the same accent. In contrast, children with lower language skills did not exhibit the same magnitude of generalization. This lack of generalization to similar talkers may mean that children with lower language skills are at a disadvantage in spoken language tasks because they are unable to process speech as well when listening to unfamiliar talkers.
Affiliation(s)
- Susannah V. Levi
- Department of Communicative Sciences and Disorders, New York University, New York
- Daphna Harel
- PRIISM Applied Statistics Center, Department of Applied Statistics, Social Science, and Humanities, New York University, New York
- Richard G. Schwartz
- Program in Speech-Language-Hearing Sciences, Graduate Center, City University of New York, New York
7
Case J, Seyfarth S, Levi SV. Short-term implicit voice-learning leads to a Familiar Talker Advantage: The role of encoding specificity. J Acoust Soc Am 2018; 144:EL497. [PMID: 30599692] [PMCID: PMC6279454] [DOI: 10.1121/1.5081469]
Abstract
Whereas previous research has found that a Familiar Talker Advantage (better spoken language perception for familiar voices) occurs following explicit voice-learning, Case, Seyfarth, and Levi [(2018). J. Speech, Lang., Hear. Res. 61(5), 1251-1260] failed to find this effect after implicit voice-learning. To test whether the advantage is limited to explicit voice-learning, a follow-up experiment evaluated implicit voice-learning under more similar encoding (training) and retrieval (test) conditions. Sentence recognition in noise improved significantly more for familiar than unfamiliar talkers, suggesting that short-term implicit voice-learning can lead to a Familiar Talker Advantage. This paper explores how similarity in encoding and retrieval conditions might affect the acquired processing advantage.
Affiliation(s)
- Julie Case
- Department of Communicative Sciences and Disorders, New York University, 665 Broadway, 9th floor, New York, New York 10012, USA
- Scott Seyfarth
- Department of Linguistics, Ohio State University, 1712 Neil Avenue, Oxley Hall, Columbus, Ohio 43210, USA
- Susannah V Levi
- Department of Communicative Sciences and Disorders, New York University, 665 Broadway, 9th floor, New York, New York 10012, USA
8
Levi SV. Methodological considerations for interpreting the Language Familiarity Effect in talker processing. Wiley Interdiscip Rev Cogn Sci 2018; 10:e1483. [DOI: 10.1002/wcs.1483]
Affiliation(s)
- Susannah V. Levi
- Department of Communicative Sciences and Disorders, New York University, New York, New York
9
Case J, Seyfarth S, Levi SV. Does Implicit Voice Learning Improve Spoken Language Processing? Implications for Clinical Practice. J Speech Lang Hear Res 2018; 61:1251-1260. [PMID: 29800358] [PMCID: PMC6195079] [DOI: 10.1044/2018_jslhr-l-17-0298]
Abstract
PURPOSE In typical interactions with other speakers, including a clinical environment, listeners become familiar with voices through implicit learning. Previous studies have found evidence for a Familiar Talker Advantage (better speech perception and spoken language processing for familiar voices) following explicit voice learning. The current study examined whether a Familiar Talker Advantage would result from implicit voice learning. METHOD Thirty-three adults and 16 second graders were familiarized with 1 of 2 talkers' voices over 2 days through live interactions as 1 of 2 experimenters administered standardized tests and interacted with the listeners. To assess whether this implicit voice learning would generate a Familiar Talker Advantage, listeners completed a baseline sentence recognition task and a post-learning sentence recognition task with both the familiar talker and the unfamiliar talker. RESULTS No significant effect of voice familiarity was found for either the children or the adults following implicit voice learning. Effect size estimates suggest that familiarity with the voice may benefit some listeners, despite the lack of an overall effect of familiarity. DISCUSSION We discuss possible clinical implications of this finding and directions for future research.
Affiliation(s)
- Julie Case
- Department of Communicative Sciences and Disorders, New York University, New York
- Scott Seyfarth
- Department of Linguistics and Office of Academic Affairs, Ohio State University, Columbus
- Susannah V. Levi
- Department of Communicative Sciences and Disorders, New York University, New York
10
Levi S. Another bilingual advantage? Perception of talker-voice information. Biling (Camb Engl) 2018; 21:523-536. [PMID: 29755282] [PMCID: PMC5945195] [DOI: 10.1017/s1366728917000153]
Abstract
A bilingual advantage has been found in both cognitive and social tasks. In the current study, we examine whether there is a bilingual advantage in how children process information about who is talking (talker-voice information). Younger and older groups of monolingual and bilingual children completed the following talker-voice tasks with bilingual speakers: a discrimination task in English and German (an unfamiliar language), and a talker-voice learning task in which they learned to identify the voices of three unfamiliar speakers in English. Results revealed effects of age and bilingual status. Across the tasks, older children performed better than younger children and bilingual children performed better than monolingual children. Improved talker-voice processing by the bilingual children suggests that a bilingual advantage exists in a social aspect of speech perception, where the focus is not on processing the linguistic information in the signal, but instead on processing information about who is talking.
11
Drozdova P, van Hout R, Scharenborg O. L2 voice recognition: The role of speaker-, listener-, and stimulus-related factors. J Acoust Soc Am 2017; 142:3058. [PMID: 29195438] [DOI: 10.1121/1.5010169]
Abstract
Previous studies have examined various factors influencing voice recognition and learning, with mixed results. The present study investigates the separate and combined contributions of these speaker-, stimulus-, and listener-related factors to voice recognition. Dutch listeners, with arguably incomplete phonological and lexical knowledge of the target language, English, learned to recognize the voices of four native English speakers, speaking in English, during a four-day training. Training was successful, and listeners' accuracy was shown to be influenced by the acoustic characteristics of the speakers and the sound composition of the words used in the training, but not by the lexical frequency of the words, the lexical knowledge of the listeners, or their phonological aptitude. Although not conclusive, listeners with a lower working memory capacity seemed to be slower in learning voices than listeners with a higher working memory capacity. The results reveal that speaker-related, listener-related, and stimulus-related factors accumulate in voice recognition, while lexical information turns out not to play a role in successful voice learning and recognition. This implies that voice recognition operates at the prelexical processing level.
Affiliation(s)
- Polina Drozdova
- Centre for Language Studies, Radboud University Nijmegen, Erasmusplein 1, P.O. Box 9103, 6500 HD Nijmegen, the Netherlands
- Roeland van Hout
- Centre for Language Studies, Radboud University Nijmegen, Erasmusplein 1, P.O. Box 9103, 6500 HD Nijmegen, the Netherlands
- Odette Scharenborg
- Centre for Language Studies, Radboud University Nijmegen, Erasmusplein 1, P.O. Box 9103, 6500 HD Nijmegen, the Netherlands
12
Sidiras C, Iliadou V, Nimatoudis I, Reichenbach T, Bamiou DE. Spoken Word Recognition Enhancement Due to Preceding Synchronized Beats Compared to Unsynchronized or Unrhythmic Beats. Front Neurosci 2017; 11:415. [PMID: 28769752] [PMCID: PMC5513984] [DOI: 10.3389/fnins.2017.00415]
Abstract
The relation between rhythm and language has been investigated over the last decades, with evidence from several different strands of research that the two share overlapping perceptual mechanisms. Dynamic Attending Theory posits that neural entrainment to musical rhythm results in synchronized oscillations in attention, enhancing perception of other events occurring at the same rate. In this study, this prediction was tested in 10-year-old children by means of a psychoacoustic speech-recognition-in-babble paradigm. It was hypothesized that rhythm effects evoked via a short isochronous sequence of beats would provide optimal word recognition in babble when beats and word are in sync. We compared speech-recognition-in-babble performance in the presence of an isochronous and in-sync vs. a non-isochronous or out-of-sync sequence of beats. Results showed that (a) word recognition was best when rhythm and word were in sync, and (b) the effect was not uniform across syllables and genders of subjects. Our results suggest that pure-tone beats affect speech recognition at early levels of sensory or phonemic processing.
Affiliation(s)
- Christos Sidiras
- Clinical Psychoacoustics Laboratory, Neuroscience Division, 3rd Psychiatric Department, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Vasiliki Iliadou
- Clinical Psychoacoustics Laboratory, Neuroscience Division, 3rd Psychiatric Department, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Ioannis Nimatoudis
- Clinical Psychoacoustics Laboratory, Neuroscience Division, 3rd Psychiatric Department, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Tobias Reichenbach
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Doris-Eva Bamiou
- Faculty of Brain Sciences, UCL Ear Institute, University College London, London, United Kingdom
13
Stevenage SV. Drawing a distinction between familiar and unfamiliar voice processing: A review of neuropsychological, clinical and empirical findings. Neuropsychologia 2017; 116:162-178. [PMID: 28694095] [DOI: 10.1016/j.neuropsychologia.2017.07.005]
Abstract
Thirty years on from their initial observation that familiar voice recognition is not the same as unfamiliar voice discrimination (van Lancker and Kreiman, 1987), the current paper reviews available evidence in support of a distinction between familiar and unfamiliar voice processing. Here, an extensive review of the literature is provided, drawing on evidence from four domains of interest: the neuropsychological study of healthy individuals, neuropsychological investigation of brain-damaged individuals, the exploration of voice recognition deficits in less commonly studied clinical conditions, and finally empirical data from healthy individuals. All evidence is assessed in terms of its contribution to the question of interest: is familiar voice processing distinct from unfamiliar voice processing? In this regard, the evidence provides compelling support for van Lancker and Kreiman's early observation. Two considerations result: first, the limits of research based on one or other type of voice stimulus are more clearly appreciated; second, given the demonstration of a distinction between unfamiliar and familiar voice processing, a new wave of research is encouraged which examines the transition involved as a voice is learned.
Affiliation(s)
- Sarah V Stevenage
- Department of Psychology, University of Southampton, Highfield, Southampton, Hampshire SO17 1BJ, UK.
14
Levi SV. Individual differences in learning talker categories: the role of working memory. Phonetica 2015; 71:201-226. [PMID: 25721393] [PMCID: PMC4861173] [DOI: 10.1159/000370160]
Abstract
The current study explores the question of how an auditory category is learned by having school-age listeners learn to categorize speech not in terms of linguistic categories, but instead in terms of talker categories (i.e., who is talking). Findings from visual-category learning indicate that working memory skills affect learning, but the literature is equivocal: sometimes better working memory is advantageous, and sometimes not. The current study examined the role of different components of working memory to test which component skills benefit, and which hinder, learning talker categories. Results revealed that the short-term storage component positively predicted learning, but that the Central Executive and Episodic Buffer negatively predicted learning. As with visual categories, better working memory is not always an advantage.
Affiliation(s)
- Susannah V Levi
- Department of Communicative Sciences and Disorders, New York University, New York, N.Y., USA