1
Sun W, Zou J, Zhu T, Sun Z, Ding N. Linguistic feedback supports rapid adaptation to acoustically degraded speech. iScience 2024;27:110055. PMID: 38868204; PMCID: PMC11167482; DOI: 10.1016/j.isci.2024.110055.
Abstract
Humans can quickly adapt to recognize acoustically degraded speech, and here we hypothesize that this rapid adaptation is enabled by internal linguistic feedback: listeners use partially recognized sentences to adapt the mapping between acoustic features and phonetic labels. We test this hypothesis by quantifying how quickly humans adapt to degraded speech and analyzing whether the adaptation process can be simulated by adapting an automatic speech recognition (ASR) system based on its own speech recognition results. We consider three types of acoustic degradation, i.e., noise vocoding, time compression, and local time-reversal. The human speech recognition rate can increase by >20% after exposure to just a few acoustically degraded sentences. Critically, the ASR system with internal linguistic feedback can adapt to degraded speech with human-level speed and accuracy. These results suggest that self-supervised learning based on linguistic feedback is a plausible strategy for human adaptation to acoustically degraded speech.
Affiliation(s)
- Wenhui Sun
- Research Center for Life Sciences Computing, Zhejiang Lab, Hangzhou 311121, China
- Jiajie Zou
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou 310027, China
- Tianyi Zhu
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou 310027, China
- Zhoujian Sun
- Research Center for Life Sciences Computing, Zhejiang Lab, Hangzhou 311121, China
- Nai Ding
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou 310027, China

2
Wang HS, Köhler S, Batterink LJ. Separate but not independent: Behavioral pattern separation and statistical learning are differentially affected by aging. Cognition 2023;239:105564. PMID: 37467624; DOI: 10.1016/j.cognition.2023.105564.
Abstract
Our brains are capable of discriminating similar inputs (pattern separation) and rapidly generalizing across inputs (statistical learning). Are these two processes dissociable in behavior? Here, we asked whether cognitive aging affects them in a differential or parallel manner. Older and younger adults were tested on their ability to discriminate between similar trisyllabic words and to extract trisyllabic words embedded in a continuous speech stream. Older adults demonstrated intact statistical learning on an implicit, reaction-time-based measure and an explicit, familiarity-based measure of learning. However, they performed poorly in discriminating similar items presented in isolation, both for episodically encoded items and for statistically learned regularities. These results indicate that pattern separation and statistical learning are dissociable and differentially affected by aging. The acquisition of implicit representations of statistical regularities operates robustly into old age, whereas pattern separation influences the expression of statistical learning with high representational fidelity and is subject to age-related decline.
Affiliation(s)
- Helena Shizhe Wang
- Western Institute for Neuroscience, University of Western Ontario, London, Ontario, Canada
- Stefan Köhler
- Western Institute for Neuroscience, University of Western Ontario, London, Ontario, Canada; Department of Psychology, University of Western Ontario, London, Ontario, Canada; Rotman Research Institute, Baycrest, Toronto, Ontario, Canada
- Laura J Batterink
- Western Institute for Neuroscience, University of Western Ontario, London, Ontario, Canada; Department of Psychology, University of Western Ontario, London, Ontario, Canada

3
Khayr R, Karawani H, Banai K. Implicit learning and individual differences in speech recognition: an exploratory study. Front Psychol 2023;14:1238823. PMID: 37744578; PMCID: PMC10513179; DOI: 10.3389/fpsyg.2023.1238823.
Abstract
Individual differences in speech recognition in challenging listening environments are pronounced. Studies suggest that implicit learning is one variable that may contribute to this variability. Here, we explored the unique contributions of three indices of implicit learning to individual differences in the recognition of challenging speech. To this end, we assessed three indices of implicit learning (perceptual, statistical, and incidental), three types of challenging speech (natural fast, vocoded, and speech in noise), and cognitive factors associated with speech recognition (vocabulary, working memory, and attention) in a group of 51 young adults. Speech recognition was modeled as a function of the cognitive factors and learning, and the unique contribution of each index of learning was statistically isolated. The three indices of learning were uncorrelated. Whereas all indices of learning had unique contributions to the recognition of natural-fast speech, only statistical learning had a unique contribution to the recognition of speech in noise and vocoded speech. These data suggest that although implicit learning may contribute to the recognition of challenging speech, the contribution may depend on the type of speech challenge and on the learning task.
Affiliation(s)
- Ranin Khayr
- Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Sciences, University of Haifa, Haifa, Israel

4
Schevenels K, Altvater-Mackensen N, Zink I, De Smedt B, Vandermosten M. Aging effects and feasibility of statistical learning tasks across modalities. Neuropsychol Dev Cogn B Aging Neuropsychol Cogn 2023;30:201-230. PMID: 34823443; DOI: 10.1080/13825585.2021.2007213.
Abstract
Knowledge of statistical learning (SL) in healthy older adults is scarce. Theoretically, it is not clear whether aging affects modality-specific and/or domain-general learning mechanisms. Practically, there is a lack of research on simplified SL tasks, which would ease the burden of testing in clinical populations. Against this background, we conducted two experiments across three modalities (auditory, visual, and visuomotor) in a total of 93 younger and older adults. In Experiment 1, SL was induced in all modalities. Aging effects appeared in the tasks relying on an explicit posttest to assess SL. We hypothesize that declines in domain-general processes that predominantly modulate explicit learning mechanisms underlie these aging effects. In Experiment 2, more feasible tasks were developed for which the level of SL was maintained in all modalities except the auditory modality. These tasks are more likely to successfully measure SL in older (patient) populations, in which task demands can be problematic.
Affiliation(s)
- Klara Schevenels
- Research Group Experimental Oto-Rhino-Laryngology, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Inge Zink
- Research Group Experimental Oto-Rhino-Laryngology, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Bert De Smedt
- Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Maaike Vandermosten
- Research Group Experimental Oto-Rhino-Laryngology, Department of Neurosciences, KU Leuven, Leuven, Belgium

5
Moberly AC, Varadarajan VV, Tamati TN. Noise-Vocoded Sentence Recognition and the Use of Context in Older and Younger Adult Listeners. J Speech Lang Hear Res 2023;66:365-381. PMID: 36475738; PMCID: PMC10023188; DOI: 10.1044/2022_jslhr-22-00184.
Abstract
PURPOSE When listening to speech under adverse conditions, older adults, even with "age-normal" hearing, face challenges that may lead to poorer speech recognition than their younger peers. Older listeners generally demonstrate poorer suprathreshold auditory processing along with aging-related declines in neurocognitive functioning that may impair their ability to compensate using "top-down" cognitive-linguistic functions. This study explored top-down processing in older and younger adult listeners, specifically the use of semantic context during noise-vocoded sentence recognition. METHOD Eighty-four adults with age-normal hearing (45 young normal-hearing [YNH] and 39 older normal-hearing [ONH] adults) participated. Participants were tested for recognition accuracy for two sets of noise-vocoded sentence materials: one that was semantically meaningful and the other that was syntactically appropriate but semantically anomalous. Participants were also tested for hearing ability and for neurocognitive functioning to assess working memory capacity, speed of lexical access, inhibitory control, and nonverbal fluid reasoning, as well as vocabulary knowledge. RESULTS The ONH and YNH listeners made use of semantic context to a similar extent. Nonverbal reasoning predicted recognition of both meaningful and anomalous sentences, whereas pure-tone average contributed additionally to anomalous sentence recognition. None of the hearing, neurocognitive, or language measures significantly predicted the amount of context gain, computed as the difference score between meaningful and anomalous sentence recognition. However, exploratory cluster analyses demonstrated four listener profiles and suggested that individuals may vary in the strategies used to recognize speech under adverse listening conditions. CONCLUSIONS Older and younger listeners made use of sentence context to similar degrees. Nonverbal reasoning was found to be a contributor to noise-vocoded sentence recognition. However, different listeners may approach the problem of recognizing meaningful speech under adverse conditions using different strategies based on their hearing, neurocognitive, and language profiles. These findings provide support for the complexity of bottom-up and top-down interactions during speech recognition under adverse listening conditions.
Affiliation(s)
- Aaron C. Moberly
- Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus
- Terrin N. Tamati
- Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands

6
Lansford KL, Barrett TS, Borrie SA. Cognitive Predictors of Perception and Adaptation to Dysarthric Speech in Young Adult Listeners. J Speech Lang Hear Res 2023;66:30-47. PMID: 36480697; PMCID: PMC10023189; DOI: 10.1044/2022_jslhr-22-00391.
Abstract
PURPOSE Although recruitment of cognitive-linguistic resources to support dysarthric speech perception and adaptation is presumed by theoretical accounts of effortful listening and supported by cross-disciplinary empirical findings, prospective relationships have received limited attention in the disordered speech literature. This study aimed to examine the predictive relationships between cognitive-linguistic parameters and intelligibility outcomes associated with familiarization with dysarthric speech in young adult listeners. METHOD A cohort of 156 listener participants between the ages of 18 and 50 years completed a three-phase perceptual training protocol (pretest, training, and posttest) with one of three speakers with dysarthria. Additionally, listeners completed the National Institutes of Health Toolbox Cognition Battery to obtain measures of the following cognitive-linguistic constructs: working memory, inhibitory control of attention, cognitive flexibility, processing speed, and vocabulary knowledge. RESULTS Elastic net regression models revealed that select cognitive-linguistic measures and their two-way interactions predicted both initial intelligibility and intelligibility improvement of dysarthric speech. While some consistency across models was shown, unique constellations of select cognitive factors and their interactions predicted initial intelligibility and intelligibility improvement of the three different speakers with dysarthria. CONCLUSIONS Current findings extend empirical support for theoretical models of speech perception in adverse listening conditions to dysarthric speech signals. Although predictive relationships were complex, vocabulary knowledge, working memory, and cognitive flexibility often emerged as important variables across the models.
Affiliation(s)
- Kaitlin L. Lansford
- School of Communication Science & Disorders, Florida State University, Tallahassee
- Stephanie A. Borrie
- Department of Communicative Disorders and Deaf Education, Utah State University, Logan

7
Bieber RE, Gordon-Salant S. Improving older adults' understanding of challenging speech: Auditory training, rapid adaptation and perceptual learning. Hear Res 2021;402:108054. PMID: 32826108; PMCID: PMC7880302; DOI: 10.1016/j.heares.2020.108054.
Abstract
The literature surrounding auditory perceptual learning and auditory training for challenging speech signals in older adult listeners is highly varied, in terms of both study methodology and reported outcomes. In this review, we discuss some of the pertinent features of the listener, the stimulus, and the training protocol. Literature regarding the elicitation of auditory perceptual learning for time-compressed speech, non-native speech, and noise-vocoded speech is reviewed, as are auditory training protocols designed to improve speech-in-noise recognition. The literature is synthesized to establish some overarching findings for the aging population, including an intact capacity for auditory perceptual learning but limited transfer of learning to untrained stimuli.
Affiliation(s)
- Rebecca E Bieber
- Department of Hearing and Speech Sciences, University of Maryland, 0100 LeFrak Hall, 7251 Preinkert Drive, College Park, MD 20742, United States
- Sandra Gordon-Salant
- Department of Hearing and Speech Sciences, University of Maryland, 0100 LeFrak Hall, 7251 Preinkert Drive, College Park, MD 20742, United States

8
Rotman T, Lavie L, Banai K. Rapid Perceptual Learning: A Potential Source of Individual Differences in Speech Perception Under Adverse Conditions? Trends Hear 2020;24:2331216520930541. PMID: 32552477; PMCID: PMC7303778; DOI: 10.1177/2331216520930541.
Abstract
Challenging listening situations (e.g., when speech is rapid or noisy) result in substantial individual differences in speech perception. We propose that rapid auditory perceptual learning is one of the factors contributing to those individual differences. To explore this proposal, we assessed rapid perceptual learning of time-compressed speech in young adults with normal hearing and in older adults with age-related hearing loss. We also assessed the contribution of this learning as well as that of hearing and cognition (vocabulary, working memory, and selective attention) to the recognition of natural-fast speech (NFS; both groups) and speech in noise (younger adults). In young adults, rapid learning and vocabulary were significant predictors of NFS and speech in noise recognition. In older adults, hearing thresholds, vocabulary, and rapid learning were significant predictors of NFS recognition. In both groups, models that included learning fitted the speech data better than models that did not include learning. Therefore, under adverse conditions, rapid learning may be one of the skills listeners could employ to support speech recognition.
Affiliation(s)
- Tali Rotman
- Department of Communication Sciences and Disorders, University of Haifa
- Limor Lavie
- Department of Communication Sciences and Disorders, University of Haifa
- Karen Banai
- Department of Communication Sciences and Disorders, University of Haifa

9
Kennedy-Higgins D, Devlin JT, Adank P. Cognitive mechanisms underpinning successful perception of different speech distortions. J Acoust Soc Am 2020;147:2728. PMID: 32359293; DOI: 10.1121/10.0001160.
Abstract
Few studies thus far have investigated whether perception of distorted speech is consistent across different types of distortion. This study investigated whether participants show a consistent perceptual profile across three speech distortions: time-compressed speech, noise-vocoded speech, and speech in noise. Additionally, this study investigated whether and how individual differences in performance on a battery of audiological and cognitive tasks link to perception. Eighty-eight participants completed a speeded sentence-verification task, with increases in accuracy and reductions in response times used to indicate performance. Audiological and cognitive measures included pure-tone audiometry, speech recognition threshold, working memory, vocabulary knowledge, attention switching, and pattern analysis. Despite previous studies suggesting that perception of temporal and spectral/environmental distortions requires different lexical or phonological mechanisms, this study shows significant positive correlations in accuracy and response-time performance across all distortions. Results of a principal component analysis and multiple linear regressions suggest that a component based on vocabulary knowledge and working memory predicted performance in the speech-in-quiet, time-compressed, and speech-in-noise conditions. These results suggest that listeners employ a similar cognitive strategy to perceive different temporal and spectral/environmental speech distortions and that this mechanism is supported by vocabulary knowledge and working memory.
Affiliation(s)
- Dan Kennedy-Higgins
- Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London, WC1N 1PF, United Kingdom
- Joseph T Devlin
- Department of Experimental Psychology, University College London, 26 Bedford Way, London, WC1H 0AP, United Kingdom
- Patti Adank
- Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London, WC1N 1PF, United Kingdom

10
Fletcher A, McAuliffe M, Kerr S, Sinex D. Effects of Vocabulary and Implicit Linguistic Knowledge on Speech Recognition in Adverse Listening Conditions. Am J Audiol 2019;28:742-755. PMID: 32271121; DOI: 10.1044/2019_aja-heal18-18-0169.
Abstract
Purpose This study aims to examine the combined influence of vocabulary knowledge and statistical properties of language on speech recognition in adverse listening conditions. Furthermore, it aims to determine whether any effects identified are more salient at particular levels of signal degradation. Method One hundred three young healthy listeners transcribed phrases presented at 4 different signal-to-noise ratios, which were coded for recognition accuracy. Participants also completed tests of hearing acuity, vocabulary knowledge, nonverbal intelligence, processing speed, and working memory. Results Vocabulary knowledge and working memory demonstrated independent effects on word recognition accuracy when controlling for hearing acuity, nonverbal intelligence, and processing speed. These effects were strongest at the same moderate level of signal degradation. Although listener variables were statistically significant, their effects were subtle in comparison to the influence of word frequency and phonological content. These language-based factors had large effects on word recognition at all signal-to-noise ratios. Discussion Language experience and working memory may have complementary effects on accurate word recognition. However, adequate glimpses of acoustic information appear necessary for speakers to leverage vocabulary knowledge when processing speech in adverse conditions.
Affiliation(s)
- Annalise Fletcher
- Department of Audiology & Speech-Language Pathology, University of North Texas, Denton
- Megan McAuliffe
- Department of Communication Disorders, University of Canterbury, Christchurch, New Zealand
- Sarah Kerr
- Department of Communication Disorders, University of Canterbury, Christchurch, New Zealand
- Donal Sinex
- Department of Speech, Language, and Hearing Science, University of Florida, Gainesville

11
Abstract
The effects of aging and age-related hearing loss on the ability to learn degraded speech are not well understood. This study was designed to compare the perceptual learning of time-compressed speech and its generalization to natural-fast speech across young adults with normal hearing, older adults with normal hearing, and older adults with age-related hearing loss. Early learning (following brief exposure to time-compressed speech) and later learning (following further training) were compared across groups. Age and age-related hearing loss were both associated with declines in early learning. Although the two groups of older adults improved during the training session, when compared to untrained control groups (matched for age and hearing), learning was weaker in older than in young adults. In particular, transfer of learning to untrained time-compressed sentences was reduced in both groups of older adults. Transfer of learning to natural-fast speech occurred regardless of age and hearing, but it was limited to sentences encountered during training. Findings are discussed within the framework of dynamic models of speech perception and learning. Based on this framework, we tentatively suggest that age-related declines in learning may stem from age differences in the use of high- and low-level speech cues. These age differences result in weaker early learning in older adults, which may further contribute to the difficulty of perceiving speech in daily conversational settings in this population.
Affiliation(s)
- Maayan Manheim
- Department of Communication Sciences and Disorders, University of Haifa, Israel
- Limor Lavie
- Department of Communication Sciences and Disorders, University of Haifa, Israel
- Karen Banai
- Department of Communication Sciences and Disorders, University of Haifa, Israel

12
Sengupta P, Burgaleta M, Zamora-López G, Basora A, Sanjuán A, Deco G, Sebastian-Galles N. Traces of statistical learning in the brain's functional connectivity after artificial language exposure. Neuropsychologia 2019;124:246-253. PMID: 30521815; DOI: 10.1016/j.neuropsychologia.2018.12.001.
Abstract
Our environment is full of statistical regularities, and we are attuned to learn about these regularities by employing Statistical Learning (SL), a domain-general ability that enables the implicit detection of probabilistic regularities in our surrounding environment. The role of brain connectivity in SL has been explored previously, highlighting the relevance of structural and functional connections between frontal, parietal, and temporal cortices. However, whether SL can induce changes in the functional connections of the resting-state brain had yet to be investigated. To address this question, we applied a pre-post design in which participants (n = 38) underwent resting-state fMRI acquisition before and after in-scanner exposure to either an artificial language stream (formed by 4 concatenated words) or a random audio stream. Our results showed that exposure to an artificial language stream significantly changed (corrected p < 0.05) the functional connectivity between the right posterior cingulum and the left superior parietal lobule. This suggests that functional connectivity between brain networks supporting attentional and working memory processes may play an important role in statistical learning.
Affiliation(s)
- Pallabi Sengupta
- Center for Brain and Cognition, Department of Technology, Universitat Pompeu Fabra, 08018 Barcelona, Spain
- Miguel Burgaleta
- Center for Brain and Cognition, Department of Technology, Universitat Pompeu Fabra, 08018 Barcelona, Spain
- Gorka Zamora-López
- Center for Brain and Cognition, Department of Technology, Universitat Pompeu Fabra, 08018 Barcelona, Spain
- Anna Basora
- Center for Brain and Cognition, Department of Technology, Universitat Pompeu Fabra, 08018 Barcelona, Spain
- Ana Sanjuán
- Center for Brain and Cognition, Department of Technology, Universitat Pompeu Fabra, 08018 Barcelona, Spain
- Gustavo Deco
- Center for Brain and Cognition, Department of Technology, Universitat Pompeu Fabra, 08018 Barcelona, Spain
- Nuria Sebastian-Galles
- Center for Brain and Cognition, Department of Technology, Universitat Pompeu Fabra, 08018 Barcelona, Spain

13

14
Clayards M. Differences in cue weights for speech perception are correlated for individuals within and across contrasts. J Acoust Soc Am 2018;144:EL172. PMID: 30424660; DOI: 10.1121/1.5052025.
Abstract
Speech perception requires multiple acoustic cues. Cue weighting may differ across individuals but be systematic within individuals. The current study compared individuals' cue weights within and across contrasts. Forty-two listeners performed a two-alternative forced choice task for four out of five sets of minimal pairs, each varying orthogonally in two dimensions. Individuals' cue weights within a contrast were positively correlated for bet-bat, Luce-lose, and sock-shock, but not for bog-dog and dear-tear. Importantly, individuals' cue weights were also positively correlated across contrasts. This indicates that some individuals are better able to extract and use phonetic information across different dimensions.
Affiliation(s)
- Meghan Clayards
- Department of Linguistics and School of Communication Sciences and Disorders, McGill University, Montreal, Canada

15
Colby S, Clayards M, Baum S. The Role of Lexical Status and Individual Differences for Perceptual Learning in Younger and Older Adults. J Speech Lang Hear Res 2018;61:1855-1874. PMID: 30003232; DOI: 10.1044/2018_jslhr-s-17-0392.
Abstract
PURPOSE This study examined whether older adults remain perceptually flexible when presented with ambiguities in speech in the absence of lexically disambiguating information. We expected older adults to show less perceptual learning when top-down information was not available. We also investigated whether individual differences in executive function predicted perceptual learning in older and younger adults. METHOD Younger (n = 31) and older adults (n = 27) completed 2 perceptual learning tasks composed of a pretest, exposure, and posttest phase. Both learning tasks exposed participants to clear and ambiguous speech tokens, but crucially, the lexically guided learning task provided disambiguating lexical information whereas the distributional learning task did not. Participants also performed several cognitive tasks to investigate individual differences in working memory, vocabulary, and attention-switching control. RESULTS We found that perceptual learning is maintained in older adults, but that learning may be stronger in contexts where top-down information is available. Receptive vocabulary scores predicted learning across both age groups and in both learning tasks. CONCLUSIONS Implicit learning is maintained with age across different learning conditions but remains stronger when lexically biasing information is available. We find that receptive vocabulary is relevant for learning in both types of learning tasks, suggesting the importance of vocabulary knowledge for adapting to ambiguities in speech.
Affiliation(s)
- Sarah Colby
- School of Communication Sciences & Disorders, McGill University, Montréal, Québec, Canada
- Meghan Clayards
- School of Communication Sciences & Disorders, McGill University, Montréal, Québec, Canada
- Department of Linguistics, McGill University, Montréal, Québec, Canada
- Shari Baum
- School of Communication Sciences & Disorders, McGill University, Montréal, Québec, Canada

16
Drozdova P, van Hout R, Scharenborg O. L2 voice recognition: The role of speaker-, listener-, and stimulus-related factors. J Acoust Soc Am 2017;142:3058. PMID: 29195438; DOI: 10.1121/1.5010169.
Abstract
Previous studies examined various factors influencing voice recognition and learning, with mixed results. The present study investigates the separate and combined contributions of these various speaker-, stimulus-, and listener-related factors to voice recognition. Dutch listeners, with arguably incomplete phonological and lexical knowledge in the target language, English, learned to recognize the voices of four native English speakers, speaking in English, during a four-day training. Training was successful, and listeners' accuracy was influenced by the acoustic characteristics of the speakers and the sound composition of the words used in the training, but not by the lexical frequency of the words, the listeners' lexical knowledge, or their phonological aptitude. Although not conclusive, listeners with a lower working memory capacity seemed to be slower in learning voices than listeners with a higher working memory capacity. The results reveal that speaker-related, listener-related, and stimulus-related factors accumulate in voice recognition, while lexical information turns out not to play a role in successful voice learning and recognition. This implies that voice recognition operates at the prelexical processing level.
Affiliation(s)
- Polina Drozdova: Centre for Language Studies, Radboud University Nijmegen, Erasmusplein 1, P.O. Box 9103, 6500 HD Nijmegen, the Netherlands
- Roeland van Hout: Centre for Language Studies, Radboud University Nijmegen, Erasmusplein 1, P.O. Box 9103, 6500 HD Nijmegen, the Netherlands
- Odette Scharenborg: Centre for Language Studies, Radboud University Nijmegen, Erasmusplein 1, P.O. Box 9103, 6500 HD Nijmegen, the Netherlands
17
Rosemann S, Gießing C, Özyurt J, Carroll R, Puschmann S, Thiel CM. The Contribution of Cognitive Factors to Individual Differences in Understanding Noise-Vocoded Speech in Young and Older Adults. Front Hum Neurosci 2017. [PMID: 28638329 PMCID: PMC5461255 DOI: 10.3389/fnhum.2017.00294]
Abstract
Noise-vocoded speech is commonly used to simulate the sensation after cochlear implantation, as it consists of spectrally degraded speech. High individual variability exists in learning to understand both noise-vocoded speech and speech perceived through a cochlear implant (CI). This variability is partly ascribed to differing cognitive abilities such as working memory, verbal skills, or attention. Although clinically highly relevant, no consensus has yet been reached about which cognitive factors predict the intelligibility of noise-vocoded speech in healthy subjects or in patients after cochlear implantation. We aimed to establish a test battery that can be used to predict speech understanding in patients prior to receiving a CI. Young and old healthy listeners completed a noise-vocoded speech test in addition to cognitive tests tapping verbal memory, working memory, lexicon and retrieval skills, as well as cognitive flexibility and attention. Partial-least-squares analysis revealed that six variables significantly predicted vocoded-speech performance. These were the ability to perceive visually degraded speech tested by the Text Reception Threshold, vocabulary size assessed with the Multiple Choice Word Test, working memory gauged with the Operation Span Test, verbal learning and recall measured with the Verbal Learning and Retention Test, and task-switching abilities tested by the Comprehensive Trail-Making Test. Thus, these cognitive abilities explain individual differences in noise-vocoded speech understanding and should be considered when aiming to predict hearing-aid outcome.
Affiliation(s)
- Stephanie Rosemann: Biological Psychology, Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Carsten Gießing: Biological Psychology, Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Jale Özyurt: Biological Psychology, Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Rebecca Carroll: Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Institute of Dutch Studies, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Sebastian Puschmann: Biological Psychology, Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Christiane M Thiel: Biological Psychology, Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
18
Heyselaar E, Segaert K, Walvoort SJW, Kessels RPC, Hagoort P. The role of nondeclarative memory in the skill for language: Evidence from syntactic priming in patients with amnesia. Neuropsychologia 2017; 101:97-105. [PMID: 28465069 DOI: 10.1016/j.neuropsychologia.2017.04.033]
Abstract
Syntactic priming, the phenomenon in which participants adopt the linguistic behaviour of their partner, is widely used in psycholinguistics to investigate syntactic operations. Although the phenomenon of syntactic priming is well documented, the memory system that supports the retention of this syntactic information long enough to influence future utterances is less widely investigated. We aim to shed light on this issue by assessing patients with Korsakoff's amnesia on an active-passive syntactic priming task and comparing their performance to that of controls matched in age, education, and premorbid intelligence. Patients with Korsakoff's syndrome display deficits in all subdomains of declarative memory, yet their nondeclarative memory remains intact, making them an ideal patient group for determining which memory system supports syntactic priming. In line with the hypothesis that syntactic priming relies on nondeclarative memory, the patient group shows strong priming tendencies (12.6% passive structure repetition). Our healthy control group did not show a priming tendency, presumably due to cognitive interference between declarative and nondeclarative memory. We discuss the results in relation to amnesia, aging, and compensatory mechanisms.
Affiliation(s)
- Evelien Heyselaar: Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Katrien Segaert: Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; School of Psychology, University of Birmingham, Birmingham, United Kingdom
- Serge J W Walvoort: Vincent van Gogh Institute for Psychiatry, Centre of Excellence for Korsakoff and Alcohol-Related Cognitive Disorders, Venray, The Netherlands
- Roy P C Kessels: Vincent van Gogh Institute for Psychiatry, Centre of Excellence for Korsakoff and Alcohol-Related Cognitive Disorders, Venray, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; Department of Medical Psychology, Radboud University Medical Center, Nijmegen, The Netherlands
- Peter Hagoort: Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
19
Thiel CM, Özyurt J, Nogueira W, Puschmann S. Effects of Age on Long Term Memory for Degraded Speech. Front Hum Neurosci 2016; 10:473. [PMID: 27708570 PMCID: PMC5030220 DOI: 10.3389/fnhum.2016.00473]
Abstract
Prior research suggests that acoustical degradation impacts encoding of items into memory, especially in elderly subjects. We here aimed to investigate whether acoustically degraded items that are initially encoded into memory are more prone to forgetting as a function of age. Young and old participants were tested with a vocoded and unvocoded serial list learning task involving immediate and delayed free recall. We found that degraded auditory input increased forgetting of previously encoded items, especially in older participants. We further found that working memory capacity predicted forgetting of degraded information in young participants. In old participants, verbal IQ was the most important predictor for forgetting acoustically degraded information. Our data provide evidence that acoustically degraded information, even if encoded, is especially vulnerable to forgetting in old age.
Affiliation(s)
- Christiane M Thiel: Biological Psychology Lab, Cluster of Excellence "Hearing4all", Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Research Center Neurosensory Science, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Jale Özyurt: Biological Psychology Lab, Cluster of Excellence "Hearing4all", Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Waldo Nogueira: Cluster of Excellence "Hearing4all", Department of Otolaryngology, Medical University Hannover, Hannover, Germany
- Sebastian Puschmann: Biological Psychology Lab, Cluster of Excellence "Hearing4all", Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
20
Banai K, Lavner Y. The effects of exposure and training on the perception of time-compressed speech in native versus nonnative listeners. J Acoust Soc Am 2016; 140:1686. [PMID: 27914374 DOI: 10.1121/1.4962499]
Abstract
The present study investigated the effects of language experience on the perceptual learning induced by either brief exposure to or more intensive training with time-compressed speech. Native (n = 30) and nonnative (n = 30) listeners were each divided into three groups with different experiences with time-compressed speech: a trained group that trained on the semantic verification of time-compressed sentences for three sessions, an exposure group briefly exposed to 20 time-compressed sentences, and a group of naive listeners. Recognition was assessed with three sets of time-compressed sentences intended to evaluate exposure-induced and training-induced learning as well as across-token and across-talker generalization. Learning profiles differed between native and nonnative listeners. Exposure had a weaker effect in nonnative than in native listeners. Furthermore, native and nonnative trained listeners significantly outperformed their untrained counterparts when tested with sentences taken from the training set. However, only trained native listeners outperformed naive native listeners when tested with new sentences. These findings suggest that the perceptual learning of speech is sensitive to linguistic experience. That rapid learning is weaker in nonnative listeners is consistent with their difficulties in real-life conditions. Furthermore, nonnative listeners may require longer periods of practice to achieve native-like learning outcomes.
Affiliation(s)
- Karen Banai: Department of Communication Sciences and Disorders, University of Haifa, Mt. Carmel, Haifa 34988, Israel
- Yizhar Lavner: Department of Computer Science, Tel-Hai College, Tel-Hai 12208, Israel
21
Schwab JF, Schuler KD, Stillman CM, Newport EL, Howard JH, Howard DV. Aging and the statistical learning of grammatical form classes. Psychol Aging 2016; 31:481-7. [PMID: 27294711 PMCID: PMC4980253 DOI: 10.1037/pag0000110]
Abstract
Language learners must place unfamiliar words into categories, often with few explicit indicators about when and how a word can be used grammatically. Reeder, Newport, and Aslin (2013) showed that college students can learn grammatical form classes from an artificial language by relying solely on distributional information (i.e., contextual cues in the input). Here, 2 experiments revealed that healthy older adults also show such statistical learning, though they are poorer than young adults at distinguishing grammatical from ungrammatical strings. This finding expands knowledge of which aspects of learning vary with aging, with potential implications for second language learning in late adulthood.
22
Carroll R, Warzybok A, Kollmeier B, Ruigendijk E. Age-Related Differences in Lexical Access Relate to Speech Recognition in Noise. Front Psychol 2016; 7:990. [PMID: 27458400 PMCID: PMC4930932 DOI: 10.3389/fpsyg.2016.00990]
Abstract
Vocabulary size has been suggested as a useful measure of “verbal abilities” that correlates with speech recognition scores. Knowing more words is linked to better speech recognition. How vocabulary knowledge translates to general speech recognition mechanisms, how these mechanisms relate to offline speech recognition scores, and how they may be modulated by acoustical distortion or age is less clear. Age-related differences in linguistic measures may predict age-related differences in speech recognition in noise performance. We hypothesized that speech recognition performance can be predicted by the efficiency of lexical access, which refers to the speed with which a given word can be searched and accessed relative to the size of the mental lexicon. We tested speech recognition in a clinical German sentence-in-noise test at two signal-to-noise ratios (SNRs), in 22 younger (18–35 years) and 22 older (60–78 years) listeners with normal hearing. We also assessed receptive vocabulary, lexical access time, verbal working memory, and hearing thresholds as measures of individual differences. Age group, SNR level, vocabulary size, and lexical access time were significant predictors of individual speech recognition scores, but working memory and hearing threshold were not. Interestingly, longer accessing times were correlated with better speech recognition scores. Hierarchical regression models for each subset of age group and SNR showed very similar patterns: the combination of vocabulary size and lexical access time contributed most to speech recognition performance; only for the younger group at the better SNR (yielding about 85% correct speech recognition) did vocabulary size alone predict performance. Our data suggest that successful speech recognition in noise is mainly modulated by the efficiency of lexical access. This suggests that older adults’ poorer performance in the speech recognition task may have arisen from reduced efficiency in lexical access; with an average vocabulary size similar to that of younger adults, they were still slower in lexical access.
Affiliation(s)
- Rebecca Carroll: Cluster of Excellence 'Hearing4all', University of Oldenburg, Oldenburg, Germany; Institute of Dutch Studies, University of Oldenburg, Oldenburg, Germany
- Anna Warzybok: Cluster of Excellence 'Hearing4all', University of Oldenburg, Oldenburg, Germany; Medizinische Physik, University of Oldenburg, Oldenburg, Germany
- Birger Kollmeier: Cluster of Excellence 'Hearing4all', University of Oldenburg, Oldenburg, Germany; Medizinische Physik, University of Oldenburg, Oldenburg, Germany
- Esther Ruigendijk: Cluster of Excellence 'Hearing4all', University of Oldenburg, Oldenburg, Germany; Institute of Dutch Studies, University of Oldenburg, Oldenburg, Germany
23
Deschamps I, Hasson U, Tremblay P. The Structural Correlates of Statistical Information Processing during Speech Perception. PLoS One 2016; 11:e0149375. [PMID: 26919234 PMCID: PMC4771024 DOI: 10.1371/journal.pone.0149375]
Abstract
The processing of continuous and complex auditory signals such as speech relies on the ability to use statistical cues (e.g., transitional probabilities). In this study, participants heard short auditory sequences composed either of Italian syllables or bird songs and completed a regularity-rating task. Behaviorally, participants were better at differentiating between levels of regularity in the syllable sequences than in the bird song sequences. Inter-individual differences in sensitivity to regularity for speech stimuli were correlated with variations in surface-based cortical thickness (CT). These correlations were found in several cortical areas, including regions previously associated with statistical structure processing (e.g., bilateral superior temporal sulcus, left precentral sulcus, and inferior frontal gyrus), as well as other regions (e.g., left insula, bilateral superior frontal gyrus/sulcus, and supramarginal gyrus). In all regions, this correlation was positive, suggesting that thicker cortex is related to higher sensitivity to variations in the statistical structure of auditory sequences. Overall, these results suggest that inter-individual differences in CT within a distributed network of cortical regions involved in statistical structure processing, attention, and memory are predictive of the ability to detect statistical structure in auditory speech sequences.
Affiliation(s)
- Isabelle Deschamps: Département de Réadaptation, Université Laval, Québec City, QC, Canada; Centre de Recherche de l’Institut Universitaire en santé mentale de Québec, Québec City, QC, Canada
- Uri Hasson: Center for Mind & Brain Sciences (CIMeC), University of Trento, Mattarello (TN), Italy
- Pascale Tremblay: Département de Réadaptation, Université Laval, Québec City, QC, Canada; Centre de Recherche de l’Institut Universitaire en santé mentale de Québec, Québec City, QC, Canada
24
Banks B, Gowen E, Munro KJ, Adank P. Audiovisual cues benefit recognition of accented speech in noise but not perceptual adaptation. Front Hum Neurosci 2015; 9:422. [PMID: 26283946 PMCID: PMC4522556 DOI: 10.3389/fnhum.2015.00422]
Abstract
Perceptual adaptation allows humans to recognize different varieties of accented speech. We investigated whether perceptual adaptation to accented speech is facilitated if listeners can see a speaker's facial and mouth movements. In Study 1, participants listened to sentences in a novel accent and underwent a period of training with audiovisual or audio-only speech cues, presented in quiet or in background noise. A control group also underwent training with visual-only (speech-reading) cues. We observed no significant difference in perceptual adaptation between any of the groups. To address a number of remaining questions, we carried out a second study using a different accent, speaker and experimental design, in which participants listened to sentences in a non-native (Japanese) accent with audiovisual or audio-only cues, without separate training. Participants' eye gaze was recorded to verify that they looked at the speaker's face during audiovisual trials. Recognition accuracy was significantly better for audiovisual than for audio-only stimuli; however, no statistical difference in perceptual adaptation was observed between the two modalities. Furthermore, Bayesian analysis suggested that the data supported the null hypothesis. Our results suggest that although the availability of visual speech cues may be immediately beneficial for recognition of unfamiliar accented speech in noise, it does not improve perceptual adaptation.
Affiliation(s)
- Briony Banks: School of Psychological Sciences, University of Manchester, Manchester, UK
- Emma Gowen: Faculty of Life Sciences, University of Manchester, Manchester, UK
- Kevin J. Munro: School of Psychological Sciences, University of Manchester, Manchester, UK
- Patti Adank: Speech, Hearing and Phonetic Sciences, University College London, London, UK
25
Adank P, McGettigan C, Kotz SAE. Editorial: Current research and emerging directions on the cognitive and neural organization of speech processing. Front Hum Neurosci 2015; 9:305. [PMID: 26074806 PMCID: PMC4444830 DOI: 10.3389/fnhum.2015.00305]
Affiliation(s)
- Patti Adank: Division of Psychology and Language Sciences, Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Sonja A E Kotz: Max Planck Institute Leipzig, Leipzig, Germany; School of Psychological Sciences, University of Manchester, Manchester, UK