1
Adank P, Wang H, Hepworth T, Borrie SA. Perceptual adaptation to dysarthric speech is modulated by concurrent phonological processing: A dual task study. J Acoust Soc Am 2025; 157:1598-1611. PMID: 40063084; PMCID: PMC11905114; DOI: 10.1121/10.0035883.
Abstract
Listeners can adapt to noise-vocoded speech under divided attention using a dual task design [Wang, Chen, Yan, McGettigan, Rosen, and Adank, Trends Hear. 27, 23312165231192297 (2023)]. Adaptation to noise-vocoded speech, an artificial degradation, was largely unaffected by domain-general (visuomotor) and domain-specific (semantic or phonological) dual tasks. The study by Wang et al. was replicated in an online between-subject experiment with four conditions (N = 192) using 40 dysarthric sentences, a natural, real-world variation of the speech signal, to provide a closer test of the role of attention in adaptation. Participants completed a speech-only task (control) or a dual task designed to recruit domain-specific (phonological or lexical) or domain-general (visual) attentional processes. The results showed initial suppression of adaptation in the phonological condition during the first ten trials, as well as poorer overall speech comprehension compared to the speech-only, lexical, and visuomotor conditions. Yet, as the rate of adaptation across the 40 trials did not differ between the four conditions, it was concluded that perceptual adaptation to dysarthric speech can occur under divided attention, and that adaptation is likely an automatic cognitive process that can proceed under load.
Affiliation(s)
- Patti Adank
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, United Kingdom
- Han Wang
- Clinical Systems Neuroscience Section, Department of Developmental Neurosciences, Great Ormond Street Institute of Child Health, University College London, London, United Kingdom
- Department of Neurosurgery, Great Ormond Street Hospital for Children, National Health Service (NHS) Foundation Trust, London, United Kingdom
- Taylor Hepworth
- Department of Communicative Disorders and Deaf Education, Utah State University, Logan, Utah 84322, USA
- Stephanie A Borrie
- Department of Communicative Disorders and Deaf Education, Utah State University, Logan, Utah 84322, USA
2
Wang H, Chen R, Yan Y, McGettigan C, Rosen S, Adank P. Perceptual Learning of Noise-Vocoded Speech Under Divided Attention. Trends Hear 2023; 27:23312165231192297. PMID: 37547940; PMCID: PMC10408355; DOI: 10.1177/23312165231192297.
Abstract
Speech perception performance for degraded speech can improve with practice or exposure. Such perceptual learning is thought to be reliant on attention, and theoretical accounts like the predictive coding framework suggest a key role for attention in supporting learning. However, it is unclear whether speech perceptual learning requires undivided attention. We evaluated the role of divided attention in speech perceptual learning in two online experiments (N = 336). Experiment 1 tested the reliance of perceptual learning on undivided attention. Participants completed a speech recognition task where they repeated forty noise-vocoded sentences in a between-group design. Participants performed the speech task alone or concurrently with a domain-general visual task (dual task) at one of three difficulty levels. We observed perceptual learning under divided attention for all four groups, moderated by dual-task difficulty. Listeners in the easy and intermediate visual conditions improved as much as the single-task group. Those who completed the most challenging visual task showed faster learning and achieved similar ending performance compared to the single-task group. Experiment 2 tested whether learning relies on domain-specific or domain-general processes. Participants completed a single speech task or performed this task together with a dual task aiming to recruit domain-specific (lexical or phonological) or domain-general (visual) processes. All secondary task conditions produced patterns and amounts of learning comparable to the single speech task. Our results demonstrate that the impact of divided attention on perceptual learning is not strictly dependent on domain-general or domain-specific processes, and that speech perceptual learning persists under divided attention.
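Noise vocoding, the degradation used in this study, replaces fine spectral detail with band-limited noise while preserving each band's temporal envelope. A minimal sketch of the standard procedure follows; the band count, filter order, and edge frequencies here are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_bands=4, lo=100.0, hi=4000.0):
    """Noise-vocode signal x: split into log-spaced bands, extract each
    band's envelope, and use it to modulate noise filtered to the same band."""
    edges = np.geomspace(lo, hi, n_bands + 1)
    noise = np.random.default_rng(0).standard_normal(len(x))
    out = np.zeros(len(x))
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))          # temporal envelope of the band
        carrier = sosfiltfilt(sos, noise)    # noise restricted to the band
        out += env * carrier
    # match overall RMS to the input
    out *= np.sqrt(np.mean(x**2) / np.mean(out**2))
    return out
```

Fewer bands yield less intelligible speech, which is why band count is a common way to titrate difficulty in vocoded-speech studies.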
Affiliation(s)
- Han Wang
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Rongru Chen
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Yu Yan
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Carolyn McGettigan
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Stuart Rosen
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Patti Adank
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
3
Walden PR, Khayumov J. The Use of Auditory-Perceptual Training as a Research Method: A Summary Review. J Voice 2020; 36:322-334. PMID: 32747174; DOI: 10.1016/j.jvoice.2020.06.032.
Abstract
OBJECTIVE The purpose of this descriptive review was to document the current state of training for auditory-perceptual analysis as reported in the voice literature. METHODS A review of the literature was performed. RESULTS Thirty-six articles were included in the review. The theoretical basis of training, specific training methods employed, duration of training, stimuli used to train, vocal qualities trained, and the types of listeners used are reported. CONCLUSION There is wide variation in the training procedures used in research involving auditory-perceptual evaluation of voice quality. To discover how best to train listeners for research and clinical settings, attention to the training methods used in research is necessary. Further, these training methods must be explicitly acknowledged and described to allow for adequate evaluation of research findings, comparison across studies, and determination of the populations to which results might apply. The conceptual framework outlined in this study is a starting point for reviewing voice quality research and for designing future studies in which auditory-perceptual evaluation is taught to listeners.
4
Yellamsetty A, Bidelman GM. Brainstem correlates of concurrent speech identification in adverse listening conditions. Brain Res 2019; 1714:182-192. PMID: 30796895; DOI: 10.1016/j.brainres.2019.02.025.
Abstract
When two voices compete, listeners can segregate and identify concurrent speech sounds using pitch (fundamental frequency, F0) and timbre (harmonic) cues. Speech perception is also hindered by low signal-to-noise ratios (SNRs). How clear and degraded concurrent speech sounds are represented at early, pre-attentive stages of the auditory system is not well understood. To this end, we measured scalp-recorded frequency-following responses (FFRs) from the EEG while human listeners heard two concurrently presented, steady-state (time-invariant) vowels whose F0s differed by zero or four semitones (ST), presented diotically in either clean (no noise) or noise-degraded (+5 dB SNR) conditions. Listeners also performed a speeded double-vowel identification task in which they were required to identify both vowels correctly. Behavioral results showed that speech identification accuracy increased with F0 differences between vowels, and this perceptual F0 benefit was larger for clean compared to noise-degraded (+5 dB SNR) stimuli. Neurophysiological data demonstrated more robust FFR F0 amplitudes for single compared to double vowels and considerably weaker responses in noise. F0 amplitudes showed speech-on-speech masking effects, along with non-linear constructive interference at 0 ST and suppression effects at 4 ST. Correlations showed that FFR F0 amplitudes failed to predict listeners' identification accuracy. In contrast, FFR F1 amplitudes were associated with faster reaction times, although this correlation was limited to noise conditions. The limited number of brain-behavior associations suggests subcortical activity mainly reflects exogenous processing rather than perceptual correlates of concurrent speech perception. Collectively, our results demonstrate that FFRs reflect pre-attentive coding of concurrent auditory stimuli that only weakly predicts the success of identifying concurrent speech.
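The FFR F0 amplitude measure referenced above is commonly quantified as the spectral magnitude of the averaged response in a narrow band around the stimulus fundamental. A sketch under that assumption; the window choice and the 5 Hz analysis bandwidth are illustrative, not taken from the study.

```python
import numpy as np

def ffr_f0_amplitude(ffr, fs, f0, bw=5.0):
    """Mean FFT magnitude of the response in a narrow band around f0 (Hz).

    ffr : 1-D array, averaged frequency-following response
    fs  : sampling rate in Hz
    """
    n = len(ffr)
    spec = np.abs(np.fft.rfft(ffr * np.hanning(n))) / n   # windowed magnitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    band = (freqs >= f0 - bw) & (freqs <= f0 + bw)        # bins near the fundamental
    return spec[band].mean()
```

Comparing this value across clean and noise-degraded conditions is one way such "F0 amplitude" effects are operationalized.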
Affiliation(s)
- Anusha Yellamsetty
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Department of Communication Sciences & Disorders, University of South Florida, USA.
- Gavin M Bidelman
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA.
6
Moberly AC, Houston DM, Castellanos I. Non-auditory neurocognitive skills contribute to speech recognition in adults with cochlear implants. Laryngoscope Investig Otolaryngol 2016; 1:154-162. PMID: 28660253; PMCID: PMC5467524; DOI: 10.1002/lio2.38.
Abstract
OBJECTIVE Unexplained variability in speech recognition outcomes among postlingually deafened adults with cochlear implants (CIs) is an enormous clinical and research barrier to progress. This variability is only partially explained by patient factors (e.g., duration of deafness) and auditory sensitivity (e.g., spectral and temporal resolution). This study sought to determine whether non-auditory neurocognitive skills could explain speech recognition variability exhibited by adult CI users. STUDY DESIGN Thirty postlingually deafened adults with CIs and thirty age-matched normal-hearing (NH) controls were enrolled. METHODS Participants were assessed for recognition of words in sentences in noise and several non-auditory measures of neurocognitive function. These non-auditory tasks assessed global intelligence (problem-solving), controlled fluency, working memory, and inhibition-concentration abilities. RESULTS For CI users, faster response times during a non-auditory task of inhibition-concentration predicted better recognition of sentences in noise; however, similar effects were not evident for NH listeners. CONCLUSIONS Findings from this study suggest that inhibition-concentration skills play a role in speech recognition for CI users, but less so for NH listeners. Further research will be required to elucidate this role and its potential as a novel target for intervention.
Affiliation(s)
- Aaron C Moberly
- Department of Otolaryngology The Ohio State University Wexner Medical Center Columbus Ohio USA
- Derek M Houston
- Department of Otolaryngology The Ohio State University Wexner Medical Center Columbus Ohio USA
- Irina Castellanos
- Department of Otolaryngology The Ohio State University Wexner Medical Center Columbus Ohio USA
7
Banks B, Gowen E, Munro KJ, Adank P. Cognitive predictors of perceptual adaptation to accented speech. J Acoust Soc Am 2015; 137:2015-2024. PMID: 25920852; DOI: 10.1121/1.4916265.
Abstract
The present study investigated the effects of inhibition, vocabulary knowledge, and working memory on perceptual adaptation to accented speech. One hundred young, normal-hearing adults listened to sentences spoken in a constructed, unfamiliar accent presented in speech-shaped background noise. Speech Reception Thresholds (SRTs) corresponding to 50% speech recognition accuracy provided a measurement of adaptation to the accented speech. Stroop, vocabulary knowledge, and working memory tests were performed to measure cognitive ability. Participants adapted to the unfamiliar accent, as revealed by a decrease in SRTs over time. Better inhibition (lower Stroop scores) predicted greater and faster adaptation to the unfamiliar accent. Vocabulary knowledge predicted better recognition of the unfamiliar accent, while working memory had a smaller, indirect effect on speech recognition mediated by vocabulary score. The results support a top-down model for successful adaptation to, and recognition of, accented speech, and add to recent theories that assign executive function a prominent role in effective speech comprehension under adverse listening conditions.
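SRTs at 50% correct, as used above, are typically estimated with an adaptive track that lowers SNR after a correct response and raises it after an error; the 1-down/1-up rule converges on the 50% point. A hedged sketch of the idea; the step size, trial count, and reversal-averaging rule are illustrative, not the study's actual procedure.

```python
import numpy as np

def track_srt(listener, start_snr=8.0, step=2.0, n_trials=20):
    """1-down/1-up adaptive track converging on the SNR for 50% correct.

    listener(snr) -> bool : True if the (simulated) response was correct.
    Returns the mean of the reversal SNRs (discarding the first two)
    as the SRT estimate.
    """
    snr, last_correct, reversals = start_snr, None, []
    for _ in range(n_trials):
        correct = listener(snr)
        if last_correct is not None and correct != last_correct:
            reversals.append(snr)          # direction changed: a reversal
        last_correct = correct
        snr += -step if correct else step  # easier after error, harder after hit
    return np.mean(reversals[2:]) if len(reversals) > 2 else snr
```

Plotting SNR over trials for such a track shows the descending staircase; a falling SRT across blocks is the adaptation signature described above.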
Affiliation(s)
- Briony Banks
- School of Psychological Sciences, University of Manchester, Manchester M13 9PL, United Kingdom
- Emma Gowen
- Faculty of Life Sciences, University of Manchester, Manchester M13 9PL, United Kingdom
- Kevin J Munro
- School of Psychological Sciences, University of Manchester, Manchester M13 9PL, United Kingdom
- Patti Adank
- School of Psychological Sciences, University of Manchester, Manchester M13 9PL, United Kingdom
8
Amitay S, Zhang YX, Jones PR, Moore DR. Perceptual learning: top to bottom. Vision Res 2013; 99:69-77. PMID: 24296314; DOI: 10.1016/j.visres.2013.11.006.
Abstract
Perceptual learning has traditionally been portrayed as a bottom-up phenomenon that improves encoding or decoding of the trained stimulus. Cognitive skills such as attention and memory are thought to drive, guide and modulate learning but are, with notable exceptions, not generally considered to undergo changes themselves as a result of training with simple perceptual tasks. Moreover, shifts in threshold are interpreted as shifts in perceptual sensitivity, with no consideration for non-sensory factors (such as response bias) that may contribute to these changes. Accumulating evidence from our own research and others shows that perceptual learning is a conglomeration of effects, with training-induced changes ranging from the lowest (noise reduction in the phase locking of auditory signals) to the highest (working memory capacity) level of processing, and includes contributions from non-sensory factors that affect decision making even on a "simple" auditory task such as frequency discrimination. We discuss our emerging view of learning as a process that increases the signal-to-noise ratio associated with perceptual tasks by tackling noise sources and inefficiencies that cause performance bottlenecks, and present some implications for training populations other than young, smart, attentive and highly-motivated college students.
Affiliation(s)
- Sygal Amitay
- Medical Research Council Institute of Hearing Research, University Park, Nottingham NG7 2RD, United Kingdom.
- Yu-Xuan Zhang
- Medical Research Council Institute of Hearing Research, University Park, Nottingham NG7 2RD, United Kingdom.
- Pete R Jones
- Medical Research Council Institute of Hearing Research, University Park, Nottingham NG7 2RD, United Kingdom.
- David R Moore
- Medical Research Council Institute of Hearing Research, University Park, Nottingham NG7 2RD, United Kingdom.
9
Molloy K, Moore DR, Sohoglu E, Amitay S. Less is more: latent learning is maximized by shorter training sessions in auditory perceptual learning. PLoS One 2012; 7:e36929. PMID: 22606309; PMCID: PMC3351401; DOI: 10.1371/journal.pone.0036929.
Abstract
Background The time course and outcome of perceptual learning can be affected by the length and distribution of practice, but the training regimen parameters that govern these effects have received little systematic study in the auditory domain. We asked whether there was a minimum requirement on the number of trials within a training session for learning to occur, whether there was a maximum limit beyond which additional trials became ineffective, and whether multiple training sessions provided benefit over a single session. Methodology/Principal Findings We investigated the efficacy of different regimens that varied in the distribution of practice across training sessions and in the overall amount of practice received on a frequency discrimination task. While learning was relatively robust to variations in regimen, the group with the shortest training sessions (∼8 min) had significantly faster learning in early stages of training than groups with longer sessions. In later stages, the group with the longest training sessions (>1 hr) showed slower learning than the other groups, suggesting overtraining. Between-session improvements were inversely correlated with performance; they were largest at the start of training and reduced as training progressed. In a second experiment we found no additional longer-term improvement in performance, retention, or transfer of learning for a group that trained over 4 sessions (∼4 hr in total) relative to a group that trained for a single session (∼1 hr). However, the mechanisms of learning differed; the single-session group continued to improve in the days following cessation of training, whereas the multi-session group showed no further improvement once training had ceased. Conclusions/Significance Shorter training sessions were advantageous because they allowed for more latent, between-session and post-training learning to emerge. These findings suggest that efficient regimens should use short training sessions and optimized spacing between sessions.
Affiliation(s)
- Katharine Molloy
- Medical Research Council Institute of Hearing Research, Nottingham, United Kingdom
- David R. Moore
- Medical Research Council Institute of Hearing Research, Nottingham, United Kingdom
- Ediz Sohoglu
- Medical Research Council Institute of Hearing Research, Nottingham, United Kingdom
- Sygal Amitay
- Medical Research Council Institute of Hearing Research, Nottingham, United Kingdom
10
Banai K, Amitay S. Stimulus uncertainty in auditory perceptual learning. Vision Res 2012; 61:83-88. PMID: 22289646; DOI: 10.1016/j.visres.2012.01.009.
Abstract
Stimulus uncertainty produced by variations in a target stimulus to be detected or discriminated impedes perceptual learning under some, but not all, experimental conditions. To account for those discrepancies, it has been proposed that uncertainty is detrimental to learning when the interleaved stimuli or tasks are similar to each other but not when they are sufficiently distinct, or when it obstructs the downstream search required to gain access to fine-grained sensory information, as suggested by the Reverse Hierarchy Theory (RHT). The focus of the current review is on the effects of uncertainty on the perceptual learning of speech and non-speech auditory signals. Taken together, the findings from the auditory modality suggest that, in addition to the accounts already described, uncertainty may contribute to learning when categorization of stimuli into phonological or acoustic categories is involved. Therefore, it appears that the differences reported between the learning of non-speech and speech-related parameters are not an outcome of inherent differences between those two domains, but rather due to the nature of the tasks often associated with those different stimuli.
11
Abstract
The relative contributions of bottom-up versus top-down sensory inputs to auditory learning are not well established. In our experiment, listeners were instructed to perform either a frequency discrimination (FD) task ("FD-train group") or an intensity discrimination (ID) task ("ID-train group") during training on a set of physically identical tones that were impossible to discriminate consistently above chance, allowing us to vary top-down attention whilst keeping bottom-up inputs fixed. A third, control group did not receive any training. Only the FD-train group improved on a FD probe following training, whereas all groups improved on ID following training. However, only the ID-train group also showed changes in performance accuracy as a function of interval with training on the ID task. These findings suggest that top-down, dimension-specific attention can direct auditory learning, even when this learning is not reflected in conventional performance measures of threshold change.
12
Carcagno S, Plack CJ. Subcortical plasticity following perceptual learning in a pitch discrimination task. J Assoc Res Otolaryngol 2010; 12:89-100. PMID: 20878201; DOI: 10.1007/s10162-010-0236-1.
Abstract
Practice can lead to dramatic improvements in the discrimination of auditory stimuli. In this study, we investigated changes of the frequency-following response (FFR), a subcortical component of the auditory evoked potentials, after a period of pitch discrimination training. Twenty-seven adult listeners were trained for 10 h on a pitch discrimination task using one of three different complex tone stimuli. One had a static pitch contour, one had a rising pitch contour, and one had a falling pitch contour. Behavioral measures of pitch discrimination and FFRs for all the stimuli were measured before and after the training phase for these participants, as well as for an untrained control group (n = 12). Trained participants showed significant improvements in pitch discrimination compared to the control group for all three trained stimuli. These improvements were partly specific for stimuli with the same pitch modulation (dynamic vs. static) and with the same pitch trajectory (rising vs. falling) as the trained stimulus. Also, the robustness of FFR neural phase locking to the sound envelope increased significantly more in trained participants compared to the control group for the static and rising contour, but not for the falling contour. Changes in FFR strength were partly specific for stimuli with the same pitch modulation (dynamic vs. static) of the trained stimulus. Changes in FFR strength, however, were not specific for stimuli with the same pitch trajectory (rising vs. falling) as the trained stimulus. These findings indicate that even relatively low-level processes in the mature auditory system are subject to experience-related change.
Affiliation(s)
- Samuele Carcagno
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF, UK.
13
Song JH, Skoe E, Banai K, Kraus N. Perception of speech in noise: neural correlates. J Cogn Neurosci 2010; 23:2268-2279. PMID: 20681749; DOI: 10.1162/jocn.2010.21556.
Abstract
The presence of irrelevant auditory information (other talkers, environmental noises) presents a major challenge to listening to speech. The fundamental frequency (F0) of the target speaker is thought to provide an important cue for the extraction of the speaker's voice from background noise, but little is known about the relationship between speech-in-noise (SIN) perceptual ability and neural encoding of the F0. Motivated by recent findings that music and language experience enhance brainstem representation of sound, we examined the hypothesis that brainstem encoding of the F0 is diminished to a greater degree by background noise in people with poorer perceptual abilities in noise. To this end, we measured speech-evoked auditory brainstem responses to /da/ in quiet and two multitalker babble conditions (two-talker and six-talker) in native English-speaking young adults who ranged in their ability to perceive and recall SIN. Listeners who were poorer performers on a standardized SIN measure demonstrated greater susceptibility to the degradative effects of noise on the neural encoding of the F0. Particularly diminished was their phase-locked activity to the fundamental frequency in the portion of the syllable known to be most vulnerable to perceptual disruption (i.e., the formant transition period). Our findings suggest that the subcortical representation of the F0 in noise contributes to the perception of speech in noisy conditions.
Affiliation(s)
- Judy H Song
- Auditory Neuroscience Laboratory, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA
14
Adank P, Janse E. Perceptual learning of time-compressed and natural fast speech. J Acoust Soc Am 2009; 126:2649-2659. PMID: 19894842; DOI: 10.1121/1.3216914.
Abstract
Speakers vary their speech rate considerably during a conversation, and listeners are able to quickly adapt to these variations in speech rate. Adaptation to fast speech rates is usually measured using artificially time-compressed speech. This study examined adaptation to two types of fast speech: artificially time-compressed speech and natural fast speech. Listeners performed a speeded sentence verification task on three series of sentences: normal-speed sentences, time-compressed sentences, and natural fast sentences. Listeners were divided into two groups to evaluate the possibility of transfer of learning between the time-compressed and natural fast conditions. The first group verified the natural fast before the time-compressed sentences, while the second verified the time-compressed before the natural fast sentences. First, the results showed transfer of learning when the time-compressed sentences preceded the natural fast sentences, but not when natural fast sentences preceded the time-compressed sentences. Second, listeners adapted to the natural fast sentences, but performance for this type of fast speech did not improve to the level of the time-compressed sentences. The results are discussed in the framework of theories on perceptual learning.
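Artificial time compression of the kind used in such studies is usually produced with overlap-add style algorithms that shorten duration without shifting pitch. A naive overlap-add sketch of the idea; real implementations such as WSOLA or PSOLA additionally align frames to avoid phase artifacts, and the frame and hop sizes here are illustrative.

```python
import numpy as np

def time_compress(x, rate=1.5, frame=1024, hop=256):
    """Naive overlap-add time compression by `rate`.

    Windows are read every rate*hop samples from the input but
    overlap-added every hop samples in the output, shortening duration
    while leaving each frame's spectral content (and hence pitch) intact.
    """
    win = np.hanning(frame)
    n_frames = int((len(x) - frame) / (rate * hop))
    out = np.zeros(n_frames * hop + frame)
    norm = np.zeros_like(out)
    for i in range(n_frames):
        a = int(i * rate * hop)                      # analysis position
        out[i * hop : i * hop + frame] += x[a : a + frame] * win
        norm[i * hop : i * hop + frame] += win       # window-sum for normalization
    return out / np.maximum(norm, 1e-8)
```

Simple resampling would also shorten the signal but would raise the pitch, which is why overlap-add methods are preferred for fast-speech stimuli.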
Affiliation(s)
- Patti Adank
- School of Psychological Sciences, University of Manchester, Manchester, M13 9PL, UK.