1. McMurray B, Smith FX, Huffman M, Rooff K, Muegge JB, Jeppsen C, Kutlu E, Colby S. Underlying dimensions of real-time word recognition in cochlear implant users. Nat Commun 2024; 15:7382. PMID: 39209837; PMCID: PMC11362525; DOI: 10.1038/s41467-024-51514-3.
Abstract
Word recognition is a gateway to language, linking sound to meaning. Prior work has characterized its cognitive mechanisms as a form of competition between similar-sounding words. However, it has not identified the dimensions along which this competition varies across people. We sought to identify these dimensions in a population of cochlear implant users with heterogeneous backgrounds and audiological profiles, and in a lifespan sample of people without hearing loss. Our study characterizes the process of lexical competition using the Visual World Paradigm. A principal component analysis reveals that people's ability to resolve lexical competition varies along three dimensions that mirror prior small-scale studies. These dimensions capture the degree to which lexical access is delayed ("Wait-and-See"), the degree to which competition fully resolves ("Sustained-Activation"), and the overall rate of activation. Each dimension is predicted by different auditory skills and demographic factors (onset of deafness, age, cochlear implant experience). Moreover, each dimension predicts outcomes (speech perception in quiet and in noise, subjective listening success) over and above auditory fidelity. Higher degrees of Wait-and-See and Sustained-Activation predict poorer outcomes. These results suggest that the mechanisms of word recognition vary along a few underlying dimensions, which help explain variable performance among listeners encountering auditory challenge.
Affiliation(s)
- Bob McMurray
- Dept. of Psychological & Brain Sciences, University of Iowa, Iowa City, IA, USA.
- Dept. of Communication Sciences & Disorders, University of Iowa, Iowa City, IA, USA.
- Dept. of Otolaryngology-Head and Neck Surgery, University of Iowa, Iowa City, IA, USA.
- Dept. of Linguistics, University of Iowa, Iowa City, IA, USA.
- Francis X Smith
- Dept. of Psychological & Brain Sciences, University of Iowa, Iowa City, IA, USA
- Dept. of Communication Sciences & Disorders, University of Iowa, Iowa City, IA, USA
- Marissa Huffman
- Dept. of Otolaryngology-Head and Neck Surgery, University of Iowa, Iowa City, IA, USA
- Kristin Rooff
- Dept. of Otolaryngology-Head and Neck Surgery, University of Iowa, Iowa City, IA, USA
- John B Muegge
- Dept. of Psychological & Brain Sciences, University of Iowa, Iowa City, IA, USA
- Charlotte Jeppsen
- Dept. of Psychological & Brain Sciences, University of Iowa, Iowa City, IA, USA
- Ethan Kutlu
- Dept. of Psychological & Brain Sciences, University of Iowa, Iowa City, IA, USA
- Dept. of Linguistics, University of Iowa, Iowa City, IA, USA
- Sarah Colby
- Dept. of Psychological & Brain Sciences, University of Iowa, Iowa City, IA, USA
- Dept. of Otolaryngology-Head and Neck Surgery, University of Iowa, Iowa City, IA, USA
2. Zhao M, Wang J. Consistent social information perceived in animated backgrounds improves ensemble perception of facial expressions. Perception 2024; 53:563-578. PMID: 38725355; DOI: 10.1177/03010066241253073.
Abstract
Observers can rapidly extract the mean emotion from a set of faces with remarkable precision, a capability known as ensemble coding. Previous studies have demonstrated that matched physical backgrounds improve the precision of ongoing ensemble tasks. However, it remains unknown whether this facilitation effect still occurs when matched social information is perceived from the backgrounds. In two experiments, participants decided whether the test face in the retrieving phase appeared more disgusted or neutral than the mean emotion of the face set in the encoding phase. Both phases were paired with task-irrelevant animated backgrounds, which included either a forward movement trajectory carrying "cooperatively chasing" information or a backward movement trajectory conveying no such chasing information. The backgrounds in the encoding and retrieving phases were either mismatched (i.e., forward and backward replays of the same trajectory) or matched (i.e., two identical forward movement trajectories in Experiment 1, or two different forward movement trajectories in Experiment 2). Participants in both experiments showed higher ensemble precision and better discrimination sensitivity when backgrounds matched. The findings suggest that consistent social information perceived from memory-related context exerts a context-matching facilitation effect on ensemble coding and, more importantly, that this effect is independent of consistent physical information.
Affiliation(s)
- Mengfei Zhao
- School of Psychology, Zhejiang Normal University, Jinhua, PR China
- Jun Wang
- School of Psychology, Zhejiang Normal University, Jinhua, PR China
- Zhejiang Philosophy and Social Science Laboratory for the Mental Health and Crisis Intervention of Children and Adolescents, Zhejiang Normal University, Jinhua, PR China
3. Hendrickson K, Bay K, Combiths P, Foody M, Walker E. Speech Sound Categories Affect Lexical Competition: Implications for Analytic Auditory Training. J Speech Lang Hear Res 2024; 67:1281-1289. PMID: 38517230; PMCID: PMC11005953; DOI: 10.1044/2024_jslhr-23-00307.
Abstract
OBJECTIVES We provide a novel application of psycholinguistic theories and methods to the field of auditory training, offering preliminary data on which minimal pair contrasts are more difficult for listeners with typical hearing to distinguish in real time. DESIGN In an eye-tracking task, participants heard a word and selected the corresponding image from a display of four: the target word, two unrelated words, and a word from one of four contrast categories (i.e., voiced-initial [e.g., peach-beach], voiced-final [e.g., back-bag], manner-initial [e.g., talk-sock], and manner-final [e.g., bat-bass]). RESULTS Fixations were monitored to measure how strongly words compete for recognition depending on the contrast type (voicing, manner) and location (word-initial or word-final). Manner contrasts competed more strongly for recognition than voicing contrasts, and contrasts in word-final position were harder to distinguish than those in word-initial position. CONCLUSION These results are an important initial step toward creating an evidence-based hierarchy for auditory training for individuals who use cochlear implants.
Affiliation(s)
- Kristi Hendrickson
- Department of Communication Sciences and Disorders, The University of Iowa, Iowa City
- Katlyn Bay
- Department of Communication Sciences and Disorders, The University of Iowa, Iowa City
- Philip Combiths
- Department of Communication Sciences and Disorders, The University of Iowa, Iowa City
- Meaghan Foody
- Department of Communication Sciences and Disorders, The University of Iowa, Iowa City
- Elizabeth Walker
- Department of Communication Sciences and Disorders, The University of Iowa, Iowa City
4. Zheng Z, Wang J. Interpersonal prior information informs ensemble coding through the co-representation process. Psychon Bull Rev 2024; 31:886-896. PMID: 37783900; DOI: 10.3758/s13423-023-02390-3.
Abstract
Humans have the ability to rapidly extract summary statistics from object groupings through a specific capability known as ensemble coding. Previous literature has reported that this ability can become biased by prior perceptual experiences at the individual level. However, it remains unknown whether interpersonal prior information could also bias ensemble perception through a co-representation process. Experiment 1 found that participants' summary estimations were biased toward their co-actor's stimuli. Experiment 2 confirmed a causal relationship between the bias effect and the co-representation process by showing a reduction in biased estimation after pairing participants with an out-group partner. These findings extend the sources of prior information exploited by humans during perceptual averaging from individual-level information (i.e., self-tasks) to interpersonal-level information (i.e., a co-actor's tasks). More specifically, interpersonal prior information is shown to act in a top-down and implicit manner, biasing ensemble perception.
Affiliation(s)
- Zheng Zheng
- School of Psychology, Zhejiang Normal University, Jinhua City, 321004, People's Republic of China
- Zhejiang Philosophy and Social Science Laboratory for the Mental Health and Crisis Intervention of Children and Adolescents, Zhejiang Normal University, Jinhua, China
- Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, Zhejiang, 321004, China
- Jun Wang
- School of Psychology, Zhejiang Normal University, Jinhua City, 321004, People's Republic of China
- Zhejiang Philosophy and Social Science Laboratory for the Mental Health and Crisis Intervention of Children and Adolescents, Zhejiang Normal University, Jinhua, China
5. Colby SE, McMurray B. Efficiency of spoken word recognition slows across the adult lifespan. Cognition 2023; 240:105588. PMID: 37586157; PMCID: PMC10530619; DOI: 10.1016/j.cognition.2023.105588.
Abstract
Spoken word recognition is a critical hub during language processing, linking hearing and perception to meaning and syntax. Words must be recognized quickly and efficiently as speech unfolds to be successfully integrated into conversation. This makes word recognition a computationally challenging process even for young, normal-hearing adults. Older adults often experience declines in hearing and cognition, which could be linked through age-related declines in the cognitive processes specific to word recognition. However, it is unclear whether changes in word recognition across the lifespan can be accounted for by hearing or domain-general cognition. Participants (N = 107) responded to spoken words in a Visual World Paradigm task while their eyes were tracked to assess the real-time dynamics of word recognition. We examined several indices of word recognition from early adolescence through older adulthood (ages 11-78). The timing and proportion of eye fixations to target and competitor images reveal that spoken word recognition became more efficient through age 25 and began to slow in middle age, accompanied by declines in the ability to resolve competition (e.g., suppressing sandwich to recognize sandal). There was a unique effect of age even after accounting for differences in inhibitory control, processing speed, and hearing thresholds. This suggests a limited age range in which listeners are peak performers.
Affiliation(s)
- Sarah E Colby
- Department of Psychological and Brain Sciences, University of Iowa, Psychological and Brain Sciences Building, Iowa City, IA, 52242, USA; Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA.
- Bob McMurray
- Department of Psychological and Brain Sciences, University of Iowa, Psychological and Brain Sciences Building, Iowa City, IA, 52242, USA; Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA; Department of Communication Sciences and Disorders, University of Iowa, Wendell Johnson Speech and Hearing Center, Iowa City, IA, 52242, USA; Department of Linguistics, University of Iowa, Phillips Hall, Iowa City, IA 52242, USA
6. Klein KE, Walker EA, McMurray B. Delayed Lexical Access and Cascading Effects on Spreading Semantic Activation During Spoken Word Recognition in Children With Hearing Aids and Cochlear Implants: Evidence From Eye-Tracking. Ear Hear 2023; 44:338-357. PMID: 36253909; PMCID: PMC9957808; DOI: 10.1097/aud.0000000000001286.
Abstract
OBJECTIVE The objective of this study was to characterize the dynamics of real-time lexical access, including lexical competition among phonologically similar words, and spreading semantic activation in school-age children with hearing aids (HAs) and children with cochlear implants (CIs). We hypothesized that developing spoken language via degraded auditory input would lead children with HAs or CIs to adapt their approach to spoken word recognition, especially by slowing down lexical access. DESIGN Participants were children ages 9 to 12 years with normal hearing (NH), HAs, or CIs. Participants completed a Visual World Paradigm task in which they heard a spoken word and selected the matching picture from four options. Competitor items were either phonologically similar, semantically similar, or unrelated to the target word. As the target word unfolded, children's fixations to the target word, cohort competitor, rhyme competitor, semantically related item, and unrelated item were recorded as indices of ongoing lexical access and spreading semantic activation. RESULTS Children with HAs and children with CIs showed slower fixations to the target, reduced fixations to the cohort competitor, and increased fixations to the rhyme competitor, relative to children with NH. This wait-and-see profile was more pronounced in the children with CIs than the children with HAs. Children with HAs and children with CIs also showed delayed fixations to the semantically related item, although this delay was attributable to their delay in activating words in general, not to a distinct semantic source. CONCLUSIONS Children with HAs and children with CIs showed qualitatively similar patterns of real-time spoken word recognition. Findings suggest that developing spoken language via degraded auditory input causes long-term cognitive adaptations to how listeners recognize spoken words, regardless of the type of hearing device used. Delayed lexical access directly led to delays in spreading semantic activation in children with HAs and CIs. This delay in semantic processing may impact these children's ability to understand connected speech in everyday life.
Affiliation(s)
- Kelsey E Klein
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, Tennessee, USA
- Elizabeth A Walker
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, USA
- Bob McMurray
- Department of Psychological and Brain Sciences, Department of Communication Sciences and Disorders, and Department of Otolaryngology, University of Iowa, Iowa City, Iowa, USA
7. Context consistency improves ensemble perception of facial expressions. Psychon Bull Rev 2023; 30:280-290. PMID: 35882720; DOI: 10.3758/s13423-022-02154-5.
Abstract
Humans have developed the capacity to rapidly extract summary statistics from the facial expressions of a crowd, such as computing the average facial expression. Although dual-task paradigms involving memory and ensemble tasks have recently found that this ensemble coding ability is biased by visual working memory, few studies have examined whether the context-dependent nature of memory itself can influence the perceptual averaging process. In two experiments, participants made forced-choice judgments about mean facial expressions that were paired with task-irrelevant background images, and the background images either matched or mismatched across encoding and response phases. When the backgrounds matched, it was at either the perceptual level (uniformly oriented lines with the same orientation in encoding and response phases, in Experiment 1), or at the summary statistics level (uniformly oriented lines in the response phase that had the same orientation as the mean of randomly oriented lines that were seen in the encoding phase, in Experiment 2). Participants in Experiment 1 showed a higher ensemble precision and better discrimination sensitivity when the backgrounds matched than when they mismatched, which is consistent with the kind of robust contextual memory effect that has been seen in prior research. We further demonstrated that the context-matching facilitation effect occurred at both the perceptual level (Experiment 1) and at the summary statistics level (Experiment 2). These results demonstrate that the effects of visual working memory on perceptual averaging are obligatory, and they highlight the importance of memory-related context dependency in perceptual averaging.
8. Kim J, Meyer L, Hendrickson K. The Role of Orthography and Phonology in Written Word Recognition: Evidence From Eye-Tracking in the Visual World Paradigm. J Speech Lang Hear Res 2022; 65:4812-4820. PMID: 36306510; DOI: 10.1044/2022_jslhr-22-00231.
Abstract
PURPOSE There is a long-standing debate about how written words are recognized. Central to this debate is the role of phonology. The objective of this study is to contribute to our collective understanding regarding the role of phonology in written word recognition. METHOD A total of 30 monolingual adults were tested using a novel written word version of the visual world paradigm (VWP). We compared activation of phonological anadromes (words that are matched for sounds but not letters, e.g., JAB-BADGE) and orthographic anadromes (words that are matched for letters but not sounds, e.g., LEG-GEL) to determine the relative role of phonology and orthography in familiar single-word reading. RESULTS We found that activation for phonological anadromes is earlier, more robust, and sustained longer than orthographic anadromes. CONCLUSIONS These results are most consistent with strong phonological theories of single-word reading that posit an early and robust role of phonology. This study has broad implications for larger debates regarding reading instruction.
Affiliation(s)
- Jina Kim
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Lindsey Meyer
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Kristi Hendrickson
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City
9. McMurray B, Sarrett ME, Chiu S, Black AK, Wang A, Canale R, Aslin RN. Decoding the temporal dynamics of spoken word and nonword processing from EEG. Neuroimage 2022; 260:119457. PMID: 35842096; PMCID: PMC10875705; DOI: 10.1016/j.neuroimage.2022.119457.
Abstract
The efficiency of spoken word recognition is essential for real-time communication. There is consensus that this efficiency relies on an implicit process of activating multiple word candidates that compete for recognition as the acoustic signal unfolds in real time. However, few methods capture the neural basis of this dynamic competition on a msec-by-msec basis. This is crucial for understanding the neuroscience of language, and for understanding hearing, language, and cognitive disorders in people for whom current behavioral methods are not suitable. We applied machine-learning techniques to standard EEG signals to decode which word was heard on each trial and analyzed the patterns of confusion over time. Results mirrored psycholinguistic findings: Early on, the decoder was equally likely to report the target (e.g., baggage) or a similar-sounding competitor (badger), but by around 500 msec, competitors were suppressed. Follow-up analyses show that this result is robust across EEG systems (gel and saline), with fewer channels, and with fewer trials. Results are robust within individuals and show high reliability. This suggests a powerful and simple paradigm that can assess the neural dynamics of speech decoding, with potential applications for understanding lexical development in a variety of clinical disorders.
Affiliation(s)
- Bob McMurray
- Dept. of Psychological and Brain Sciences, Dept. of Communication Sciences and Disorders, Dept. of Linguistics and Dept. of Otolaryngology, University of Iowa.
- McCall E Sarrett
- Interdisciplinary Graduate Program in Neuroscience, University of Iowa
- Samantha Chiu
- Dept. of Psychological and Brain Sciences, University of Iowa
- Alexis K Black
- School of Audiology and Speech Sciences, University of British Columbia; Haskins Laboratories
- Alice Wang
- Dept. of Psychology, University of Oregon; Haskins Laboratories
- Rebecca Canale
- Dept. of Psychological Sciences, University of Connecticut; Haskins Laboratories
- Richard N Aslin
- Haskins Laboratories; Department of Psychology and Child Study Center, Yale University; Department of Psychology, University of Connecticut
10. Smith FX, McMurray B. Lexical Access Changes Based on Listener Needs: Real-Time Word Recognition in Continuous Speech in Cochlear Implant Users. Ear Hear 2022; 43:1487-1501. PMID: 35067570; PMCID: PMC9300769; DOI: 10.1097/aud.0000000000001203.
Abstract
OBJECTIVES A key challenge in word recognition is the temporary ambiguity created by the fact that speech unfolds over time. In normal-hearing (NH) listeners, this temporary ambiguity is resolved through incremental processing and competition among lexical candidates. Post-lingually deafened cochlear implant (CI) users show similar incremental processing and competition, but with slight delays. However, even brief delays could lead to drastic changes when compounded across multiple words in a phrase. This study asks whether words presented in non-informative continuous speech (a carrier phrase) are processed differently than in isolation, and whether NH listeners and CI users exhibit different effects of a carrier phrase. DESIGN In a Visual World Paradigm experiment, listeners heard words either in isolation or in non-informative carrier phrases (e.g., "click on the…"). Listeners selected the picture corresponding to the target word from among four items: the target word (e.g., mustard), a cohort competitor (e.g., mustache), a rhyme competitor (e.g., custard), and an unrelated item (e.g., penguin). Eye movements were tracked as an index of the relative activation of each lexical candidate as competition unfolds over the course of word recognition. Participants included 21 post-lingually deafened CI users and 21 NH controls. A replication experiment presented in the Supplemental Digital Content, http://links.lww.com/EANDH/A999, included an additional 22 post-lingually deafened CI users and 18 NH controls. RESULTS Both CI users and NH controls were accurate at recognizing words both in continuous speech and in isolation. The time course of lexical activation (indexed by the fixations) differed substantially between groups. CI users were delayed in fixating the target relative to NH controls. Additionally, CI users showed less competition from cohorts than NH controls (even though previous studies have often reported increased competition). However, CI users took longer to suppress the cohort and suppressed it less fully than the NH controls. For both CI users and NH controls, embedding words in carrier phrases led to more immediacy in lexical access, as observed in increased cohort competition relative to when words were presented in isolation. However, CI users were not differentially affected by the carriers. CONCLUSIONS Unlike in prior work, CI users appeared to exhibit a "wait-and-see" profile, in which lexical access is delayed, minimizing early competition. However, CI users simultaneously sustained competitor activation late in the trial, possibly to preserve flexibility. This hybrid profile has not been observed previously. When target words are heard in continuous speech, both CI users and NH controls more heavily weight early information. However, CI users (but not NH listeners) also commit less fully to the target, potentially keeping options open if they need to recover from a misperception. This mix of patterns reflects a lexical system that is extremely flexible and adapts to fit the needs of the listener.
Affiliation(s)
- Bob McMurray
- Dept. of Psychological and Brain Sciences, University of Iowa
- Dept. of Otolaryngology, University of Iowa
11. McMurray B, Apfelbaum KS, Tomblin JB. The Slow Development of Real-Time Processing: Spoken-Word Recognition as a Crucible for New Thinking About Language Acquisition and Language Disorders. Curr Dir Psychol Sci 2022; 31:305-315. PMID: 37663784; PMCID: PMC10473872; DOI: 10.1177/09637214221078325.
Abstract
Words are fundamental to language, linking sound, articulation, and spelling to meaning and syntax, and lexical deficits are core to communicative disorders. Work in language acquisition commonly focuses on how lexical knowledge (knowledge of words' sound patterns and meanings) is acquired. But lexical knowledge is insufficient to account for skilled language use. Sophisticated real-time processes must decode the sound pattern of words and interpret them appropriately. We review work that bridges this gap by using sensitive real-time measures (eye tracking in the visual world paradigm) of school-age children's processing of highly familiar words. This work reveals that the development of word recognition skills can be characterized by changes in the rate at which decisions unfold in the lexical system (the activation rate). Moreover, contrary to the standard view that these real-time skills largely develop during infancy and toddlerhood, they develop slowly, at least through adolescence. In contrast, language disorders can be linked to differences in the ultimate degree to which competing interpretations are suppressed (competition resolution), and these differences can be mechanistically linked to deficits in inhibition. These findings have implications for real-world problems such as reading difficulties and second-language acquisition. They suggest that developing accurate, flexible, and efficient processing is just as important a developmental goal as is acquiring language knowledge.
Affiliation(s)
- Bob McMurray
- Department of Psychological and Brain Sciences
- Department of Communication Sciences and Disorders
- Department of Linguistics
- DeLTA Center, University of Iowa
- Keith S. Apfelbaum
- Department of Psychological and Brain Sciences
- DeLTA Center, University of Iowa
- J. Bruce Tomblin
- Department of Communication Sciences and Disorders
- DeLTA Center, University of Iowa
12. Wang Y, Zang X, Zhang H, Shen W. The Processing of the Second Syllable in Recognizing Chinese Disyllabic Spoken Words: Evidence From Eye Tracking. Front Psychol 2021; 12:681337. PMID: 34777085; PMCID: PMC8580174; DOI: 10.3389/fpsyg.2021.681337.
Abstract
In the current study, two experiments were conducted to investigate the processing of the second syllable (considered the rhyme at the word level) during Chinese disyllabic spoken word recognition using a printed-word paradigm. In Experiment 1, participants heard a spoken target word and were simultaneously presented with a visual display of four printed words: a target word, a phonological competitor, and two unrelated distractors. The phonological competitors were manipulated to share with targets either full phonemic overlap of the second syllable (the syllabic overlap condition; e.g., xiao3zhuan4, "calligraphy" vs. gong1zhuan4, "revolution") or only the initial phonemic overlap of the second syllable (the sub-syllabic overlap condition; e.g., yuan2zhu4, "cylinder" vs. gong1zhuan4, "revolution"). Participants were asked to select the target words and their eye movements were simultaneously recorded. The results did not show any phonological competition effect in either the syllabic overlap condition or the sub-syllabic overlap condition. In Experiment 2, to maximize the likelihood of observing the phonological competition effect, a target-absent version of the printed-word paradigm was adopted, in which target words were removed from the visual display. The results of Experiment 2 showed significant phonological competition effects in both conditions, i.e., more fixations were made to the phonological competitors than to the distractors. Moreover, the phonological competition effect was found to be larger in the syllabic overlap condition than in the sub-syllabic overlap condition. These findings shed light on the effect of second-syllable competition at the word level during spoken word recognition and, more importantly, showed that the initial phonemes of the second syllable at the syllabic level are also accessed during Chinese disyllabic spoken word recognition.
Affiliation(s)
- Youxi Wang
- Center for Cognition and Brain Disorders, The Affiliated Hospital of Hangzhou Normal University, Hangzhou, China; Institute of Psychological Science, Hangzhou Normal University, Hangzhou, China; Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, China
- Xuelian Zang
- Center for Cognition and Brain Disorders, The Affiliated Hospital of Hangzhou Normal University, Hangzhou, China; Institute of Psychological Science, Hangzhou Normal University, Hangzhou, China; Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, China
- Hua Zhang
- Institute of Psychological Science, Hangzhou Normal University, Hangzhou, China
- Wei Shen
- Center for Cognition and Brain Disorders, The Affiliated Hospital of Hangzhou Normal University, Hangzhou, China; Institute of Psychological Science, Hangzhou Normal University, Hangzhou, China; Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, China
13
Hendrickson K, Apfelbaum K, Goodwin C, Blomquist C, Klein K, McMurray B. The profile of real-time competition in spoken and written word recognition: More similar than different. Q J Exp Psychol (Hove) 2021; 75:1653-1673. [PMID: 34666573] [DOI: 10.1177/17470218211056842] [Indexed: 11/16/2022]
Abstract
Word recognition occurs across two sensory modalities: auditory (spoken words) and visual (written words). While each faces different challenges, they are often described in similar terms as a competition process by which multiple lexical candidates are activated and compete for recognition. While there is a general consensus regarding the types of words that compete during spoken word recognition, there is less consensus for written word recognition. The present study develops a novel version of the Visual World Paradigm (VWP) to examine written word recognition and uses this to assess the nature of the competitor set during word recognition in both modalities using the same experimental design. For both spoken and written words, we found evidence for activation of onset competitors (cohorts, e.g., cat, cap) and words that contain the same phonemes or letters in reverse order (anadromes, e.g., cat, tack). We found no evidence of activation for rhymes (e.g., cat, hat). The results across modalities were quite similar, with the exception that for spoken words, cohorts were more active than anadromes, whereas for written words activation was similar. These results suggest a common characterisation of lexical similarity across spoken and written words: temporal or spatial order is coarsely coded, and onsets may receive more weight in both systems. However, for spoken words, temporary ambiguity during the moment of processing gives cohorts an additional boost during real-time recognition.
Affiliation(s)
- Kristi Hendrickson
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA
- Keith Apfelbaum
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, USA
- Claire Goodwin
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, USA; University of Iowa Health Network Rehabilitation Hospital, Coralville, IA, USA
- Christina Blomquist
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, USA
- Kelsey Klein
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA
- Bob McMurray
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA; Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, USA; Department of Otolaryngology, University of Iowa, Iowa City, IA, USA
14
Abstract
OBJECTIVES Whispered speech offers a unique set of challenges to speech perception and word recognition. The goals of the present study were twofold: first, to determine how listeners recognize whispered speech; second, to inform major theories of spoken word recognition by considering how recognition changes when major cues to phoneme identity are reduced or largely absent compared with normal voiced speech. DESIGN Using eye tracking in the Visual World Paradigm, we examined how listeners recognize whispered speech. After hearing a target word (normal or whispered), participants selected the corresponding image from a display of four: a target (e.g., money), a word that shares sounds with the target at the beginning (cohort competitor, e.g., mother), a word that shares sounds with the target at the end (rhyme competitor, e.g., honey), and a phonologically unrelated word (e.g., whistle). Eye movements to each object were monitored to measure (1) how fast listeners process whispered speech, and (2) how strongly they consider lexical competitors (cohorts and rhymes) as the speech signal unfolds. RESULTS Listeners were slower to recognize whispered words: compared with normal speech, they were slower to click the target image, slower to fixate the target, and fixated the target less overall. Further, we found clear evidence that the dynamics of lexical competition are altered during whispered speech recognition. Relative to normal speech, words that overlapped with the target at the beginning (cohorts) showed slower, reduced, and delayed activation, whereas words that overlapped with the target at the end (rhymes) exhibited faster, more robust, and longer-lasting activation. CONCLUSION When listeners are confronted with whispered speech, they engage in a "wait-and-see" approach: they delay lexical access, and by the time they begin to consider what word they are hearing, the beginning of the word has largely come and gone, so activation for cohorts is reduced. However, delayed lexical access actually increases consideration of rhyme competitors; the delay pushes lexical activation to a point later in processing, and the recognition system puts more weight on the word-final overlap between the target and the rhyme.
Collapse
15
Hendrickson K, Oleson J, Walker E. School-Age Children Adapt the Dynamics of Lexical Competition in Suboptimal Listening Conditions. Child Dev 2021; 92:638-649. [PMID: 33476043] [DOI: 10.1111/cdev.13530] [Indexed: 11/29/2022]
Abstract
Although the ability to understand speech in adverse listening conditions is paramount for effective communication across the life span, little is understood about how this critical processing skill develops. This study asks how the dynamics of spoken word recognition (i.e., lexical access and competition) change during soft speech in 8- to 11-year-olds (n = 26). Lexical competition and access for speech at lower intensity levels were measured using eye-tracking and the visual world paradigm. Overall, the results suggest that soft speech influences the magnitude and timing of lexical access and competition. These results suggest that lexical competition is a cognitive process that can be adapted in the school-age years to help cope with increased uncertainty due to alterations in the speech signal.