1
Labusch M, Duñabeitia JA, Perea M. Visual word identification beyond common words: The role of font and letter case in brand names. Mem Cognit 2024; 52:1673-1686. PMID: 38724883; PMCID: PMC11522189; DOI: 10.3758/s13421-024-01570-3.
Abstract
While abstractionist theories of visual word recognition propose that perceptual elements like font and letter case are filtered out during lexical access, instance-based theories allow for the possibility that these surface details influence this process. To disentangle these accounts, we focused on brand names embedded in logotypes. The consistent visual presentation of brand names may render them much more susceptible to perceptual factors than common words. In the present study, we compared original and modified brand logos, varying in font or letter case. In Experiment 1, participants decided whether the stimuli corresponded to existing brand names or not, regardless of graphical information. In Experiment 2, participants had to categorize existing brand names semantically - whether they corresponded to a brand in the transportation sector or not. Both experiments showed longer response times for the modified brand names, regardless of font or letter-case changes. These findings challenge the notion that only abstract units drive visual word recognition. Instead, they favor those models that assume that, under some circumstances, the traces in lexical memory may contain surface perceptual information.
Affiliation(s)
- Melanie Labusch
- Department of Methodology and ERI-Lectura, Universitat de València, Av. Blasco Ibáñez, 21, 46010, Valencia, Spain
- Universidad Nebrija, Madrid, Spain
- Manuel Perea
- Department of Methodology and ERI-Lectura, Universitat de València, Av. Blasco Ibáñez, 21, 46010, Valencia, Spain
- Universidad Nebrija, Madrid, Spain
2
Thierfelder P. The time course of Cantonese and Hong Kong Sign Language phonological activation: An ERP study of deaf bimodal bilingual readers of Chinese. Cognition 2024; 251:105878. PMID: 39024841; DOI: 10.1016/j.cognition.2024.105878.
Abstract
This study investigated Cantonese and Hong Kong Sign Language (HKSL) phonological activation patterns in Hong Kong deaf readers using the ERP technique. Two experiments employing the error disruption paradigm were conducted while recording participants' EEGs. Experiment 1 focused on orthographic and speech-based phonological processing, while Experiment 2 examined sign-phonological processing. ERP analyses focused on the P200 (180-220 ms) and N400 (300-500 ms) components. The results of Experiment 1 showed that hearing readers exhibited both orthographic and phonological effects in the P200 and N400 windows, consistent with previous studies on Chinese reading. In deaf readers, significant speech-based phonological effects were observed in the P200 window, and orthographic effects spanned both the P200 and N400 windows. Comparative analysis between the two groups revealed distinct spatial distributions for orthographic and speech-based phonological ERP effects, which may indicate the engagement of different neural networks during early processing stages. Experiment 2 found evidence of sign-phonological activation in both the P200 and N400 windows among deaf readers, which may reflect the involvement of sign-phonological representations in early lexical access and later semantic integration. Furthermore, exploratory analysis revealed that higher reading fluency in deaf readers correlated with stronger orthographic effects in the P200 window and diminished effects in the N400 window, indicating that efficient orthographic processing during early lexical access is a distinguishing feature of proficient deaf readers.
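The component analysis described above reduces, computationally, to averaging the epoched EEG within fixed post-stimulus windows per condition and then comparing those means across conditions and groups. The Python sketch below only illustrates that generic step under assumed array shapes and names (epochs, times); it is not the authors' pipeline, and the windows are simply the ones named in the abstract.

import numpy as np

def window_mean_amplitude(epochs, times, t_start, t_stop):
    # epochs: (n_trials, n_channels, n_times) voltages in microvolts
    # times: 1-D array of sample times in seconds, zero at stimulus onset
    mask = (times >= t_start) & (times < t_stop)
    return epochs[:, :, mask].mean(axis=-1)   # (n_trials, n_channels)

# Windows named in the abstract: P200 (180-220 ms) and N400 (300-500 ms).
# p200 = window_mean_amplitude(epochs, times, 0.180, 0.220)
# n400 = window_mean_amplitude(epochs, times, 0.300, 0.500)
# An error-disruption effect would then be the condition difference in these
# means (error vs. correct conditions), compared between reader groups.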
Affiliation(s)
- Philip Thierfelder
- The Centre for Sign Linguistics and Deaf Studies, The Chinese University of Hong Kong, Hong Kong
3
Thierfelder P, Cai ZG, Huang S, Lin H. The Chinese lexicon of deaf readers: A database of character decisions and a comparison between deaf and hearing readers. Behav Res Methods 2024; 56:5732-5753. PMID: 38114882; DOI: 10.3758/s13428-023-02305-z.
Abstract
We present a psycholinguistic study investigating lexical effects on simplified Chinese character recognition by deaf readers. Prior research suggests that deaf readers exhibit efficient orthographic processing and decreased reliance on speech-based phonology in word recognition compared to hearing readers. In this large-scale character decision study (25 participants, each evaluating 2500 real characters and 2500 pseudo-characters), we analyzed various factors influencing character recognition accuracy and speed in deaf readers. Deaf participants demonstrated greater accuracy and faster recognition when characters were more frequent, were acquired earlier, had more strokes, displayed higher orthographic complexity, were more imageable in reference, or were less concrete in reference. Comparison with a previous study of hearing readers revealed that the facilitative effect of frequency on character decision accuracy was stronger for deaf readers than hearing readers. The effect of orthographic-phonological regularity differed significantly for the two groups, indicating that deaf readers rely more on orthographic structure and less on phonological information during character recognition. Notably, increased stroke counts (i.e., higher orthographic complexity) hindered hearing readers but facilitated recognition processes in deaf readers, suggesting that deaf readers excel at recognizing characters based on orthographic structure. The database generated from this large-scale character decision study offers a valuable resource for further research and practical applications in deaf education and literacy.
Affiliation(s)
- Philip Thierfelder
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Sha Tin, N.T., Hong Kong SAR
- Zhenguang G Cai
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Sha Tin, N.T., Hong Kong SAR
- Shuting Huang
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Sha Tin, N.T., Hong Kong SAR
- Hao Lin
- Shanghai International Studies University, 550 Dalian Road (W), Shanghai, People's Republic of China
4
Rocabado F, Labusch M, Perea M, Duñabeitia JA. Dissociating the Effects of Visual Similarity for Brand Names and Common Words. J Cogn 2024; 7:67. PMID: 39220857; PMCID: PMC11363898; DOI: 10.5334/joc.397.
Abstract
Abstractionist models of visual word recognition can easily accommodate the absence of visual similarity effects in misspelled common words (e.g., viotin vs. viocin) during lexical decision tasks. However, these models fail to account for the sizeable effects of visual similarity observed in misspelled brand names (e.g., anazon produces longer responses and more errors than atazon). Importantly, this dissociation has only been reported in separate experiments. Thus, a crucial experiment is necessary to simultaneously examine the role of visual similarity with misspelled common words and brand names. In the current experiment, participants performed a lexical decision task using both brand names and common words. Nonword foils were created by replacing a letter with a visually similar letter (e.g., anazon [baseword: amazon], anarillo [amarillo, yellow in Spanish]) or a visually dissimilar letter (e.g., atazon, atarillo). Results showed sizeable visual letter similarity effects for misspelled brand names in response times and percent error. Critically, these effects were absent for misspelled common words. The pervasiveness of visual similarity effects for misspelled brand names, even in the presence of common words, challenges purely abstractionist accounts of visual word recognition. Instead, these findings support instance-based and weakly abstractionist theories, suggesting that episodic traces in the mental lexicon may retain perceptual information, particularly when words are repeatedly presented in a similar format.
Affiliation(s)
- Francisco Rocabado
- Centro de Investigación Nebrija en Cognición (CINC), Universidad Antonio de Nebrija, Madrid, Spain
- Melanie Labusch
- Centro de Investigación Nebrija en Cognición (CINC), Universidad Antonio de Nebrija, Madrid, Spain
- Departamento de Metodología and ERI-Lectura, Universitat de València, Valencia, Spain
- Manuel Perea
- Centro de Investigación Nebrija en Cognición (CINC), Universidad Antonio de Nebrija, Madrid, Spain
- Departamento de Metodología and ERI-Lectura, Universitat de València, Valencia, Spain
- Jon Andoni Duñabeitia
- Centro de Investigación Nebrija en Cognición (CINC), Universidad Antonio de Nebrija, Madrid, Spain
5
Kamble V, Buyle M, Crollen V. Investigating the crowding effect on letters and symbols in deaf adults. Sci Rep 2024; 14:16161. PMID: 38997432; PMCID: PMC11245469; DOI: 10.1038/s41598-024-66832-1.
Abstract
Reading requires the transformation of a complex array of visual features into sounds and meaning. For deaf signers who experience changes in visual attention and have little or no access to the sounds of the language they read, understanding the visual constraints underlying reading is crucial. This study aims to explore a fundamental aspect of visual perception intertwined with reading: the crowding effect. This effect manifests as the struggle to distinguish a target letter when surrounded by flanker letters. Through a two-alternative forced-choice task, we assessed the recognition of letters and symbols presented in isolation or flanked by two or four characters, positioned either to the left or right of fixation. Our findings reveal that while deaf individuals exhibit higher accuracy in processing letters compared to symbols, their performance falls short of that of their hearing counterparts. Interestingly, despite their proficiency with letters, deaf individuals did not demonstrate quicker letter identification, particularly in the most challenging scenario where letters were flanked by four characters. These outcomes imply the development of a specialized letter-processing system among deaf individuals, albeit one that may subtly diverge from that of their hearing counterparts.
Affiliation(s)
- Veena Kamble
- Institut de Recherche en Sciences Psychologiques, Université Catholique de Louvain, Place de l'Université, Louvain-la-Neuve, Belgium
- Margot Buyle
- Institut de Recherche en Sciences Psychologiques, Université Catholique de Louvain, Place de l'Université, Louvain-la-Neuve, Belgium
- Virginie Crollen
- Institut de Recherche en Sciences Psychologiques, Université Catholique de Louvain, Place de l'Université, Louvain-la-Neuve, Belgium
6
Massol S, Grainger J. On the distinction between position and order information when processing strings of characters. Atten Percept Psychophys 2024; 86:883-896. PMID: 38453776; DOI: 10.3758/s13414-024-02872-z.
Abstract
To probe the processing of gaze-dependent positional information and gaze-independent order information when matching strings of characters, we compared effects of visual similarity (hypothesized to affect gaze-centered position coding) with the effects of character transpositions (hypothesized to affect the processing of gaze-independent order information). In Experiment 1, we obtained empirical measures of visual similarity for pairs of characters, separately for uppercase consonants and keyboard symbols. These similarity values were then used in Experiment 2 to create pairs of four-character stimuli (four letters or four symbols) that could differ by substituting one character with a different character from the same category that was visually similar (e.g., FJDK-FJBK) or dissimilar (e.g., FJVK-FJBK). We also compared the effects of transposing two characters (e.g., FBJK-FJBK) with substituting two characters (e.g., FHSK-FJBK). "Different" responses were harder to make in the single substitution condition when the substituted character was visually similar, and this effect was not conditioned by character type. On the other hand, transposition costs (i.e., greater difficulty in detecting a difference with transpositions compared with double substitutions) were greater for letters compared with symbols. We conclude that visual similarity mainly affects the generic gaze-dependent processing of complex visual features, and that the encoding of letter order involves a mechanism that is specific to reading.
Affiliation(s)
- Stéphanie Massol
- Laboratoire d'Étude des Mécanismes Cognitifs, Université Lumière Lyon 2, Lyon, France
- Jonathan Grainger
- Laboratoire de Psychologie Cognitive, Aix-Marseille University & CNRS, Marseille, France
- Institute of Language, Communication and the Brain, Aix-Marseille University, Marseille, France
7
Perea M, Labusch M, Fernández-López M, Marcet A, Gutierrez-Sigut E, Gómez P. One more trip to Barcetona: on the special status of visual similarity effects in city names. Psychol Res 2024; 88:271-283. PMID: 37353613; PMCID: PMC10805876; DOI: 10.1007/s00426-023-01839-3.
Abstract
Previous research has shown that, unlike misspelled common words, misspelled brand names are sensitive to visual letter similarity effects (e.g., amazom is often recognized as a legitimate brand name, but not amazot). This pattern poses problems for those models that assume that word identification is exclusively based on abstract codes. Here, we investigated the role of visual letter similarity using another type of word often presented in a more homogeneous format than common words: city names. We found a visual letter similarity effect for misspelled city names (e.g., Barcetona was often recognized as a word, but not Barcesona) for relatively short stimulus durations (200 ms; Experiment 2), but not when the stimuli were presented until response (Experiment 1). Notably, misspelled common words did not show a visual letter similarity effect even at brief 200- and 150-ms durations (e.g., votume was not recognized as a word more often than vosume; Experiments 3-4). These findings provide further evidence that consistency in the format of presentation may shape the representation of words in the mental lexicon, and that this may be more salient in scenarios where processing resources are limited (e.g., brief exposure durations).
Affiliation(s)
- Manuel Perea
- Universitat de València, Av. Blasco Ibáñez, 21, 46010, Valencia, Spain
- Centro de Investigación Nebrija en Cognición, Universidad Nebrija, Madrid, Spain
- Melanie Labusch
- Universitat de València, Av. Blasco Ibáñez, 21, 46010, Valencia, Spain
- Centro de Investigación Nebrija en Cognición, Universidad Nebrija, Madrid, Spain
- Ana Marcet
- Universitat de València, Av. Blasco Ibáñez, 21, 46010, Valencia, Spain
- Pablo Gómez
- California State University, San Bernardino, Palm Desert Campus, San Bernardino, USA
8
Holcomb PJ, Akers EM, Midgley KJ, Emmorey K. Orthographic and Phonological Code Activation in Deaf and Hearing Readers. J Cogn 2024; 7:19. PMID: 38312942; PMCID: PMC10836169; DOI: 10.5334/joc.326.
Abstract
Grainger et al. (2006) were the first to use ERP masked priming to explore the differing contributions of phonological and orthographic representations to visual word processing. Here we adapted their paradigm to examine word processing in deaf readers. We investigated whether reading-matched deaf and hearing readers (n = 36) exhibit different ERP effects associated with the activation of orthographic and phonological codes during word processing. In a visual masked priming paradigm, participants performed a go/no-go categorization task (detect an occasional animal word). Critical target words preceded by orthographically related (transposed-letter, TL) or phonologically related (pseudohomophone, PH) masked non-word primes were contrasted with the same target words preceded by letter-substitution (control) non-word primes. Hearing readers exhibited typical N250 and N400 priming effects (greater negativity for control compared to TL- or PH-primed targets), and the TL and PH priming effects did not differ. For deaf readers, the N250 PH priming effect was later (250-350 ms), and they showed a reversed N250 priming effect for TL primes in this time window. The N400 TL and PH priming effects did not differ between groups. For hearing readers, those with better phonological and spelling skills showed larger early N250 PH and TL priming effects (150-250 ms). For deaf readers, those with better phonological skills showed a larger reversed TL priming effect in the late N250 window. We speculate that phonological knowledge modulates how strongly deaf readers rely on whole-word orthographic representations and/or the mapping from sublexical to lexical representations.
Affiliation(s)
- Emily M. Akers
- Department of Psychology, San Diego State University, CA, USA
- Karen Emmorey
- School of Speech, Language and Hearing Sciences, San Diego State University, CA, USA
9
Wu X, Jia H, Wang E. The neurophysiological mechanism of valence-space congruency effect: evidence from spatial Stroop task and event-related EEG features. Cogn Neurodyn 2023; 17:855-867. PMID: 37522040; PMCID: PMC10374502; DOI: 10.1007/s11571-022-09842-x.
Abstract
Abstract concepts are commonly represented mentally through metaphor. One example is the valence-space metaphor (i.e., positive word-up, negative word-down), which suggests that the vertical position of positive/negative words can modulate the evaluation of word valence. Here, the spatial Stroop task and electroencephalography (EEG) techniques were used to explore the neural mechanism of the valence-space congruency effect. This study showed that the reaction time of the congruent condition (i.e., positive words at the top and negative words at the bottom of the screen) was significantly shorter than that of the incongruent condition (i.e., positive words at the bottom and negative words at the top of the screen), while the accuracy rate of the congruent condition was significantly higher than that of the incongruent condition. The analysis of the amplitudes of event-related potential components revealed that congruency between the vertical position and valence of Chinese words significantly modulated the amplitude of the attention-allocation-related P2 component and the semantic-violation-related N400 component. Moreover, statistical tests conducted on the post-stimulus inter-trial phase coherence (ITPC) found that the ITPC value of an alpha-band region of interest (8-12 Hz, 100-300 ms post-stimulus) in the time-frequency plane was significantly larger in the congruent condition than in the incongruent condition. Overall, the current study demonstrated the existence of the valence-space congruency effect for Chinese words and characterized some of the neurophysiological mechanisms underlying the valence-space metaphor.
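For reference, the ITPC measure used above has a simple closed form: for each channel, frequency, and time point, each trial's complex time-frequency estimate is reduced to a unit phasor, the phasors are averaged across trials, and the ITPC is the modulus of that average (1 = perfectly consistent phase across trials, 0 = uniformly random phase). A minimal NumPy sketch under assumed inputs (the variable names and region-of-interest masks are illustrative, not the authors' code):

import numpy as np

def itpc(complex_tfr):
    # complex_tfr: complex array (n_trials, n_channels, n_freqs, n_times),
    # e.g., single-trial Morlet wavelet coefficients of the EEG epochs.
    phasors = complex_tfr / np.abs(complex_tfr)   # keep phase only
    return np.abs(phasors.mean(axis=0))           # (n_channels, n_freqs, n_times)

# The alpha-band ROI described above would correspond to averaging the ITPC
# map over 8-12 Hz and 100-300 ms post-stimulus (masks below are hypothetical):
# roi = itpc(tfr)[:, freq_mask][:, :, time_mask].mean(axis=(1, 2))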
Affiliation(s)
- Xiangci Wu
- Institute of Psychology and Behavior, Henan University, 475004 Kaifeng, China
- School of Psychology, Henan University, 475004 Kaifeng, China
- Huibin Jia
- Institute of Psychology and Behavior, Henan University, 475004 Kaifeng, China
- School of Psychology, Henan University, 475004 Kaifeng, China
- Enguo Wang
- Institute of Psychology and Behavior, Henan University, 475004 Kaifeng, China
- School of Psychology, Henan University, 475004 Kaifeng, China
10
Sehyr ZS, Midgley KJ, Emmorey K, Holcomb PJ. Asymmetric Event-Related Potential Priming Effects Between English Letters and American Sign Language Fingerspelling Fonts. Neurobiol Lang (Camb) 2023; 4:361-381. PMID: 37546690; PMCID: PMC10403274; DOI: 10.1162/nol_a_00104.
Abstract
Letter recognition plays an important role in reading and follows different phases of processing, from early visual feature detection to the access of abstract letter representations. Deaf ASL-English bilinguals experience orthography in two forms: English letters and fingerspelling. However, the neurobiological nature of fingerspelling representations, and the relationship between the two orthographies, remains unexplored. We examined the temporal dynamics of single English letter and ASL fingerspelling font processing in an unmasked priming paradigm in which centrally presented targets (200 ms) were preceded by 100-ms primes. Event-related brain potentials were recorded while participants performed a probe detection task. Experiment 1 examined English letter-to-letter priming in deaf signers and hearing non-signers. We found that English letter recognition is similar for deaf and hearing readers, extending previous findings with hearing readers to unmasked presentations. Experiment 2 examined priming effects between English letters and ASL fingerspelling fonts in deaf signers only. We found that fingerspelling fonts primed both fingerspelling fonts and English letters, but English letters did not prime fingerspelling fonts, indicating a priming asymmetry between letters and fingerspelling fonts. We also found an N400-like priming effect when the primes were fingerspelling fonts, which might reflect strategic access to the lexical names of letters. These studies suggest that deaf ASL-English bilinguals process English letters and ASL fingerspelling differently and that the two systems may have distinct neural representations. However, the fact that fingerspelling fonts can prime English letters suggests that the two orthographies may share abstract representations to some extent.
Affiliation(s)
- Zed Sevcikova Sehyr
- San Diego State University Research Foundation, San Diego State University, San Diego, CA, USA
- School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA, USA
- Karen Emmorey
- School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA, USA
- Phillip J. Holcomb
- Department of Psychology, San Diego State University, San Diego, CA, USA
11
Lee B, Martinez PM, Midgley KJ, Holcomb PJ, Emmorey K. Sensitivity to orthographic vs. phonological constraints on word recognition: An ERP study with deaf and hearing readers. Neuropsychologia 2022; 177:108420. PMID: 36396091; PMCID: PMC10152474; DOI: 10.1016/j.neuropsychologia.2022.108420.
Abstract
The role of phonology in word recognition has previously been investigated using a masked lexical decision task and transposed letter (TL) nonwords that were either pronounceable (barve) or unpronounceable (brvae). We used event-related potentials (ERPs) to investigate these effects in skilled deaf readers, who may be more sensitive to orthotactic than phonotactic constraints, which are conflated in English. Twenty deaf and twenty hearing adults completed a masked lexical decision task while ERPs were recorded. The groups were matched in reading skill and IQ, but deaf readers had poorer phonological ability. Deaf readers were faster and more accurate at rejecting TL nonwords than hearing readers. Neither group exhibited an effect of nonword pronounceability in RTs or accuracy. For both groups, the N250 and N400 components were modulated by lexicality (more negative for nonwords). The N250 was not modulated by nonword pronounceability, but pronounceable nonwords elicited a larger amplitude N400 than unpronounceable nonwords. Because pronounceable nonwords are more word-like, they may incite activation that is unresolved when no lexical entry is found, leading to a larger N400 amplitude. Similar N400 pronounceability effects for deaf and hearing readers, despite differences in phonological sensitivity, suggest these TL effects arise from sensitivity to lexical-level orthotactic constraints. Deaf readers may have an advantage in processing TL nonwords because of enhanced early visual attention and/or tight orthographic-to-semantic connections, bypassing the phonologically mediated route to word recognition.
Affiliation(s)
- Brittany Lee
- Joint Doctoral Program in Language and Communicative Disorders, San Diego State University & University of California, San Diego, United States
12
Winsler K, Holcomb PJ, Emmorey K. Electrophysiological patterns of visual word recognition in deaf and hearing readers: An ERP mega-study. Lang Cogn Neurosci 2022; 38:636-650. PMID: 37304206; PMCID: PMC10249718; DOI: 10.1080/23273798.2022.2135746.
Abstract
Deaf and hearing readers have different access to spoken phonology, which may affect the representation and recognition of written words. We used ERPs to investigate how a matched sample of deaf and hearing adults (total n = 90) responded to lexical characteristics of 480 English words in a go/no-go lexical decision task. Results from mixed-effects regression models showed that (a) visual complexity produced small effects in opposing directions for deaf and hearing readers, (b) frequency effects were similar for the two groups but shifted earlier for deaf readers, (c) effects of orthographic neighborhood density were more pronounced for hearing readers, and (d) effects of concreteness were more pronounced for deaf readers. We suggest hearing readers have visual word representations that are more integrated with phonological representations, leading to larger lexically mediated effects of neighborhood density. Conversely, deaf readers weight other sources of information more heavily, leading to larger semantically mediated effects and altered responses to low-level visual variables.
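As a rough illustration of what a single-trial mixed-effects regression of this kind looks like in code (not the authors' actual model: the data file, predictor names, and the subject-only random intercept below are assumptions; fully crossed subject-and-item random effects would more typically be fit with lme4 in R):

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per subject x word, with a mean ERP
# amplitude in some time window as the outcome and standardized lexical
# predictors plus a deaf/hearing group factor.
trials = pd.read_csv("erp_single_trials.csv")

model = smf.mixedlm(
    "amplitude ~ group * (frequency + neighborhood_density"
    " + concreteness + visual_complexity)",
    data=trials,
    groups=trials["subject"],  # random intercept per subject only (simplification)
)
result = model.fit()
print(result.summary())  # group-by-predictor interactions mirror the deaf vs. hearing contrasts described above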
Affiliation(s)
- Kurt Winsler
- Department of Psychology, University of California, Davis, Davis, CA, United States
- Phillip J Holcomb
- Department of Psychology, San Diego State University, San Diego, CA, United States
- Karen Emmorey
- School of Speech, Language and Hearing Science, San Diego State University, San Diego, CA, United States
13
Perea M, Baciero A, Labusch M, Fernández-López M, Marcet A. Are brand names special words? Letter visual-similarity affects the identification of brand names, but not common words. Br J Psychol 2022; 113:835-852. PMID: 35107840; PMCID: PMC9545185; DOI: 10.1111/bjop.12557.
Abstract
Brand names are often considered a special type of word, of particular relevance for examining the role of visual codes during reading: unlike common words, brand names are typically presented with the same letter-case configuration (e.g., IKEA, adidas). Recently, Pathak et al. (European Journal of Marketing, 2019, 53, 2109) found an effect of visual similarity for misspelled brand names when participants had to decide whether the brand name was spelled correctly or not (e.g., tacebook [baseword: facebook] was responded to more slowly and less accurately than xacebook). This finding is at odds with both orthographically based visual-word recognition models and prior experiments using misspelled common words (e.g., viotin [baseword: violin] is identified as fast as viocin). To solve this puzzle, we designed two experiments in which participants had to decide whether the presented item was written correctly. In Experiment 1, following a procedure similar to Pathak et al. (2019), we examined the effect of visual similarity on misspelled brand names with/without graphical information (e.g., anazon vs. atazon [baseword: amazon]). Experiment 2 was parallel to Experiment 1, but we focused on misspelled common words (e.g., anarillo vs. atarillo; baseword: amarillo [yellow in Spanish]). Results showed a sizeable effect of visual similarity on misspelled brand names, regardless of their graphical information, but not on misspelled common words. These findings suggest that visual codes play a greater role when identifying brand names than common words. We examined how models of visual-word recognition can account for this dissociation.
Affiliation(s)
- Manuel Perea
- Universitat de València, Valencia, Spain
- Universidad Antonio de Nebrija, Madrid, Spain
- Ana Baciero
- Universidad Antonio de Nebrija, Madrid, Spain
- Bournemouth University, Bournemouth, UK
14
Predictors of Word and Text Reading Fluency of Deaf Children in Bilingual Deaf Education Programmes. Languages 2022. DOI: 10.3390/languages7010051.
Abstract
Reading continues to be a challenging task for most deaf children. Bimodal bilingual education creates a supportive environment that stimulates deaf children's learning through the use of sign language. However, it is still unclear how exposure to sign language might contribute to improving reading ability. Here, we investigate the relative contribution of several cognitive and linguistic variables to the development of word and text reading fluency in deaf children in bimodal bilingual education programmes. The participants of this study were 62 school-aged deaf children (8 to 10 years old at the start of the 3-year study) who took part in bilingual education (using Dutch and Sign Language of the Netherlands) and 40 age-matched hearing children. We assessed vocabulary knowledge in speech and sign, phonological awareness in speech and sign, receptive fingerspelling ability, and short-term memory (STM) at time 1 (T1). At times 2 (T2) and 3 (T3), we assessed word and text reading fluency. We found that (1) speech-based vocabulary strongly predicted word and text reading at T2 and T3, (2) fingerspelling ability was a strong predictor of word and text reading fluency at T2 and T3, (3) speech-based phonological awareness predicted word reading accuracy at T2 and T3 but did not predict text reading fluency, and (4) fingerspelling and STM predicted word reading latency at T2, while sign-based phonological awareness predicted this outcome measure at T3. These results suggest that fingerspelling may have an important function in facilitating the construction of orthographic/phonological representations of printed words for deaf children and strengthening word decoding and recognition abilities.