1
Evans S, Skinner CH, Wolbers K, Mee Bell S, Shahan C. Enhancing word signing in hearing students with reading disorders using computer-based learning trials. J Appl Behav Anal 2024. PMID: 38742862. DOI: 10.1002/jaba.1082.
Abstract
Multiple-baseline-across-word-sets designs were used to determine whether a computer-based intervention would enhance accurate word signing with four participants. Each participant was a hearing college student with reading disorders. Learning trials included 3 s to observe printed words on the screen and a video model performing the sign twice (i.e., simultaneous prompting), 3 s to make the sign, 3 s to observe the same clip, and 3 s to make the sign again. For each participant and word set, no words were accurately signed during baseline. After the intervention, all four participants increased their accurate word signing across all three word sets, providing 12 demonstrations of experimental control. For each participant, accurate word signing was maintained. Application of efficient, technology-based, simultaneous prompting interventions for enhancing American Sign Language learning and future research designed to investigate causal mechanisms and optimize intervention effects are discussed.
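To make the trial structure concrete, here is a minimal Python sketch of the timed sequence described above (observe the printed word and video model, sign, observe the same clip again, sign again, each for 3 s). It is illustrative only, not the authors' software; the word list, clip file names, and display/response functions are hypothetical stand-ins.

```python
import time

# Hypothetical word set and video-clip names (illustrative only).
WORD_SET = ["tree", "water", "book"]
CLIP = {w: f"{w}_sign_demo.mp4" for w in WORD_SET}

def show_printed_word(word: str) -> None:
    # Stand-in for displaying the printed word while the model clip plays.
    print(f"[display] printed word: {word}  [play] {CLIP[word]}")

def prompt_student_sign(word: str) -> None:
    # Stand-in for the interval in which the student produces the sign.
    print(f"[respond] student signs: {word}")

def run_learning_trial(word: str, interval_s: float = 3.0) -> None:
    """One trial: observe word + model, sign, observe again, sign again."""
    show_printed_word(word)      # 3 s: printed word + video model (simultaneous prompt)
    time.sleep(interval_s)
    prompt_student_sign(word)    # 3 s: student attempts the sign
    time.sleep(interval_s)
    show_printed_word(word)      # 3 s: same clip shown again
    time.sleep(interval_s)
    prompt_student_sign(word)    # 3 s: student signs again
    time.sleep(interval_s)

if __name__ == "__main__":
    for w in WORD_SET:
        run_learning_trial(w)
```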
Affiliation(s)
- Sara Evans
- College of Education, Health & Human Sciences, University of Tennessee, Knoxville, TN, USA
- Christopher H Skinner
- College of Education, Health & Human Sciences, University of Tennessee, Knoxville, TN, USA
- Kimberly Wolbers
- College of Education, Health & Human Sciences, University of Tennessee, Knoxville, TN, USA
- Sherry Mee Bell
- College of Education, Health & Human Sciences, University of Tennessee, Knoxville, TN, USA
- Cheryl Shahan
- College of Education, Health & Human Sciences, University of Tennessee, Knoxville, TN, USA
2
Ship H, Shankar S, Brosco JP, Baer S, Michalowski SE, Arana J, Gregory D, Falcon A. Shared Decision-Making at the Intersection of Disability, Culture, and Language Accessibility: An Educational Session for Medical Students. MedEdPORTAL 2024; 20:11396. PMID: 38722734. PMCID: PMC11058081. DOI: 10.15766/mep_2374-8265.11396.
Abstract
Introduction: People with disabilities and those with non-English language preferences have worse health outcomes than their counterparts due to barriers to communication and poor continuity of care. As members of both groups, people who are Deaf users of American Sign Language have compounded health disparities. Provider discomfort with these specific demographics is a contributing factor, often stemming from insufficient training in medical programs. To help address these health disparities, we created a session on disability, language, and communication for undergraduate medical students.
Methods: This 2-hour session was developed as part of a 2020 curriculum shift for a total of 404 second-year medical student participants. We used a retrospective postsession survey to analyze learning objective achievement through a comparison of medians using the Wilcoxon signed rank test (α = .05) for the first 2 years of course implementation.
Results: When assessing 158 students' self-perceived abilities to perform each of the learning objectives, students reported significantly higher confidence after the session compared to their retrospective presession confidence for all four learning objectives (all ps < .001). Responses signifying learning objective achievement (scores of 4, probably yes, or 5, definitely yes), averaged across the first 2 years of implementation, increased from 73% before the session to 98% after the session.
Discussion: Our evaluation suggests medical students could benefit from increased educational initiatives on disability culture and health disparities caused by barriers to communication, to strengthen cultural humility, the delivery of health care, and, ultimately, health equity.
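As a rough illustration of the analysis described in the Methods, the sketch below runs a Wilcoxon signed-rank test on paired retrospective pre/post ratings and computes the share of responses meeting the achievement criterion. The ratings are invented for the example; only the test and the 4-or-5 criterion come from the abstract.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical retrospective pre/post confidence ratings (1-5 Likert)
# for one learning objective; values are illustrative, not study data.
pre = np.array([3, 2, 4, 3, 3, 2, 4, 3, 2, 3])
post = np.array([4, 4, 5, 4, 4, 3, 5, 4, 4, 4])

# Wilcoxon signed-rank test on paired ratings (alpha = .05),
# mirroring the within-student pre/post comparison described above.
stat, p = wilcoxon(post, pre)
print(f"W = {stat:.1f}, p = {p:.4f}")

# Share of responses at 4 ("probably yes") or 5 ("definitely yes"),
# the achievement criterion used in the abstract.
achieved_pre = np.mean(pre >= 4)
achieved_post = np.mean(post >= 4)
print(f"achievement: {achieved_pre:.0%} pre vs. {achieved_post:.0%} post")
```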
Affiliation(s)
- Hannah Ship
- Third-Year Medical Student, University of Miami Miller School of Medicine
- Sahana Shankar
- Fourth-Year Medical Student, University of Miami Miller School of Medicine
- Jeffrey P. Brosco
- Professor, Department of Pediatrics, University of Miami Miller School of Medicine
- Shelly Baer
- Licensed Clinical Social Worker, Mailman Center for Child Development, University of Miami Miller School of Medicine
- Jairo Arana
- Clinical Program Coordinator, Mailman Center for Child Development, University of Miami Miller School of Medicine
- Damian Gregory
- Consultant, Mailman Center for Child Development, University of Miami Miller School of Medicine
- Ashley Falcon
- Associate Professor, School of Nursing and Health Sciences, University of Miami
3
Ofori-Sanzo K, Geer L, Embry K. Syntax intervention in American Sign Language: an exploratory case study. J Deaf Stud Deaf Educ 2024; 29:105-114. PMID: 37973400. DOI: 10.1093/deafed/enad048.
Abstract
This case study describes the use of a syntax intervention with two deaf children who did not acquire a complete first language (L1) from birth. It looks specifically at their ability to produce subject-verb-object (SVO) sentence structure in American Sign Language (ASL) after receiving intervention. This was an exploratory case study in which investigators used an intervention containing visuals to help teach SVO word order to young deaf children. Baseline data were collected over three sessions before implementation of a targeted syntax intervention, with two follow-up sessions over 3-4 weeks. Both participants demonstrated improvements in their ability to produce SVO structure in ASL in 6-10 sessions. Visual analysis revealed a positive therapeutic trend that was maintained in follow-up sessions. These data provide preliminary evidence that a targeted intervention may help young deaf children with an incomplete L1 learn to produce basic word order in ASL. Results from this case study can help inform the practice of professionals working with signing deaf children who did not acquire a complete L1 from birth (e.g., speech-language pathologists, deaf mentors/coaches, and ASL specialists). Future research should investigate the use of this intervention with a larger sample of deaf children.
Affiliation(s)
- Leah Geer
- California State University Sacramento, Sacramento, CA, United States
- Kinya Embry
- University of Kentucky, Lexington, KY, United States
4
Herrmann AK, Cowgill B, Guthmann D, Richardson J, Cindy Chang L, Crespi CM, Glenn E, McKee M, Berman B. Developing and Evaluating a School-Based Tobacco and E-Cigarette Prevention Program for Deaf and Hard-of-Hearing Youth. Health Promot Pract 2024; 25:65-76. PMID: 36760068. PMCID: PMC10768334. DOI: 10.1177/15248399221151180.
Abstract
School-based programs are an important tobacco prevention tool. Yet, existing programs are not suitable for Deaf and Hard-of-Hearing (DHH) youth. Moreover, little research has examined the use of the full range of tobacco products and related knowledge in this group. To address this gap and inform development of a school-based tobacco prevention program for this population, we conducted a pilot study among DHH middle school (MS) and high school (HS) students attending Schools for the Deaf and mainstream schools in California (n = 114). Surveys administered in American Sign Language (ASL), before and after receipt of a draft curriculum delivered by health or physical education teachers, assessed product use and tobacco knowledge. Thirty-five percent of students reported exposure to tobacco products at home, including cigarettes (19%) and e-cigarettes (15%). Tobacco knowledge at baseline was limited; 35% of students knew e-cigarettes contain nicotine, and 56% were aware vaping is prohibited on school grounds. Current product use was reported by 16% of students, most commonly e-cigarettes (12%) and cigarettes (10%); overall, 7% of students reported dual use. Use was greater among HS than MS students. Changes in student knowledge following program delivery included increased understanding of harmful chemicals in tobacco products, including nicotine in e-cigarettes. Post-program debriefings with teachers yielded specific recommendations for modifications to better meet the educational needs of DHH students. Findings based on student and teacher feedback will guide curriculum development and inform next steps in our program of research aimed at preventing tobacco use in this vulnerable and heretofore understudied population group.
Affiliation(s)
- Alison K. Herrmann
- UCLA Kaiser Permanente Center for Health Equity, Los Angeles, CA, USA
- UCLA Fielding School of Public Health, Los Angeles, CA, USA
- UCLA Jonsson Comprehensive Cancer Center, Los Angeles, CA, USA
- Burton Cowgill
- UCLA Kaiser Permanente Center for Health Equity, Los Angeles, CA, USA
- UCLA Fielding School of Public Health, Los Angeles, CA, USA
- UCLA Jonsson Comprehensive Cancer Center, Los Angeles, CA, USA
- Jessica Richardson
- UCLA Kaiser Permanente Center for Health Equity, Los Angeles, CA, USA
- UCLA Fielding School of Public Health, Los Angeles, CA, USA
- UCLA Jonsson Comprehensive Cancer Center, Los Angeles, CA, USA
- L. Cindy Chang
- UCLA Kaiser Permanente Center for Health Equity, Los Angeles, CA, USA
- UCLA Fielding School of Public Health, Los Angeles, CA, USA
- UCLA Jonsson Comprehensive Cancer Center, Los Angeles, CA, USA
- Catherine M. Crespi
- UCLA Kaiser Permanente Center for Health Equity, Los Angeles, CA, USA
- UCLA Fielding School of Public Health, Los Angeles, CA, USA
- UCLA Jonsson Comprehensive Cancer Center, Los Angeles, CA, USA
- Everett Glenn
- UCLA Kaiser Permanente Center for Health Equity, Los Angeles, CA, USA
- Barbara Berman
- UCLA Kaiser Permanente Center for Health Equity, Los Angeles, CA, USA
- UCLA Fielding School of Public Health, Los Angeles, CA, USA
- UCLA Jonsson Comprehensive Cancer Center, Los Angeles, CA, USA
5
Lieberman AM, Mitchiner J, Pontecorvo E. Hearing parents learning American Sign Language with their deaf children: a mixed-methods survey. Appl Linguist Rev 2024; 15:309-333. PMID: 38221976. PMCID: PMC10785677. DOI: 10.1515/applirev-2021-0120.
Abstract
Hearing parents with deaf children face difficult decisions about what language(s) to use with their child. Sign languages such as American Sign Language (ASL) are fully accessible to deaf children, yet most hearing parents are not proficient in ASL prior to having a deaf child. Parents are often discouraged from learning ASL, based in part on an assumption that it will be too difficult, yet there is little evidence supporting this claim. In this mixed-methods study, we surveyed hearing parents of deaf children (n = 100) who had learned ASL to better understand their experiences. In their survey responses, parents identified a range of resources that supported their ASL learning as well as frequent barriers. Parents identified strongly with belief statements indicating the importance of ASL and affirmed that learning ASL is attainable for hearing parents. We discuss the implications of this study for parents who are considering ASL as a language choice and for the professionals who guide them.
Affiliation(s)
- Amy M. Lieberman
- Wheelock College of Education and Human Development, Boston University, Boston, MA, USA
- Julie Mitchiner
- Department of Education, Gallaudet University, Washington, DC, USA
- Elana Pontecorvo
- Wheelock College of Education and Human Development, Boston University, Boston, MA, USA
6
Conner KR, Jones CM, Wood N, Aldalur A, Paracha M, Powell SJ, Nie Y, Dillon KM, Rotoli J. Use of Routine Emergency Department Care Practices with Deaf American Sign Language Users. J Emerg Med 2023; 65:e163-e171. PMID: 37640633. PMCID: PMC10653031. DOI: 10.1016/j.jemermed.2023.05.001.
Abstract
Background: Deaf individuals who communicate using American Sign Language (ASL) seem to experience a range of disparities in health care, but there are few empirical data.
Objective: To examine the provision of common care practices in the emergency department (ED) to this population.
Methods: ED visits in 2018 at a U.S. academic medical center were assessed retrospectively for Deaf adults who primarily use ASL (n = 257) and hearing individuals who primarily use English, selected at random (n = 429). Logistic regression analyses adjusted for confounders compared the groups on the provision or nonprovision of four routine ED care practices (i.e., laboratories ordered, medications ordered, images ordered, and placement of a peripheral intravenous line [PIV]) and on ED disposition (admitted to hospital or not admitted).
Results: ED encounters with Deaf ASL users were less likely to include laboratory tests being ordered: adjusted odds ratio 0.68, 95% confidence interval 0.47-0.97. ED encounters with Deaf individuals were also less likely to include PIV placement, less likely to result in images being ordered for high-acuity (but not low-acuity) ASL users compared with high-acuity English users, and less likely to result in hospital admission.
Conclusion: Results suggest disparate provision of several types of routine ED care for adult Deaf ASL users. Limitations include the observational design at a single site and reliance on the medical record, underscoring the need for further research into potential reasons for disparate ED care for Deaf individuals.
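For readers unfamiliar with how an adjusted odds ratio like the one reported here is obtained, the following sketch fits a confounder-adjusted logistic regression and exponentiates the coefficient of interest. The data, variable names, and confounders are hypothetical; the abstract does not specify the actual covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative data only: one row per ED encounter, with a binary
# indicator for Deaf ASL users, a binary outcome (labs ordered), and
# hypothetical confounders (age, acuity). Not the study's data.
rng = np.random.default_rng(0)
n = 686
df = pd.DataFrame({
    "deaf_asl": rng.integers(0, 2, n),
    "age": rng.normal(45, 15, n),
    "high_acuity": rng.integers(0, 2, n),
})
logit = -0.2 - 0.4 * df["deaf_asl"] + 0.01 * df["age"] + 0.8 * df["high_acuity"]
df["labs_ordered"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Logistic regression adjusted for confounders, as in the abstract;
# exponentiated coefficients give adjusted odds ratios with 95% CIs.
X = sm.add_constant(df[["deaf_asl", "age", "high_acuity"]])
fit = sm.Logit(df["labs_ordered"], X).fit(disp=0)
or_ci = np.exp(fit.conf_int().loc["deaf_asl"])
print(f"adjusted OR = {np.exp(fit.params['deaf_asl']):.2f}, "
      f"95% CI {or_ci[0]:.2f}-{or_ci[1]:.2f}")
```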
Affiliation(s)
- Kenneth R Conner
- Department of Emergency Medicine, University of Rochester Medical Center, Rochester, New York; Department of Psychiatry, University of Rochester Medical Center, Rochester, New York
- Courtney M Jones
- Department of Emergency Medicine, University of Rochester Medical Center, Rochester, New York
- Nancy Wood
- Department of Emergency Medicine, University of Rochester Medical Center, Rochester, New York
- Aileen Aldalur
- Department of Emergency Medicine, University of Rochester Medical Center, Rochester, New York; Department of Psychiatry, University of Rochester Medical Center, Rochester, New York
- Mariam Paracha
- Center for Health + Technology, University of Rochester Medical Center, Rochester, New York; Department of Science and Mathematics, National Technical Institute for the Deaf, Rochester Institute of Technology, Rochester, New York
- Stephen J Powell
- Department of Neurology, University of Rochester Medical Center, Rochester, New York
- Yunbo Nie
- Department of Psychiatry and Behavioral Health, Renaissance School of Medicine, Stony Brook University, Stony Brook, New York
- Kevin M Dillon
- Department of Emergency Medicine, University of Rochester Medical Center, Rochester, New York
- Jason Rotoli
- Department of Emergency Medicine, University of Rochester Medical Center, Rochester, New York
7
Sehyr ZS, Midgley KJ, Emmorey K, Holcomb PJ. Asymmetric Event-Related Potential Priming Effects Between English Letters and American Sign Language Fingerspelling Fonts. Neurobiol Lang (Camb) 2023; 4:361-381. PMID: 37546690. PMCID: PMC10403274. DOI: 10.1162/nol_a_00104.
Abstract
Letter recognition plays an important role in reading and follows different phases of processing, from early visual feature detection to the access of abstract letter representations. Deaf ASL-English bilinguals experience orthography in two forms: English letters and fingerspelling. However, the neurobiological nature of fingerspelling representations, and the relationship between the two orthographies, remains unexplored. We examined the temporal dynamics of single English letter and ASL fingerspelling font processing in an unmasked priming paradigm with centrally presented targets for 200 ms preceded by 100 ms primes. Event-related brain potentials were recorded while participants performed a probe detection task. Experiment 1 examined English letter-to-letter priming in deaf signers and hearing non-signers. We found that English letter recognition is similar for deaf and hearing readers, extending previous findings with hearing readers to unmasked presentations. Experiment 2 examined priming effects between English letters and ASL fingerspelling fonts in deaf signers only. We found that fingerspelling fonts primed both fingerspelling fonts and English letters, but English letters did not prime fingerspelling fonts, indicating a priming asymmetry between letters and fingerspelling fonts. We also found an N400-like priming effect when the primes were fingerspelling fonts which might reflect strategic access to the lexical names of letters. The studies suggest that deaf ASL-English bilinguals process English letters and ASL fingerspelling differently and that the two systems may have distinct neural representations. However, the fact that fingerspelling fonts can prime English letters suggests that the two orthographies may share abstract representations to some extent.
Affiliation(s)
- Zed Sevcikova Sehyr
- San Diego State University Research Foundation, San Diego State University, San Diego, CA, USA
- School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA, USA
- Karen Emmorey
- School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA, USA
- Phillip J. Holcomb
- Department of Psychology, San Diego State University, San Diego, CA, USA
8
Brozdowski C, Emmorey K. Using transitional information in sign and gesture perception. Acta Psychol (Amst) 2023; 236:103923. PMID: 37087958. PMCID: PMC10576459. DOI: 10.1016/j.actpsy.2023.103923.
Abstract
For sign languages, transitional movements of the hands are fully visible and may be used to predict upcoming linguistic input. We investigated whether and how deaf signers and hearing nonsigners use transitional information to detect a target item in a string of either pseudosigns or grooming gestures, as well as whether motor imagery ability was related to this skill. Transitional information between items was either intact (Normal videos), digitally altered such that the hands were selectively blurred (Blurred videos), or edited to only show the frame prior to the transition which was frozen for the entire transition period, removing all transitional information (Static videos). For both pseudosigns and gestures, signers and nonsigners had faster target detection times for Blurred than Static videos, indicating similar use of movement transition cues. For linguistic stimuli (pseudosigns), only signers made use of transitional handshape information, as evidenced by faster target detection times for Normal than Blurred videos. This result indicates that signers can use their linguistic knowledge to interpret transitional handshapes to predict the upcoming signal. Signers and nonsigners did not differ in motor imagery abilities, but only non-signers exhibited evidence of using motor imagery as a prediction strategy. Overall, these results suggest that signers use transitional movement and handshape cues to facilitate sign recognition.
Affiliation(s)
- Chris Brozdowski
- San Diego State University, University of California, San Diego, United States of America
- Karen Emmorey
- San Diego State University, University of California, San Diego, United States of America
9
Narayan N, Schecter A, McKee M, Rotoli J. Deaf Health Pathway - Immersing Medical Students in the Cultural and Language Aspects of Deaf Health. Med Sci Educ 2023; 33:11-13. PMID: 36713277. PMCID: PMC9862222. DOI: 10.1007/s40670-023-01738-7.
Abstract
Healthcare providers who are language- and culture-concordant with their patients improve health outcomes for deaf patients, yet training opportunities are lacking. The Deaf Health Pathway was developed to train medical students in cultural humility and communication in American Sign Language to better connect with deaf community members and bridge the gap in their healthcare.
Affiliation(s)
- Neha Narayan
- School of Medicine and Dentistry, University of Rochester, Rochester, NY, USA
- Arielle Schecter
- School of Medicine and Dentistry, University of Rochester, Rochester, NY, USA
- Michael McKee
- Department of Family Medicine, University of Michigan, Ann Arbor, MI, USA
- Jason Rotoli
- Department of Emergency Medicine, University of Rochester Medical Center, Box 655, 601 Elmwood Ave, Rochester, NY 14642, USA
10
Pyers JE, Emmorey K. The iconic motivation for the morphophonological distinction between noun-verb pairs in American Sign Language does not reflect common human construals of objects and actions. Lang Cogn 2022; 14:622-644. PMID: 36426211. PMCID: PMC9681175. DOI: 10.1017/langcog.2022.20.
Abstract
Across sign languages, nouns can be derived from verbs through morphophonological changes in movement by (1) movement reduplication and size reduction or (2) size reduction alone. We asked whether these cross-linguistic similarities arise from cognitive biases in how humans construe objects and actions. We tested nonsigners' sensitivity to differences in noun-verb pairs in American Sign Language (ASL) by asking MTurk workers to match images of actions and objects to videos of ASL noun-verb pairs. Experiment 1a's match-to-sample paradigm revealed that nonsigners interpreted all signs, regardless of lexical class, as actions. The remaining experiments used a forced-matching procedure to avoid this bias. Counter to our predictions, nonsigners associated reduplicated movement with actions, not objects (inverting the sign language pattern), and exhibited a minimal bias to associate large movements with actions (as found in sign languages). Whether signs had pantomimic iconicity did not alter nonsigners' judgments. We speculate that the morphophonological distinctions in noun-verb pairs observed in sign languages did not emerge as a result of cognitive biases, but rather as a result of the linguistic pressures of a growing lexicon and the use of space for verbal morphology. Such pressures may override an initial bias to map reduplicated movement to actions, but nevertheless reflect new iconic mappings shaped by linguistic and cognitive experiences.
Affiliation(s)
- Jennie E. Pyers
- Wellesley College, Psychology Department, Wellesley, MA, USA
- Karen Emmorey
- San Diego State University, School of Speech, Language and Hearing Sciences, San Diego, CA, USA
11
Lee B, Secora K. Fingerspelling and Its Role in Translanguaging. Languages (Basel) 2022; 7:278. PMID: 37920277. PMCID: PMC10622114. DOI: 10.3390/languages7040278.
Abstract
Fingerspelling is a critical component of many sign languages. This manual representation of orthographic code is one key way in which signers engage in translanguaging, drawing from all of their linguistic and semiotic resources to support communication. Translanguaging in bimodal bilinguals is unique because it involves drawing from languages in different modalities, namely a signed language like American Sign Language and a spoken language like English (or its written form). Fingerspelling can be seen as a unique product of the unified linguistic system that translanguaging theories purport, as it blends features of both sign and print. The goals of this paper are twofold: to integrate existing research on fingerspelling in order to characterize it as a cognitive-linguistic phenomenon and to discuss the role of fingerspelling in translanguaging and communication. We will first review and synthesize research from linguistics and cognitive neuroscience to summarize our current understanding of fingerspelling, its production, comprehension, and acquisition. We will then discuss how fingerspelling relates to translanguaging theories and how it can be incorporated into translanguaging practices to support literacy and other communication goals.
Affiliation(s)
- Brittany Lee
- Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
- Kristen Secora
- Theory and Practice in Teacher Education, University of Tennessee Knoxville, Knoxville, TN 37996, USA
12
Rubio-Fernandez P, Wienholz A, Ballard CM, Kirby S, Lieberman AM. Adjective position and referential efficiency in American Sign Language: Effects of adjective semantics, sign type and age of sign exposure. J Mem Lang 2022; 126:104348. PMID: 38665819. PMCID: PMC11044888. DOI: 10.1016/j.jml.2022.104348.
Abstract
Previous research has pointed at communicative efficiency as a possible constraint on language structure. Here we investigated adjective position in American Sign Language (ASL), a language with relatively flexible word order, to test the incremental efficiency hypothesis, according to which both speakers and signers try to produce efficient referential expressions that are sensitive to the word order of their languages. The results of three experiments using a standard referential communication task confirmed that deaf ASL signers tend to produce absolute adjectives, such as color or material, in prenominal position, while scalar adjectives tend to be produced in prenominal position when expressed as lexical signs, but in postnominal position when expressed as classifiers. Age of ASL exposure also had an effect on referential choice, with early-exposed signers producing more classifiers than late-exposed signers, in some cases. Overall, our results suggest that linguistic, pragmatic and developmental factors affect referential choice in ASL, supporting the hypothesis that communicative efficiency is an important factor in shaping language structure and use.
13
Gappmayr P, Lieberman AM, Pyers J, Caselli NK. Do parents modify child-directed signing to emphasize iconicity? Front Psychol 2022; 13:920729. PMID: 36092032. PMCID: PMC9453873. DOI: 10.3389/fpsyg.2022.920729.
Abstract
Iconic signs are overrepresented in the vocabularies of young deaf children, but it is unclear why. It is possible that iconic signs are easier for children to learn, but it is also possible that adults use iconic signs in child-directed signing in ways that make them more learnable, either by using them more often than less iconic signs or by lengthening them. We analyzed videos of naturalistic play sessions between parents and deaf children (n = 24 dyads) aged 9-60 months. To determine whether iconic signs are overrepresented during child-directed signing, we compared the iconicity of actual parent productions to the iconicity of simulated vocabularies designed to estimate chance levels of iconicity. For almost all dyads, parent sign types and tokens were not more iconic than the simulated vocabularies, suggesting that parents do not select more iconic signs during child-directed signing. To determine whether iconic signs are more likely to be lengthened, we ran a linear regression predicting sign duration, and found an interaction between age and iconicity: while parents of younger children produced non-iconic and iconic signs with similar durations, parents of older children produced non-iconic signs with shorter durations than iconic signs. Thus, parents sign more quickly with older children than younger children, and iconic signs appear to resist that reduction in sign length. It is possible that iconic signs are perceptually available longer, and their availability is a candidate hypothesis as to why iconic signs are overrepresented in children's vocabularies.
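A minimal sketch of the kind of model described (a linear regression predicting sign duration with an age by iconicity interaction) is shown below. The simulated durations, predictor names, and effect sizes are invented for illustration and are not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative data only (not the study's): one row per parent sign token,
# with child age in months, an iconicity indicator, and sign duration (ms).
rng = np.random.default_rng(1)
n = 500
age = rng.uniform(9, 60, n)
iconic = rng.integers(0, 2, n)
# Simulated pattern from the abstract: durations shorten with age,
# but iconic signs resist that shortening.
duration = 900 - 6 * age + 4.5 * age * iconic + rng.normal(0, 80, n)
df = pd.DataFrame({"age_months": age, "iconic": iconic, "duration_ms": duration})

# Linear regression predicting sign duration with an age x iconicity
# interaction, mirroring the analysis described above.
fit = smf.ols("duration_ms ~ age_months * iconic", data=df).fit()
print(fit.summary().tables[1])
```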
Affiliation(s)
- Paris Gappmayr
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, United States
- Amy M. Lieberman
- Wheelock College of Education and Human Development, Boston University, Boston, MA, United States
- Jennie Pyers
- Department of Psychology, Wellesley College, Wellesley, MA, United States
- Naomi K. Caselli
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, United States
14
Hou L. A Usage-Based Proposal for Argument Structure of Directional Verbs in American Sign Language. Front Psychol 2022; 13:808493. PMID: 35664145. PMCID: PMC9157181. DOI: 10.3389/fpsyg.2022.808493.
Abstract
Verb agreement in signed languages has received substantial attention for a long time. Despite the numerous analyses about the linguistic status of verb agreement, there is little discussion about the argument structure associated with "directional verbs," also known as agreeing/agreement or indicating verbs. This paper proposes a usage-based approach for analyzing argument structure constructions of directional verbs in American Sign Language (ASL). The proposal offers low-level constructions for reported speech, non-dedicated passive and reflexive, and stance verb constructions, which capture the patterns, abstracted from recurring usage events, that are part of users' linguistic knowledge. The approach has the potential to push the field of sign linguistics in new directions for understanding the interplay of language use and structure.
Affiliation(s)
- Lynn Hou
- Department of Linguistics, University of California, Santa Barbara, Santa Barbara, CA, United States
15
Lieberman AM, Fitch A, Borovsky A. Flexible fast-mapping: Deaf children dynamically allocate visual attention to learn novel words in American Sign Language. Dev Sci 2022; 25:e13166. PMID: 34355837. PMCID: PMC8818049. DOI: 10.1111/desc.13166.
Abstract
Word learning in young children requires coordinated attention between language input and the referent object. Current accounts of word learning are based on spoken language, where the association between language and objects occurs through simultaneous and multimodal perception. In contrast, deaf children acquiring American Sign Language (ASL) perceive both linguistic and non-linguistic information through the visual mode. In order to coordinate attention to language input and its referents, deaf children must allocate visual attention optimally between objects and signs. We conducted two eye-tracking experiments to investigate how young deaf children allocate attention and process referential cues in order to fast-map novel signs to novel objects. Participants were deaf children learning ASL between the ages of 17 and 71 months. In Experiment 1, participants (n = 30) were presented with a novel object and a novel sign, along with a referential cue that occurred either before or after the sign label. In Experiment 2, a new group of participants (n = 32) were presented with two novel objects and a novel sign, so that the referential cue was critical for identifying the target object. Across both experiments, participants showed evidence for fast-mapping the signs regardless of the timing of the referential cue. Individual differences in children's allocation of attention during exposure were correlated with their ability to fast-map the novel signs at test. This study provides first evidence for fast-mapping in sign language, and contributes to theoretical accounts of how word learning develops when all input occurs in the visual modality.
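The reported link between attention allocation during exposure and fast-mapping ability amounts to a simple correlation across children. A sketch with invented per-child values (hypothetical gaze proportions and accuracy scores, not study data) is below.

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative values only: per-child proportion of exposure time spent
# attending to the sign label, and fast-mapping accuracy at test.
rng = np.random.default_rng(2)
n_children = 30
prop_gaze_to_sign = rng.uniform(0.2, 0.8, n_children)
accuracy = np.clip(0.3 + 0.6 * prop_gaze_to_sign
                   + rng.normal(0, 0.1, n_children), 0, 1)

# Correlation between attention allocation during exposure and
# fast-mapping accuracy, as described in the abstract.
r, p = pearsonr(prop_gaze_to_sign, accuracy)
print(f"r = {r:.2f}, p = {p:.3f}")
```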
Affiliation(s)
- Amy M Lieberman
- Wheelock College of Education and Human Development, Boston University, 2 Silber Way, Boston, Massachusetts, 02215, USA
- Allison Fitch
- Department of Psychology, Rochester Institute of Technology, Rochester, New York, USA
- Arielle Borovsky
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
16
James TG, Coady KA, Stacciarini JMR, McKee MM, Phillips DG, Maruca D, Cheong J. "They're Not Willing To Accommodate Deaf patients": Communication Experiences of Deaf American Sign Language Users in the Emergency Department. Qual Health Res 2022; 32:48-63. PMID: 34823402. DOI: 10.1177/10497323211046238.
Abstract
Deaf people who use American Sign Language (ASL) are more likely to use the emergency department (ED) than their hearing English-speaking counterparts and are also at higher risk of receiving inaccessible communication. The purpose of this study is to explore the ED communication experience of Deaf patients. A descriptive qualitative study was performed by interviewing 11 Deaf people who had used the ED in the past 2 years. Applying a descriptive thematic analysis, we developed five themes: (1) requesting communication access can be stressful, frustrating, and time-consuming; (2) perspectives and experiences with Video Remote Interpreting (VRI); (3) expectations, benefits, and drawbacks of using on-site ASL interpreters; (4) written and oral communication provides insufficient information to Deaf patients; and (5) ED staff and providers lack cultural sensitivity and awareness towards Deaf patients. Findings are discussed with respect to medical and interpreting ethics to improve ED communication for Deaf patients.
Affiliation(s)
- Tyler G James
- Department of Health Education and Behavior, University of Florida, Gainesville, FL, USA
- Kyle A Coady
- Department of Health Education and Behavior, University of Florida, Gainesville, FL, USA
- Jeanne-Marie R Stacciarini
- Department of Family, Community and Health System Science, College of Nursing, University of Florida, Gainesville, FL, USA
- Michael M McKee
- Department of Family Medicine, University of Michigan, Ann Arbor, MI, USA
- JeeWon Cheong
- Department of Health Education and Behavior, University of Florida, Gainesville, FL, USA
17
Anderson SM, Flores AL, Baldwin LZ, Phillips CP, Meunier J. Closing the Information Gap: Making COVID-19 Information Accessible for People with Disabilities. Assist Technol Outcomes Benefits 2022; 16:86-103. PMID: 38618159. PMCID: PMC11010351.
Abstract
It is essential that people with disabilities have equitable access to COVID-19 communication resources to protect themselves, their families, and their communities. The Accessible Materials and Culturally Relevant Messages for Individuals with Disabilities project aimed to deliver essential COVID-19 information in braille, American Sign Language (ASL), simplified text, and other alternative formats, along with providing additional tools and trainings that people with disabilities and organizations that serve them can use to apply the COVID-19 guidance. Lessons learned from this project can be implemented in future public health emergencies as well as in general public health messaging for people with disabilities. This project, led by Georgia Tech's Center for Inclusive Design and Innovation (CIDI) and with technical assistance from the Centers for Disease Control and Prevention (CDC), was supported by the CDC Foundation, using funds from the CDC Foundation's COVID-19 Emergency Response Fund.
Affiliation(s)
- Alina L. Flores
- Centers for Disease Control and Prevention, College of Design, Georgia Institute of Technology
- Laura Z. Baldwin
- Centers for Disease Control and Prevention, College of Design, Georgia Institute of Technology
- Carolyn P. Phillips
- Center for Inclusive Design and Innovation, College of Design, Georgia Institute of Technology
- Jennifer Meunier
- Centers for Disease Control and Prevention, College of Design, Georgia Institute of Technology
18
Abstract
Picture-naming tasks provide critical data for theories of lexical representation and retrieval and have been performed successfully in sign languages. However, the specific influences of lexical or phonological factors and stimulus properties on sign retrieval are poorly understood. To examine lexical retrieval in American Sign Language (ASL), we conducted a timed picture-naming study using 524 pictures (272 objects and 251 actions). We also compared ASL naming with previous data for spoken English for a subset of 425 pictures. Deaf ASL signers named object pictures faster and more consistently than action pictures, as previously reported for English speakers. Higher lexical frequency, greater iconicity, better name agreement, and lower phonological complexity each facilitated naming reaction times (RTs). RTs were also faster for pictures named with shorter signs (measured by average response duration). Target name agreement was higher for pictures with more iconic and shorter ASL names. The visual complexity of pictures slowed RTs and decreased target name agreement. RTs and target name agreement were correlated for ASL and English, but agreement was lower for ASL, possibly due to the English bias of the pictures. RTs were faster for ASL, which we attributed to a smaller lexicon. Overall, the results suggest that models of lexical retrieval developed for spoken languages can be adopted for signed languages, with the exception that iconicity should be included as a factor. The open-source picture-naming data set for ASL serves as an important, first-of-its-kind resource for researchers, educators, and clinicians for a variety of research, instructional, or assessment purposes.
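An item-level regression like the one implied here (naming RT predicted by frequency, iconicity, name agreement, and phonological and visual complexity) can be sketched as follows. All values and predictor scales are simulated for illustration; they are not the released ASL norms.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative item-level data (not the published norms): per-picture
# mean naming RT with hypothetical item predictors.
rng = np.random.default_rng(3)
n_items = 524
df = pd.DataFrame({
    "log_frequency": rng.normal(3, 1, n_items),
    "iconicity": rng.uniform(1, 7, n_items),
    "name_agreement": rng.uniform(0.4, 1.0, n_items),
    "phon_complexity": rng.integers(1, 6, n_items),
    "visual_complexity": rng.uniform(0, 1, n_items),
})
df["rt_ms"] = (1400 - 40 * df["log_frequency"] - 15 * df["iconicity"]
               - 300 * df["name_agreement"] + 25 * df["phon_complexity"]
               + 120 * df["visual_complexity"] + rng.normal(0, 60, n_items))

# Item-level regression of naming RT on the predictors named in the
# abstract (frequency, iconicity, name agreement, phonological and
# visual complexity).
fit = smf.ols("rt_ms ~ log_frequency + iconicity + name_agreement + "
              "phon_complexity + visual_complexity", data=df).fit()
print(fit.params.round(1))
```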
19
Fitch A, Arunachalam S, Lieberman AM. Mapping Word to World in ASL: Evidence from a Human Simulation Paradigm. Cogn Sci 2021; 45:e13061. PMID: 34861057. PMCID: PMC9365062. DOI: 10.1111/cogs.13061.
Abstract
Across languages, children map words to meaning with great efficiency, despite a seemingly unconstrained space of potential mappings. The literature on how children do this is primarily limited to spoken language. This leaves a gap in our understanding of sign language acquisition, because several of the hypothesized mechanisms that children use are visual (e.g., visual attention to the referent), and sign languages are perceived in the visual modality. Here, we used the Human Simulation Paradigm in American Sign Language (ASL) to determine potential cues to word learning. Sign-naïve adult participants viewed video clips of parent-child interactions in ASL, and at a designated point, had to guess what ASL sign the parent produced. Across two studies, we demonstrate that referential clarity in ASL interactions is characterized by access to information about word class and referent presence (for verbs), similarly to spoken language. Unlike spoken language, iconicity is a cue to word meaning in ASL, although this is not always a fruitful cue. We also present evidence that verbs are highlighted well in the input, relative to spoken English. The results shed light on both similarities and differences in the information that learners may have access to in acquiring signed versus spoken languages.
Affiliation(s)
- Allison Fitch
- Deaf Education and Deaf Studies, Boston University; Psychology, Rochester Institute of Technology
20
James TG, Sullivan MK, Butler JD, McKee MM. Promoting health equity for deaf patients through the electronic health record. J Am Med Inform Assoc 2021; 29:213-216. PMID: 34741507. DOI: 10.1093/jamia/ocab239.
Abstract
Language status can be conceptualized as an equity-relevant variable, particularly for non-English-speaking populations. Deaf and hard-of-hearing (DHH) individuals who use American Sign Language (ASL) to communicate comprise one such group and are understudied in health services research. DHH individuals are at high risk of receiving lower-quality care due to ineffective patient-provider communication. This perspective outlines barriers to health equity research serving DHH ASL users that stem from systems developed by large-scale informatics networks (e.g., the Patient-Centered Clinical Outcomes Research Network) and from institutional policies on self-serve cohort discovery tools. We list potential solutions to help adequately capture the language status of DHH ASL users and promote health equity for this population.
Affiliation(s)
- Tyler G James
- Department of Family Medicine, University of Michigan, Ann Arbor, Michigan, USA; Department of Health Education and Behavior, University of Florida, Gainesville, Florida, USA
- Joshua D Butler
- Clinical and Translational Sciences Institute, University of Rochester Medical Center, Rochester, New York, USA
- Michael M McKee
- Department of Family Medicine, University of Michigan, Ann Arbor, Michigan, USA
21
Rotoli JM, Hancock S, Park C, Demers-Mcletchie S, Panko TL, Halle T, Wills J, Scarpino J, Merrill J, Cushman J, Jones C. Emergency Medical Services Communication Barriers and the Deaf American Sign Language User. Prehosp Emerg Care 2021; 26:437-445. PMID: 34060987. DOI: 10.1080/10903127.2021.1936314.
Abstract
Objective: We sought to identify current Emergency Medical Services (EMS) practitioner comfort levels and communication strategies when caring for the Deaf American Sign Language (ASL) user. Additionally, we created and evaluated the effect of an educational intervention and visual communication tool on EMS practitioner comfort levels and communication.
Methods: This was a descriptive study assessing communication barriers at baseline and after the implementation of a novel educational intervention, with cross-sectional surveys conducted at three time points (pre-, immediate post-, and three months post-intervention). Descriptive statistics characterized the study sample, and we quantified responses from the baseline survey and both post-intervention surveys.
Results: There were 148 EMS practitioners who responded to the baseline survey. The majority of participants (74%; 109/148) had previously responded to a 9-1-1 call for a Deaf patient, and 24% (35/148) reported previous training regarding the Deaf community. The majority felt that important details were lost during communication (83%; 90/109), reported that the Deaf patient appeared frustrated during an encounter (72%; 78/109), and felt that communication limited patient care (67%; 73/109). When interacting with a Deaf person, the most common communication strategies included written text (90%; 98/109), a friend/family member (90%; 98/109), lip reading (55%; 60/109), and spoken English (50%; 55/109). Immediately after the training, most participants reported that the educational training expanded their knowledge of Deaf culture (93%; 126/135), communication strategies to use (93%; 125/135), and common pitfalls to avoid (96%; 129/135) when caring for Deaf patients. At 3 months, all participants (100%, 79/79) reported that the educational module was helpful. Some participants (19%, 15/79) also reported using the communication tool with other non-English-speaking patients.
Conclusions: The majority of EMS practitioners reported difficulty communicating with Deaf ASL users and acknowledged a sense of patient frustration. Nearly all participants felt the educational training was beneficial and clinically relevant; three months later, all participants still found it helpful. Additionally, the communication tool may be applicable to other populations that use English as a second language.
Affiliation(s)
- Jason M Rotoli, Sarah Hancock, Chanjun Park, Susan Demers-Mcletchie, Tiffany L Panko, Trevor Halle, Jennifer Wills, Julie Scarpino, Johannah Merrill, Jeremy Cushman, Courtney Jones
- Department of Emergency Medicine, University of Rochester, Rochester, New York (JMR, TH, JW, JS, JM, JC, CJ); School of Medicine and Dentistry, University of Rochester, Rochester, New York (SH, CP); Rochester Institute of Technology, Rochester, New York (TLP); Partners in Deaf Health Inc, Rochester, New York (JMR, SD-M, TLP)
22
Abstract
Implicit causality (IC) biases, the tendency of certain verbs to elicit re-mention of either the first-mentioned noun phrase (NP1) or the second-mentioned noun phrase (NP2) from the previous clause, are important in psycholinguistic research. Understanding IC verbs and the source of their biases in signed as well as spoken languages helps elucidate whether these phenomena are language general or specific to the spoken modality. As the first of its kind, this study investigates IC biases in American Sign Language (ASL) and provides IC bias norms for over 200 verbs, facilitating future psycholinguistic studies of ASL and comparisons of spoken versus signed languages. We investigated whether native ASL signers continued sentences with IC verbs (e.g., ASL equivalents of ‘Lisa annoys Maya because…’) by mentioning NP1 (i.e., Lisa) or NP2 (i.e., Maya). We found a tendency towards more NP2-biased verbs. Previous work has found that a verb’s thematic roles predict bias direction: stimulus-experiencer verbs (e.g., ‘annoy’), where the first argument is the stimulus (causing annoyance) and the second argument is the experiencer (experiencing annoyance), elicit more NP1 continuations. Verbs with experiencer-stimulus thematic roles (e.g., ‘love’) elicit more NP2 continuations. We probed whether the trend towards more NP2-biased verbs was related to an existing claim that stimulus-experiencer verbs do not exist in sign languages. We found that stimulus-experiencer structure, while permitted, is infrequent, impacting the IC bias distribution in ASL. Nevertheless, thematic roles predict IC bias in ASL, suggesting that the thematic role-IC bias relationship is stable across languages as well as modalities.
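Computing an IC bias norm of the sort described reduces to the proportion of NP1 versus NP2 continuations per verb. The sketch below shows that calculation on a toy set of coded continuations; the verbs and codes are invented, not the published norms.

```python
import pandas as pd

# Illustrative continuation data (not the published norms): one row per
# sentence continuation, coded for whether the re-mentioned referent was
# the first (NP1) or second (NP2) noun phrase.
data = pd.DataFrame({
    "verb": ["ANNOY", "ANNOY", "ANNOY", "LOVE", "LOVE", "LOVE"],
    "continuation": ["NP1", "NP1", "NP2", "NP2", "NP2", "NP1"],
})

# IC bias per verb: proportion of NP1 continuations; values above 0.5
# indicate an NP1 bias, below 0.5 an NP2 bias.
bias = (data.assign(np1=lambda d: (d["continuation"] == "NP1").astype(int))
            .groupby("verb")["np1"].mean()
            .rename("prop_NP1"))
print(bias)
```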
23
Bailey N, Kaarto P, Burkey J, Bright D, Sohn M. Evaluation of an American Sign Language co-curricular training for pharmacy students. Curr Pharm Teach Learn 2021; 13:68-72. PMID: 33131621. DOI: 10.1016/j.cptl.2020.08.002.
Abstract
Background and purpose: With approximately one million d/Deaf and Hard of Hearing (HOH) patients who use American Sign Language (ASL) and the strong potential for communication barriers to adversely influence patient care outcomes, strategies must be developed to support health care professionals and students in learning to better care for d/Deaf and HOH patients. The primary objective of this project was to implement and assess a co-curricular course focused on helping student pharmacists become more confident and comfortable in communicating with d/Deaf and HOH patients.
Educational activity and setting: The co-curricular course (ASL for the Pharmacy Professional) consisted of four 90-minute classes, each covering different words/phrases and Deaf cultural competence. Students were taught basic ASL, including the alphabet, numbers, vocabulary, and sentence structure. Deaf culture and d/Deaf patient interaction were also covered. Students interacted with a Deaf physician over Skype and with d/Deaf and HOH individuals from the local community.
Findings: Pre- and post-surveys containing an identical set of questions were administered before and after course completion. The surveys assessed the confidence and comfort of first- and second-professional-year student pharmacists regarding Deaf culture and interacting with d/Deaf and HOH patients. Following the course, students reported significantly improved confidence in communicating with d/Deaf patients directly and with a translator.
Summary: Following completion of a co-curricular ASL course, doctor of pharmacy students perceived an increase in confidence in working with d/Deaf and HOH patients. Program logistics were simplified through collaboration with an outside entity.
Collapse
Affiliation(s)
- Nicole Bailey
- Ferris State University College of Pharmacy, 220 Ferris Dr., Big Rapids, MI 49307, United States.
| | - Patricia Kaarto
- Ferris State University College of Pharmacy, 220 Ferris Dr., Big Rapids, MI 49307, United States.
| | - Jessica Burkey
- Ferris State University College of Pharmacy, 220 Ferris Dr., Big Rapids, MI 49307, United States.
| | - David Bright
- Department of Pharmaceutical Sciences, Ferris State University College of Pharmacy, 220 Ferris Dr., Big Rapids, MI 49307, United States.
| | - Minji Sohn
- Department of Pharmaceutical Sciences, Ferris State University College of Pharmacy, 220 Ferris Dr., Big Rapids, MI 49307, United States.
| |
Collapse
|
24
|
Cheng Q, Mayberry RI. When event knowledge overrides word order in sentence comprehension: Learning a first language after childhood. Dev Sci 2020; 24:e13073. [PMID: 33296520 DOI: 10.1111/desc.13073] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2020] [Revised: 08/27/2020] [Accepted: 11/26/2020] [Indexed: 11/28/2022]
Abstract
Limited language experience in childhood is common among deaf individuals, which prior research has shown to lead to low levels of language processing. Although basic structures such as word order have been found to be resilient to conditions of sparse language input in early life, whether they are robust to conditions of extreme language delay is unknown. The sentence comprehension strategies of post-childhood, first-language (L1) learners of American Sign Language (ASL) with at least 9 years of language experience were investigated, in comparison to two control groups of learners with full access to language from birth (deaf native signers and hearing L2 learners who were native English speakers). The results of a sentence-to-picture matching experiment show that event knowledge overrides word order for post-childhood L1 learners, regardless of the animacy of the subject, while both deaf native signers and hearing L2 signers consistently rely on word order to comprehend sentences. Language inaccessibility throughout early childhood impedes the acquisition of even basic word order. Similar to the strategies used by very young children prior to the development of basic sentence structure, post-childhood L1 learners rely more on context and event knowledge to comprehend sentences. Language experience during childhood is critical to the development of basic sentence structure.
Collapse
Affiliation(s)
- Qi Cheng
- Department of Linguistics, University of Washington, Seattle, WA, USA; University of California, San Diego, La Jolla, CA, USA
| | | |
Collapse
|
25
|
Lieberman AM, Borovsky A. Lexical Recognition in Deaf Children Learning American Sign Language: Activation of Semantic and Phonological Features of Signs. Lang Learn 2020; 70:935-973. [PMID: 33510545 PMCID: PMC7837603 DOI: 10.1111/lang.12409] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Children learning language efficiently process single words and activate semantic, phonological, and other features of words during recognition. We investigated lexical recognition in deaf children acquiring American Sign Language (ASL) to determine how perceiving language in the visual-spatial modality affects lexical recognition. Twenty native- or early-exposed signing deaf children (ages 4 to 8 years) participated in a visual world eye-tracking study. Children were presented with a single ASL sign, target picture, and three competitor pictures that varied in their phonological and semantic relationship to the target. Children shifted gaze to the target picture shortly after sign offset. Children showed robust evidence for activation of semantic but not phonological features of signs; however, in their behavioral responses, children were most susceptible to phonological competitors. Results demonstrate that single word recognition in ASL is largely parallel to spoken language recognition among children who are developing a mature lexicon.
Collapse
Affiliation(s)
- Amy M Lieberman
- Language and Literacy Department, Wheelock College of Education and Human Development, Boston University, 2 Silber Way, Boston, MA 02215
| | - Arielle Borovsky
- Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, IN 47907-2122
| |
Collapse
|
26
|
Anderson ML, Wolf Craig KS, Hostovsky S, Bligh M, Bramande E, Walker K, Biebel K, Byatt N. Creating the Capacity to Screen Deaf Women for Perinatal Depression: A Pilot Study. Midwifery 2020; 92:102867. [PMID: 33166783 DOI: 10.1016/j.midw.2020.102867] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2020] [Revised: 10/14/2020] [Accepted: 10/20/2020] [Indexed: 11/27/2022]
Abstract
OBJECTIVE Compared to hearing women, Deaf female sign language users receive sub-optimal maternal health care and report more dissatisfaction with their prenatal care experiences. As healthcare providers begin to regularly screen for perinatal depression, validated screening tools are not accessible to Deaf women due to severe disparities in English literacy and health literacy. DESIGN AND SETTING We conducted a one-year, community-engaged pilot study to create an initial American Sign Language (ASL) translation of the Edinburgh Postnatal Depression Scale (EPDS); conduct videophone screening interviews with Deaf perinatal women from across the United States; and perform preliminary statistical analyses of the resulting pilot data. PARTICIPANTS We enrolled 36 Deaf perinatal women between 5 weeks gestation up to one year postpartum. MEASUREMENTS AND FINDINGS Results supported the internal consistency of the full ASL EPDS, but did not provide evidence of internal consistency for the anxiety or depression subscales when presented in our ASL format. Participants reported a mean total score of 5.6 out of 30 points on the ASL EPDS (SD = 4.2). Thirty-one percent of participants reported scores in the mild depression range, six percent in the moderate range, and none in the severe range. KEY CONCLUSIONS AND IMPLICATIONS Limitations included small sample size, a restricted range of depression scores, non-normality of our distribution, and lack of a fully-standardized ASL EPDS administration due to our interview approach. Informed by study strengths, limitations, and lessons learned, future efforts will include a larger, more robust psychometric study to inform the development of a Computer-Assisted Self-Interviewing version of the ASL EPDS with automated scoring functions that hearing, non-signing medical providers can use to screen Deaf women for perinatal depression.
Collapse
Affiliation(s)
- Melissa L Anderson
- Implementation Science & Practice Advances Research Center (iSPARC), Department of Psychiatry, University of Massachusetts Medical School, 222 Maple Avenue, Chang Building, Shrewsbury, MA 01545, USA.
| | - Kelly S Wolf Craig
- Implementation Science & Practice Advances Research Center (iSPARC), Department of Psychiatry, University of Massachusetts Medical School, 222 Maple Avenue, Chang Building, Shrewsbury, MA 01545, USA
| | - Sheri Hostovsky
- Implementation Science & Practice Advances Research Center (iSPARC), Department of Psychiatry, University of Massachusetts Medical School, 222 Maple Avenue, Chang Building, Shrewsbury, MA 01545, USA
| | - Maureen Bligh
- Implementation Science & Practice Advances Research Center (iSPARC), Department of Psychiatry, University of Massachusetts Medical School, 222 Maple Avenue, Chang Building, Shrewsbury, MA 01545, USA
| | - Emily Bramande
- Implementation Science & Practice Advances Research Center (iSPARC), Department of Psychiatry, University of Massachusetts Medical School, 222 Maple Avenue, Chang Building, Shrewsbury, MA 01545, USA; Department of Psychology, Gallaudet University, 800 Florida Avenue, NE, Washington, DC 20002, USA
| | - Kristin Walker
- Implementation Science & Practice Advances Research Center (iSPARC), Department of Psychiatry, University of Massachusetts Medical School, 222 Maple Avenue, Chang Building, Shrewsbury, MA 01545, USA
| | - Kathleen Biebel
- Implementation Science & Practice Advances Research Center (iSPARC), Department of Psychiatry, University of Massachusetts Medical School, 222 Maple Avenue, Chang Building, Shrewsbury, MA 01545, USA; Massachusetts Rehabilitation Commission, 600 Washington St, Boston, MA 02111, USA
| | - Nancy Byatt
- Implementation Science & Practice Advances Research Center (iSPARC), Department of Psychiatry, University of Massachusetts Medical School, 222 Maple Avenue, Chang Building, Shrewsbury, MA 01545, USA
| |
Collapse
|
27
|
McGarry ME, Mott M, Midgley KJ, Holcomb PJ, Emmorey K. Picture-naming in American Sign Language: an electrophysiological study of the effects of iconicity and structured alignment. Lang Cogn Neurosci 2020; 36:199-210. [PMID: 33732747 PMCID: PMC7959108 DOI: 10.1080/23273798.2020.1804601] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/22/2020] [Accepted: 07/25/2020] [Indexed: 06/12/2023]
Abstract
A picture-naming task and ERPs were used to investigate effects of iconicity and visual alignment between signs and pictures in American Sign Language (ASL). For iconic signs, half the pictures visually overlapped with phonological features of the sign (e.g., the fingers of CAT align with a picture of a cat with prominent whiskers), while half did not (whiskers are not shown). Iconic signs were produced numerically faster than non-iconic signs and were associated with larger N400 amplitudes, akin to concreteness effects. Pictures aligned with iconic signs were named faster than non-aligned pictures, and there was a reduction in N400 amplitude. No behavioral effects were observed for the control group (English speakers). We conclude that sensory-motoric semantic features are represented more robustly for iconic than non-iconic signs (eliciting a concreteness-like N400 effect) and visual overlap between pictures and the phonological form of iconic signs facilitates lexical retrieval (eliciting a reduced N400).
Collapse
Affiliation(s)
- Meghan E. McGarry
- Joint Doctoral Program in Language and Communication Disorders, San Diego State University and University of California, San Diego, San Diego, CA USA
| | - Megan Mott
- Department of Psychology, San Diego State University, San Diego, CA USA
| | | | | | - Karen Emmorey
- School of Speech, Language and Hearing Sciences, San Diego State University, San Diego, CA USA
| |
Collapse
|
28
|
Wienholz A, Lieberman AM. Semantic processing of adjectives and nouns in American Sign Language: effects of reference ambiguity and word order across development. J Cult Cogn Sci 2019; 3:217-34. [PMID: 32405616 DOI: 10.1007/s41809-019-00024-6] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
When processing spoken language sentences, listeners continuously make and revise predictions about the upcoming linguistic signal. In contrast, during comprehension of American Sign Language (ASL), signers must simultaneously attend to the unfolding linguistic signal and the surrounding scene via the visual modality. This may affect how signers activate potential lexical candidates and allocate visual attention as a sentence unfolds. To determine how signers resolve referential ambiguity during real-time comprehension of ASL adjectives and nouns, we presented deaf adults (n = 18, 19-61 years) and deaf children (n = 20, 4-8 years) with videos of ASL sentences in a visual world paradigm. Sentences had either an adjective-noun ("SEE YELLOW WHAT? FLOWER") or a noun-adjective ("SEE FLOWER WHICH? YELLOW") structure. The degree of ambiguity in the visual scene was manipulated at the adjective and noun levels (i.e., including one or more yellow items and one or more flowers in the visual array). We investigated effects of ambiguity and word order on target looking at early and late points in the sentence. Analysis revealed that adults and children made anticipatory looks to a target when it could be identified early in the sentence. Further, signers looked more to potential lexical candidates than to unrelated competitors in the early window, and more to matched than unrelated competitors in the late window. Children's gaze patterns largely aligned with those of adults with some divergence. Together, these findings suggest that signers allocate referential attention strategically based on the amount and type of ambiguity at different points in the sentence when processing adjectives and nouns in ASL.
Collapse
|
29
|
Lynn MA, Butcher E, Cuculick JA, Barnett S, Martina CA, Smith SR, Pollard RQ, Simpson-Haidaris PJ. A review of mentoring deaf and hard-of-hearing scholars. ACTA ACUST UNITED AC 2020; 28:211-228. [PMID: 32489313 DOI: 10.1080/13611267.2020.1749350] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
Diversification of the scientific workforce usually focuses on recruitment and retention of women and underrepresented racial and ethnic minorities but often overlooks deaf and hard-of-hearing (D/HH) persons. Usually classified as a disability group, such persons are often members of their own sociocultural linguistic minority and deserve unique support. For them, access to technical and social information is often hindered by communication- and/or language-centered barriers, but securing and using communication access services is just a start. Critical aspects of training D/HH scientists as part of a diversified workforce necessitate: (a) educating hearing persons in cross-cultural dynamics pertaining to deafness, sign language, and Deaf culture; (b) ensuring access to formal and incidental information to support development of professional soft skills; and (c) understanding that institutional infrastructure change may be necessary to ensure success. Mentorship and training programs that implement these criteria are now creating a new generation of D/HH scientists.
Collapse
Affiliation(s)
- Matthew A Lynn
- Department of Science and Mathematics, National Technical Institute for the Deaf, Rochester Institute of Technology, Rochester, NY 14623
| | - Elizabeth Butcher
- Access Services, University of Rochester School of Medicine & Dentistry, Rochester, NY 14642
| | - Jessica A Cuculick
- Center on Cognition and Language, National Technical Institute for the Deaf, Rochester Institute of Technology, Rochester, NY 14623
| | - Steven Barnett
- Departments of Family Medicine, Public Health Sciences and the National Center for Deaf Health Research, University of Rochester School of Medicine & Dentistry, Rochester, NY 14642
| | - Camille A Martina
- Departments of Public Health Sciences and Environmental Medicine, University of Rochester School of Medicine & Dentistry, Rochester, NY 14642
| | - Scott R Smith
- Office of the Associate Dean of Research, National Technical Institute for the Deaf, Rochester Institute of Technology, Rochester, NY 14623
| | - Robert Q Pollard
- Office of the Associate Dean of Research, National Technical Institute for the Deaf, Rochester Institute of Technology, Rochester, NY 14623; Deaf Wellness Center, University of Rochester School of Medicine & Dentistry, Rochester, NY 14642
| | - Patricia J Simpson-Haidaris
- Departments of Medicine, Microbiology & Immunology and Pathology, University of Rochester School of Medicine & Dentistry, Rochester, NY 14642
| |
Collapse
|
30
|
Emmorey K, Winsler K, Midgley KJ, Grainger J, Holcomb PJ. Neurophysiological Correlates of Frequency, Concreteness, and Iconicity in American Sign Language. Neurobiol Lang (Camb) 2020; 1:249-267. [PMID: 33043298 PMCID: PMC7544239 DOI: 10.1162/nol_a_00012] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/07/2019] [Accepted: 04/16/2020] [Indexed: 05/21/2023]
Abstract
To investigate possible universal and modality-specific factors that influence the neurophysiological response during lexical processing, we recorded event-related potentials while a large group of deaf adults (n = 40) viewed 404 signs in American Sign Language (ASL) that varied in ASL frequency, concreteness, and iconicity. Participants performed a go/no-go semantic categorization task (does the sign refer to people?) to videoclips of ASL signs (clips began with the signer's hands at rest). Linear mixed-effects regression models were fit with per-participant, per-trial, and per-electrode data, allowing us to identify unique effects of each lexical variable. We observed an early effect of frequency (greater negativity for less frequent signs) beginning at 400 ms postvideo onset at anterior sites, which we interpreted as reflecting form-based lexical processing. This effect was followed by a more widely distributed posterior response that we interpreted as reflecting lexical-semantic processing. Paralleling spoken language, more concrete signs elicited greater negativities, beginning 600 ms postvideo onset with a wide scalp distribution. Finally, there were no effects of iconicity (except for a weak effect in the latest epochs; 1,000-1,200 ms), suggesting that iconicity does not modulate the neural response during sign recognition. Despite the perceptual and sensorimotoric differences between signed and spoken languages, the overall results indicate very similar neurophysiological processes underlie lexical access for both signs and words.
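As a rough illustration of the trial-level analysis named in the abstract, the sketch below regresses amplitude on the three lexical predictors with random intercepts by participant. The data file and variable names are hypothetical, and the simplification to a single grouping factor is ours, not the authors' full per-trial, per-electrode model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant x trial x electrode,
# with a mean-amplitude measure and the three lexical predictors per sign.
erp = pd.read_csv("asl_erp_trials.csv")  # hypothetical file name

# Random intercepts by participant only; the published analysis also modeled
# per-trial and per-electrode structure, which this sketch omits for brevity.
model = smf.mixedlm(
    "amplitude ~ frequency + concreteness + iconicity",
    data=erp,
    groups=erp["participant"],
)
print(model.fit().summary())
```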
Collapse
Affiliation(s)
| | - Kurt Winsler
- Department of Psychology, University of California, Davis
| | | | - Jonathan Grainger
- Laboratoire de Psychologie Cognitive, Aix-Marseille University, Centre National de la Recherche Scientifique
| | | |
Collapse
|
31
|
Lee B, Meade G, Midgley KJ, Holcomb PJ, Emmorey K. ERP Evidence for Co-Activation of English Words during Recognition of American Sign Language Signs. Brain Sci 2019; 9:E148. [PMID: 31234356 PMCID: PMC6627215 DOI: 10.3390/brainsci9060148] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2019] [Revised: 06/18/2019] [Accepted: 06/20/2019] [Indexed: 11/17/2022] Open
Abstract
Event-related potentials (ERPs) were used to investigate co-activation of English words during recognition of American Sign Language (ASL) signs. Deaf and hearing signers viewed pairs of ASL signs and judged their semantic relatedness. Half of the semantically unrelated signs had English translations that shared an orthographic and phonological rime (e.g., BAR-STAR) and half did not (e.g., NURSE-STAR). Classic N400 and behavioral semantic priming effects were observed in both groups. For hearing signers, targets in sign pairs with English rime translations elicited a smaller N400 compared to targets in pairs with unrelated English translations. In contrast, a reversed N400 effect was observed for deaf signers: target signs in English rime translation pairs elicited a larger N400 compared to targets in pairs with unrelated English translations. This reversed effect was overtaken by a later, more typical ERP priming effect for deaf signers who were aware of the manipulation. These findings provide evidence that implicit language co-activation in bimodal bilinguals is bidirectional. However, the distinct pattern of effects in deaf and hearing signers suggests that it may be modulated by differences in language proficiency and dominance as well as by asymmetric reliance on orthographic versus phonological representations.
Collapse
Affiliation(s)
- Brittany Lee
- Joint Doctoral Program in Language and Communicative Disorders, San Diego State University and University of California, San Diego, CA 92182, USA.
| | - Gabriela Meade
- Joint Doctoral Program in Language and Communicative Disorders, San Diego State University and University of California, San Diego, CA 92182, USA.
| | | | | | - Karen Emmorey
- Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA 92182, USA.
| |
Collapse
|
32
|
Sehyr ZS, Emmorey K. The perceived mapping between form and meaning in American Sign Language depends on linguistic knowledge and task: evidence from iconicity and transparency judgments. Lang Cogn 2019; 11:208-234. [PMID: 31798755 PMCID: PMC6886719 DOI: 10.1017/langcog.2019.18] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
Iconicity is often defined as the resemblance between a form and a given meaning, while transparency is defined as the ability to infer a given meaning based on the form. This study examined the influence of knowledge of American Sign Language (ASL) on the perceived iconicity of signs and the relationship between iconicity, transparency (correctly guessed signs), 'perceived transparency' (transparency ratings of the guesses), and 'semantic potential' (the diversity, or H index, of guesses). Experiment 1 compared iconicity ratings by deaf ASL signers and hearing non-signers for 991 signs from the ASL-LEX database. Signers' and non-signers' ratings were highly correlated; however, the groups provided different iconicity ratings for subclasses of signs: nouns vs. verbs, handling vs. entity, and one- vs. two-handed signs. In Experiment 2, non-signers guessed the meaning of 430 signs and rated them for how transparent their guessed meaning would be for others. Only 10% of guesses were correct. Iconicity ratings correlated with transparency (correct guesses), perceived transparency ratings, and semantic potential (H index). Further, some iconic signs were perceived as non-transparent and vice versa. The study demonstrates that linguistic knowledge mediates perceived iconicity distinctly from gesture and highlights critical distinctions between iconicity, transparency (perceived and objective), and semantic potential.
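If the H index is read as Shannon diversity over the distribution of guesses (an assumption made here purely for illustration), 'semantic potential' for a single sign can be sketched as follows, with invented guesses.

```python
import math
from collections import Counter

def guess_diversity(guesses: list) -> float:
    """Shannon diversity (H) over the distribution of distinct guesses:
    H = -sum(p_i * ln(p_i)); higher H means more diverse, less agreed-upon guesses."""
    counts = Counter(g.strip().lower() for g in guesses)
    total = sum(counts.values())
    return -sum((n / total) * math.log(n / total) for n in counts.values())

# Invented guesses for a single sign from several non-signers.
print(round(guess_diversity(["bird", "bird", "chicken", "duck", "bird"]), 3))
```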
Collapse
|
33
|
Frederiksen AT, Mayberry RI. Reference tracking in early stages of different modality L2 acquisition: Limited over-explicitness in novice ASL signers' referring expressions. Second Lang Res 2019; 35:253-283. [PMID: 31656363 PMCID: PMC6814168 DOI: 10.1177/0267658317750220] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Previous research on reference tracking has revealed a tendency towards over-explicitness in second language (L2) learners. Only limited evidence exists that this trend extends to situations where the learner's first and second languages do not share a sensory-motor modality. Using a story-telling paradigm, this study examined how hearing novice L2 learners accomplish reference tracking in American Sign Language (ASL), and whether they transfer strategies from gesture. Our results revealed limited evidence of over-explicitness. Instead, there was an overall similarity in the L2 learners' reference tracking to that of a native signer control group, even in the use of lexical nominals, pronouns, and zero anaphora - areas where research on spoken L2 reference tracking predicts differences. Our data also revealed, however, that L2 learners have problems with the referential value of ASL classifiers, and with target-like use of zero anaphora from different verb types, as well as spatial modification. This suggests that over-explicitness occurs in the early stages of different modality L2 acquisition to a limited extent. We found no evidence of gestural transfer. Finally, we found that L2 learners reintroduce referents more often than native signers, which could indicate that they, unlike native signers, are not yet capable of utilizing the affordances of the visual modality to reference multiple entities simultaneously.
Collapse
|
34
|
Abstract
Previous studies suggest that age of acquisition affects the outcomes of learning, especially at the morphosyntactic level. Unknown is how syntactic development is affected by increased cognitive maturity and delayed language onset. The current paper studied the early syntactic development of adolescent first language learners by examining word order patterns in American Sign Language (ASL). ASL uses a basic Subject-Verb-Object order, but also employs multiple word order variations. Child learners produce variable word order at the initial stage of acquisition, but later primarily produce canonical word order. We asked whether adolescent first language learners acquire ASL word order in a fashion parallel to child learners. We analyzed word order preference in spontaneous language samples from four adolescent L1 learners collected longitudinally from 12 months to six years of ASL exposure. Our results suggest that adolescent L1 learners go through stages similar to child native learners, although this process also appears to be prolonged.
Collapse
Affiliation(s)
- Qi Cheng
- Department of Linguistics, University of California San Diego, USA
| | | |
Collapse
|
35
|
Chong TW, Lee BG. American Sign Language Recognition Using Leap Motion Controller with Machine Learning Approach. Sensors (Basel) 2018; 18:E3554. [PMID: 30347776 DOI: 10.3390/s18103554] [Citation(s) in RCA: 44] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/11/2018] [Revised: 10/16/2018] [Accepted: 10/17/2018] [Indexed: 11/26/2022]
Abstract
Sign language is intentionally designed to allow deaf and dumb communities to convey messages and to connect with society. Unfortunately, learning and practicing sign language is not common among society; hence, this study developed a sign language recognition prototype using the Leap Motion Controller (LMC). Many existing studies have proposed methods for incomplete sign language recognition, whereas this study aimed for full American Sign Language (ASL) recognition, which consists of 26 letters and 10 digits. Most of the ASL letters are static (no movement), but certain ASL letters are dynamic (they require certain movements). Thus, this study also aimed to extract features from finger and hand motions to differentiate between the static and dynamic gestures. The experimental results revealed that the sign language recognition rates for the 26 letters using a support vector machine (SVM) and a deep neural network (DNN) are 80.30% and 93.81%, respectively. Meanwhile, the recognition rates for a combination of 26 letters and 10 digits are slightly lower, approximately 72.79% for the SVM and 88.79% for the DNN. As a result, the sign language recognition system has great potential for reducing the gap between deaf and dumb communities and others. The proposed prototype could also serve as an interpreter for the deaf and dumb in everyday life in service sectors, such as at the bank or post office.
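For readers unfamiliar with the classification step, a minimal sketch of an SVM baseline on precomputed hand-feature vectors is shown below. The feature files, hyperparameters, and train/test split are placeholders rather than the authors' actual Leap Motion pipeline, and the DNN variant is not reproduced here.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical arrays: each row is a feature vector derived from Leap Motion
# frames (e.g., fingertip positions and angles); labels are ASL letters/digits.
X = np.load("leap_features.npy")  # shape (n_samples, n_features), placeholder file
y = np.load("leap_labels.npy")    # shape (n_samples,), placeholder file

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Standardize features and fit an RBF-kernel SVM, a common baseline for static
# gestures; dynamic gestures would additionally require motion features.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X_train, y_train)
print("Held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```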
Collapse
|
36
|
Giustolisi B, Emmorey K. Visual Statistical Learning With Stimuli Presented Sequentially Across Space and Time in Deaf and Hearing Adults. Cogn Sci 2018; 42:3177-3190. [PMID: 30320454 DOI: 10.1111/cogs.12691] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2017] [Revised: 09/05/2018] [Accepted: 09/10/2018] [Indexed: 11/27/2022]
Abstract
This study investigated visual statistical learning (VSL) in 24 deaf signers and 24 hearing non-signers. Previous research with hearing individuals suggests that statistical learning mechanisms support literacy. Our first goal was to assess whether VSL was associated with reading ability in deaf individuals, and whether this relation was sustained by a link between VSL and sign language skill. Our second goal was to test the Auditory Scaffolding Hypothesis, which predicts that deaf people should be impaired in sequential processing tasks. For the VSL task, we adopted a modified version of the triplet learning paradigm, with stimuli presented sequentially across space and time. Results revealed that measures of sign language skill (sentence comprehension/repetition) did not correlate with VSL scores, possibly due to the sequential nature of our VSL task. Reading comprehension scores (PIAT-R) were a significant predictor of VSL accuracy in hearing but not deaf people. This finding might be due to the sequential nature of the VSL task and to a less salient role of the sequential orthography-to-phonology mapping in deaf readers compared to hearing readers. The two groups did not differ in VSL scores. However, when reading ability was taken into account, VSL scores were higher for the deaf group than the hearing group. Overall, this evidence is inconsistent with the Auditory Scaffolding Hypothesis, suggesting that humans can develop efficient sequencing abilities even in the absence of sound.
Collapse
Affiliation(s)
| | - Karen Emmorey
- School of Speech, Language and Hearing Sciences, San Diego State University
| |
Collapse
|
37
|
Perlman M, Little H, Thompson B, Thompson RL. Iconicity in Signed and Spoken Vocabulary: A Comparison Between American Sign Language, British Sign Language, English, and Spanish. Front Psychol 2018; 9:1433. [PMID: 30154747 PMCID: PMC6102584 DOI: 10.3389/fpsyg.2018.01433] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2017] [Accepted: 07/23/2018] [Indexed: 11/23/2022] Open
Abstract
Considerable evidence now shows that all languages, signed and spoken, exhibit a significant amount of iconicity. We examined how the visual-gestural modality of signed languages facilitates iconicity for different kinds of lexical meanings compared to the auditory-vocal modality of spoken languages. We used iconicity ratings of hundreds of signs and words to compare iconicity across the vocabularies of two signed languages - American Sign Language and British Sign Language, and two spoken languages - English and Spanish. We examined (1) the correlation in iconicity ratings between the languages; (2) the relationship between iconicity and an array of semantic variables (ratings of concreteness, sensory experience, imageability, perceptual strength of vision, audition, touch, smell and taste); (3) how iconicity varies between broad lexical classes (nouns, verbs, adjectives, grammatical words and adverbs); and (4) between more specific semantic categories (e.g., manual actions, clothes, colors). The results show several notable patterns that characterize how iconicity is spread across the four vocabularies. There were significant correlations in the iconicity ratings between the four languages, including English with ASL, BSL, and Spanish. The highest correlation was between ASL and BSL, suggesting iconicity may be more transparent in signs than words. In each language, iconicity was distributed according to the semantic variables in ways that reflect the semiotic affordances of the modality (e.g., more concrete meanings more iconic in signs, not words; more auditory meanings more iconic in words, not signs; more tactile meanings more iconic in both signs and words). Analysis of the 220 meanings with ratings in all four languages further showed characteristic patterns of iconicity across broad and specific semantic domains, including those that distinguished between signed and spoken languages (e.g., verbs more iconic in ASL, BSL, and English, but not Spanish; manual actions especially iconic in ASL and BSL; adjectives more iconic in English and Spanish; color words especially low in iconicity in ASL and BSL). These findings provide the first quantitative account of how iconicity is spread across the lexicons of signed languages in comparison to spoken languages.
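A minimal sketch of the first analysis step, correlating iconicity ratings for shared meanings across the four languages, might look like the following. The file and column names are placeholders, not the authors' released dataset.

```python
from itertools import combinations

import pandas as pd
from scipy.stats import pearsonr

# Placeholder file: one row per shared meaning, with mean iconicity ratings
# per language in columns named asl, bsl, english, spanish.
ratings = pd.read_csv("iconicity_ratings_by_language.csv")

for lang_a, lang_b in combinations(["asl", "bsl", "english", "spanish"], 2):
    r, p = pearsonr(ratings[lang_a], ratings[lang_b])
    print(f"{lang_a} vs {lang_b}: r = {r:.2f}, p = {p:.3g}")
```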
Collapse
Affiliation(s)
- Marcus Perlman
- Department of English Language and Applied Linguistics, University of Birmingham, Birmingham, United Kingdom
| | - Hannah Little
- Department of Applied Sciences, University of the West of England, Bristol, United Kingdom
| | - Bill Thompson
- Language and Cognition Department, Max Planck Institute of Psycholinguistics, Nijmegen, Netherlands
| | - Robin L. Thompson
- School of Psychology, University of Birmingham, Birmingham, United Kingdom
| |
Collapse
|
38
|
Hubbard LJ, D'Andrea E, Carman LA. Promoting Best Practice for Perinatal Care of Deaf Women. Nurs Womens Health 2018; 22:126-136. [PMID: 29628052 DOI: 10.1016/j.nwh.2018.02.002] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2017] [Revised: 10/16/2017] [Indexed: 06/08/2023]
Abstract
To evaluate perinatal nursing care for Deaf women, we conducted a pilot, descriptive study exploring women's prenatal, labor, and postpartum experiences. We used the Quality and Safety Education for Nurses (QSEN) framework to analyze women's responses and to explore implications for practice. Themes and women's stories are presented within the QSEN structure to promote informed and individualized perinatal nursing care for Deaf families. It is essential for nurses to stay abreast of resources and technological advances and to use culturally competent principles of communication. Nurses' knowledge of Deaf culture helps guide care, and their understanding of legal provisions and the Americans with Disabilities Act can lead to greater advocacy for Deaf women. Additional research is necessary to fill the current void in the literature about perinatal care for Deaf women.
Collapse
|
39
|
Meade G, Lee B, Midgley KJ, Holcomb PJ, Emmorey K. Phonological and semantic priming in American Sign Language: N300 and N400 effects. Lang Cogn Neurosci 2018; 33:1092-1106. [PMID: 30662923 PMCID: PMC6335044 DOI: 10.1080/23273798.2018.1446543] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/09/2017] [Accepted: 02/20/2018] [Indexed: 05/29/2023]
Abstract
This study investigated the electrophysiological signatures of phonological and semantic priming in American Sign Language (ASL). Deaf signers made semantic relatedness judgments to pairs of ASL signs separated by a 1300 ms prime-target SOA. Phonologically related sign pairs shared two of three phonological parameters (handshape, location, and movement). Target signs preceded by phonologically related and semantically related prime signs elicited smaller negativities within the N300 and N400 windows than those preceded by unrelated primes. N300 effects, typically reported in studies of picture processing, are interpreted to reflect the mapping from the visual features of the signs to more abstract linguistic representations. N400 effects, consistent with rhyme priming effects in the spoken language literature, are taken to index lexico-semantic processes that appear to be largely modality independent. Together, these results highlight both the unique visual-manual nature of sign languages and the linguistic processing characteristics they share with spoken languages.
Collapse
Affiliation(s)
- Gabriela Meade
- Joint Doctoral Program in Language and Communicative Disorders, San Diego State University and University of California, San Diego, San Diego, CA, USA
| | - Brittany Lee
- Joint Doctoral Program in Language and Communicative Disorders, San Diego State University and University of California, San Diego, San Diego, CA, USA
| | | | - Phillip J. Holcomb
- Department of Psychology, San Diego State University, San Diego, CA, USA
| | - Karen Emmorey
- School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA, USA
| |
Collapse
|
40
|
Kushalnagar P, Smith S, Hopper M, Ryan C, Rinkevich M, Kushalnagar R. Making Cancer Health Text on the Internet Easier to Read for Deaf People Who Use American Sign Language. J Cancer Educ 2018; 33:134-140. [PMID: 27271268 PMCID: PMC5145779 DOI: 10.1007/s13187-016-1059-5] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/21/2023]
Abstract
People with relatively limited English language proficiency find the Internet's cancer and health information difficult to access and understand. The presence of unfamiliar words and complex grammar make this particularly difficult for Deaf people. Unfortunately, current technology does not support low-cost, accurate translations of online materials into American Sign Language. However, current technology is relatively more advanced in allowing text simplification, while retaining content. This research team developed a two-step approach for simplifying cancer and other health text. They then tested the approach, using a crossover design with a sample of 36 deaf and 38 hearing college students. Results indicated that hearing college students did well on both the original and simplified text versions. Deaf college students' comprehension, in contrast, significantly benefitted from the simplified text. This two-step translation process offers a strategy that may improve the accessibility of Internet information for Deaf, as well as other low-literacy individuals.
Collapse
Affiliation(s)
| | - Scott Smith
- Rochester Institute of Technology, Rochester, NY, USA
| | | | - Claire Ryan
- University of Texas at Austin, Austin, TX, USA
| | | | | |
Collapse
|
41
|
Boudreault P, Wolfson A, Berman B, Venne VL, Sinsheimer JS, Palmer C. Bilingual Cancer Genetic Education Modules for the Deaf Community: Development and Evaluation of the Online Video Material. J Genet Couns 2018; 27:457-69. [PMID: 29260487 DOI: 10.1007/s10897-017-0188-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2017] [Accepted: 11/27/2017] [Indexed: 10/18/2022]
Abstract
Health information about inherited forms of cancer and the role of family history in cancer risk for the American Sign Language (ASL) Deaf community, a linguistic and cultural community, needs improvement. Cancer genetic education materials available in English print format are not accessible for many sign language users because English is not their native or primary language. Per Centers for Disease Control and Prevention recommendations, the level of literacy for printed health education materials should not be higher than 6th grade level (~ 11 to 12 years old), and even with this recommendation, printed materials are still not accessible to sign language users or other nonnative English speakers. Genetic counseling is becoming an integral part of healthcare, but often ASL users are not considered when health education materials are developed. As a result, there are few genetic counseling materials available in ASL. Online tools such as video and closed captioning offer opportunities for educators and genetic counselors to provide digital access to genetic information in ASL to the Deaf community. The Deaf Genetics Project team used a bilingual approach to develop a 37-min interactive Cancer Genetics Education Module (CGEM) video in ASL with closed captions and quizzes, and demonstrated that this approach resulted in greater cancer genetic knowledge and increased intentions to obtain counseling or testing, compared to standard English text information (Palmer et al., Disability and Health Journal, 10(1):23-32, 2017). Though visually enhanced educational materials have been developed for sign language users with a multimodal/multilingual approach, little is known about design features that can accommodate a diverse audience of sign language users so that the material is engaging to a wide audience. The main objectives of this paper are to describe the development of the CGEM and to determine whether viewer demographic characteristics are associated with two measurable aspects of CGEM viewing behavior: (1) length of time spent viewing and (2) number of pause, play, and seek events. These objectives are important to address, especially for Deaf individuals, because the amount of simultaneous content (video, print) requires cross-modal cognitive processing of visual and textual materials. Technology and presentational strategies are needed that enhance, rather than interfere with, health learning in this population.
Collapse
|
42
|
Lieberman AM, Borovsky A, Mayberry RI. Prediction in a visual language: real-time sentence processing in American Sign Language across development. Lang Cogn Neurosci 2017; 33:387-401. [PMID: 29687014 PMCID: PMC5909983 DOI: 10.1080/23273798.2017.1411961] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/20/2017] [Accepted: 11/17/2017] [Indexed: 06/08/2023]
Abstract
Prediction during sign language comprehension may enable signers to integrate linguistic and non-linguistic information within the visual modality. In two eyetracking experiments, we investigated American Sign Language (ASL) semantic prediction in deaf adults and children (aged 4-8 years). Participants viewed ASL sentences in a visual world paradigm in which the sentence-initial verb was either neutral or constrained relative to the sentence-final target noun. Adults and children made anticipatory looks to the target picture before the onset of the target noun in the constrained condition only, showing evidence for semantic prediction. Crucially, signers alternated gaze between the stimulus sign and the target picture only when the sentential object could be predicted from the verb. Signers therefore engage in prediction by optimizing visual attention between divided linguistic and referential signals. These patterns suggest that prediction is a modality-independent process, and theoretical implications are discussed.
Collapse
Affiliation(s)
- Amy M Lieberman
- Boston University, School of Education, 2 Silber Way, Boston, MA 02215
| | - Arielle Borovsky
- Purdue University, Speech, Language, and Hearing Sciences, 715 Clinic Dr, West Lafayette, IN 47907
| | - Rachel I Mayberry
- University of California, San Diego, Department of Linguistics, 9500 Gilman Drive, #0108, La Jolla, CA 92093-0108
| |
Collapse
|
43
|
Kushalnagar P, Harris R, Paludneviciene R, Hoglind T. Health Information National Trends Survey in American Sign Language (HINTS-ASL): Protocol for the Cultural Adaptation and Linguistic Validation of a National Survey. JMIR Res Protoc 2017; 6:e172. [PMID: 28903891 PMCID: PMC5617902 DOI: 10.2196/resprot.8067] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2017] [Revised: 07/05/2017] [Accepted: 08/09/2017] [Indexed: 12/01/2022] Open
Abstract
Background The Health Information National Trends Survey (HINTS) collects nationally representative data about the American public's use of health-related information. This survey is available in English and Spanish, but not in American Sign Language (ASL). Thus, the exclusion of ASL users from these national health information survey studies has led to a significant gap in knowledge of Internet usage for health information access in this underserved and understudied population. Objective The objectives of this study are (1) to culturally adapt and linguistically translate the HINTS items to ASL (HINTS-ASL); and (2) to gather information about deaf people's health information seeking behaviors across technology-mediated platforms. Methods We modified the standard procedures developed at the US National Center for Health Statistics Cognitive Survey Laboratory to culturally adapt and translate HINTS items to ASL. Cognitive interviews were conducted to assess clarity and delivery of these HINTS-ASL items. Final ASL video items were uploaded to a protected online survey website. The HINTS-ASL online survey has been administered to over 1350 deaf adults (ages 18 to 90+) who use ASL. Data collection is ongoing and includes deaf adult signers across the United States. Results Some items from the HINTS item bank required cultural adaptation for use with deaf people who use accessible services or technology. A separate item bank for deaf-related experiences was created, reflecting deaf-specific technology such as sharing health-related ASL videos through social network sites and using video remote interpreting services in health settings. After data collection is complete, we will conduct a series of analyses on deaf people's health information seeking behaviors across technology-mediated platforms. Conclusions HINTS-ASL is an accessible health information national trends survey, which includes a culturally appropriate set of items that are relevant to the experiences of deaf people who use ASL. The final HINTS-ASL product will be available for public use upon completion of this study.
Collapse
Affiliation(s)
- Poorna Kushalnagar
- Deaf Health Communication and Quality of Life Center, Department of Psychology, Gallaudet University, Washington, DC, United States
| | - Raychelle Harris
- Department of American Sign Language and Deaf Studies, Gallaudet University, Washington, DC, United States
| | | | - TraciAnn Hoglind
- Deaf Health Communication and Quality of Life Center, Gallaudet University, Washington, DC, United States
| |
Collapse
|
44
|
Stokar H. Deaf Workers in Restaurant, Retail, and Hospitality Sector Employment: Harnessing Research to Promote Advocacy. ACTA ACUST UNITED AC 2017; 16:204-215. [PMID: 28876218 DOI: 10.1080/1536710x.2017.1372237] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
A quarter-century after the passage of the Americans with Disabilities Act (ADA, 1990), workplace accommodation is still a struggle for deaf employees and their managers. Many challenges are the result of communication barriers that can be overcome through much-needed, although often absent, advocacy and training. This article highlights literature published from 2000 to 2016 on the employment of deaf individuals in the United States service industries of food service, retail, and hospitality. Exploring dimensions of both hiring and active workplace accommodation, it offers suggestions for how social work advocates can harness information and strengthen their approaches for educating managers and supporting workers.
Collapse
Affiliation(s)
- Hayley Stokar
- Department of Social Work, Purdue University Northwest, Westville, Indiana, USA
| |
Collapse
|
45
|
Meade G, Midgley KJ, Sevcikova Sehyr Z, Holcomb PJ, Emmorey K. Implicit co-activation of American Sign Language in deaf readers: An ERP study. Brain Lang 2017; 170:50-61. [PMID: 28407510 PMCID: PMC5538318 DOI: 10.1016/j.bandl.2017.03.004] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/14/2016] [Revised: 01/21/2017] [Accepted: 03/16/2017] [Indexed: 05/12/2023]
Abstract
In an implicit phonological priming paradigm, deaf bimodal bilinguals made semantic relatedness decisions for pairs of English words. Half of the semantically unrelated pairs had phonologically related translations in American Sign Language (ASL). As in previous studies with unimodal bilinguals, targets in pairs with phonologically related translations elicited smaller negativities than targets in pairs with phonologically unrelated translations within the N400 window. This suggests that the same lexicosemantic mechanism underlies implicit co-activation of a non-target language, irrespective of language modality. In contrast to unimodal bilingual studies that find no behavioral effects, we observed phonological interference, indicating that bimodal bilinguals may not suppress the non-target language as robustly. Further, there was a subset of bilinguals who were aware of the ASL manipulation (determined by debrief), and they exhibited an effect of ASL phonology in a later time window (700-900ms). Overall, these results indicate modality-independent language co-activation that persists longer for bimodal bilinguals.
Collapse
Affiliation(s)
- Gabriela Meade
- Joint Doctoral Program in Language and Communicative Disorders, San Diego State University & University of California, San Diego, USA.
| | | | - Zed Sevcikova Sehyr
- School of Speech, Language, and Hearing Sciences, San Diego State University, USA
| | | | - Karen Emmorey
- School of Speech, Language, and Hearing Sciences, San Diego State University, USA
| |
Collapse
|
46
|
Yates L, Dreany-Pyles L. Addiction Treatment with Deaf and Hard of Hearing People: An Application of the CENAPS Model. J Soc Work Disabil Rehabil 2017; 16:298-320. [PMID: 28976292 DOI: 10.1080/1536710x.2017.1372243] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
Alcohol and drug addiction is a significant problem among deaf and hard of hearing people. Viewed through a Deaf culture lens, treatment for alcohol and drug addiction is key to providing care for deaf and hard of hearing clients. An applied cognitive-behavioral therapy program based on the CENAPS model is recommended for addiction treatment. The CENAPS model provides clinicians with tools for stabilizing deaf and hard of hearing clients and supporting their transition to early recovery. By educating clients about the stages of relapse and the stages of recovery, clinicians using this model can better treat and prepare deaf and hard of hearing clients for long-term recovery.
Collapse
Affiliation(s)
- Leo Yates
- Deaf Addiction Services of Maryland, Baltimore, Maryland, USA
| | - Laura Dreany-Pyles
- Department of Social Work, Gallaudet University, Washington, DC, USA
| |
Collapse
|
47
|
Abstract
Deafness is known to affect processing of visual motion and information in the visual periphery, as well as the neural substrates for these domains. This study was designed to characterize the effects of early deafness and lifelong sign language use on visual category sensitivity of the N170 event-related potential. Images from nine categories of visual forms including upright faces, inverted faces, and hands were presented to twelve typically hearing adults and twelve adult congenitally deaf signers. Classic N170 category sensitivity was observed in both participant groups, whereby faces elicited larger amplitudes than all other visual categories, and inverted faces elicited larger amplitudes and slower latencies than upright faces. In hearing adults, hands elicited a right hemispheric asymmetry while in deaf signers this category elicited a left hemispheric asymmetry. Pilot data from five hearing native signers suggests that this effect is due to lifelong use of American Sign Language rather than auditory deprivation itself.
Collapse
Affiliation(s)
- Teresa V Mitchell
- Eunice Kennedy Shriver Center, University of Massachusetts Medical School, Worcester, MA, USA; Brandeis University, Waltham, MA, USA.
| |
Collapse
|
48
|
Abstract
Languages have diverse strategies for marking agentivity and number. These strategies are negotiated to create combinatorial systems. We consider the emergence of these strategies by studying features of movement in a young sign language in Nicaragua (NSL). We compare two age cohorts of Nicaraguan signers (NSL1 and NSL2), adult homesigners in Nicaragua (deaf individuals creating a gestural system without linguistic input), signers of American and Italian Sign Languages (ASL and LIS), and hearing individuals asked to gesture silently. We find that all groups use movement axis and repetition to encode agentivity and number, suggesting that these properties are grounded in action experiences common to all participants. We find another feature - unpunctuated repetition - in the sign systems (ASL, LIS, NSL, Homesign) but not in silent gesture. Homesigners and NSL1 signers use the unpunctuated form, but limit its use to No-Agent contexts; NSL2 signers use the form across No-Agent and Agent contexts. A single individual can thus construct a marker for number without benefit of a linguistic community (homesign), but generalizing this form across agentive conditions requires an additional step. This step does not appear to be achieved when a linguistic community is first formed (NSL1), but requires transmission across generations of learners (NSL2).
Collapse
Affiliation(s)
| | | | - M. Coppola
- University of Connecticut, Storrs, CT, 06269, USA
| | - A. Senghas
- Barnard College, New York, NY, 10027, USA
| | - D. Brentari
- University of Chicago, Chicago, IL, 60637, USA
| |
Collapse
|
49
|
Newman AJ, Supalla T, Fernandez N, Newport EL, Bavelier D. Neural systems supporting linguistic structure, linguistic experience, and symbolic communication in sign language and gesture. Proc Natl Acad Sci U S A 2015; 112:11684-9. [PMID: 26283352 DOI: 10.1073/pnas.1510527112] [Citation(s) in RCA: 51] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Sign languages used by deaf communities around the world possess the same structural and organizational properties as spoken languages: In particular, they are richly expressive and also tightly grammatically constrained. They therefore offer the opportunity to investigate the extent to which the neural organization for language is modality independent, as well as to identify ways in which modality influences this organization. The fact that sign languages share the visual-manual modality with a nonlinguistic symbolic communicative system-gesture-further allows us to investigate where the boundaries lie between language and symbolic communication more generally. In the present study, we had three goals: to investigate the neural processing of linguistic structure in American Sign Language (using verbs of motion classifier constructions, which may lie at the boundary between language and gesture); to determine whether we could dissociate the brain systems involved in deriving meaning from symbolic communication (including both language and gesture) from those specifically engaged by linguistically structured content (sign language); and to assess whether sign language experience influences the neural systems used for understanding nonlinguistic gesture. The results demonstrated that even sign language constructions that appear on the surface to be similar to gesture are processed within the left-lateralized frontal-temporal network used for spoken languages-supporting claims that these constructions are linguistically structured. Moreover, although nonsigners engage regions involved in human action perception to process communicative, symbolic gestures, signers instead engage parts of the language-processing network-demonstrating an influence of experience on the perception of nonlinguistic stimuli.
Collapse
|
50
|
Weisberg J, McCullough S, Emmorey K. Simultaneous perception of a spoken and a signed language: The brain basis of ASL-English code-blends. Brain Lang 2015; 147:96-106. [PMID: 26177161 PMCID: PMC5769874 DOI: 10.1016/j.bandl.2015.05.006] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/15/2014] [Revised: 04/17/2015] [Accepted: 05/16/2015] [Indexed: 05/29/2023]
Abstract
Code-blends (simultaneous words and signs) are a unique characteristic of bimodal bilingual communication. Using fMRI, we investigated code-blend comprehension in hearing native ASL-English bilinguals who made a semantic decision (edible?) about signs, audiovisual words, and semantically equivalent code-blends. English and ASL recruited a similar fronto-temporal network with expected modality differences: stronger activation for English in auditory regions of bilateral superior temporal cortex, and stronger activation for ASL in bilateral occipitotemporal visual regions and left parietal cortex. Code-blend comprehension elicited activity in a combination of these regions, and no cognitive control regions were additionally recruited. Furthermore, code-blends elicited reduced activation relative to ASL presented alone in bilateral prefrontal and visual extrastriate cortices, and relative to English alone in auditory association cortex. Consistent with behavioral facilitation observed during semantic decisions, the findings suggest that redundant semantic content induces more efficient neural processing in language and sensory regions during bimodal language integration.
Collapse
Affiliation(s)
- Jill Weisberg
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA.
| | - Stephen McCullough
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA.
| | - Karen Emmorey
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA.
| |
Collapse
|