26. Anderson ML, Wolf Craig KS, Hostovsky S, Bligh M, Bramande E, Walker K, Biebel K, Byatt N. Creating the Capacity to Screen Deaf Women for Perinatal Depression: A Pilot Study. Midwifery 2020; 92:102867. [PMID: 33166783] [DOI: 10.1016/j.midw.2020.102867]
Abstract
OBJECTIVE Compared to hearing women, Deaf female sign language users receive sub-optimal maternal health care and report more dissatisfaction with their prenatal care experiences. Although healthcare providers are beginning to screen regularly for perinatal depression, validated screening tools remain inaccessible to Deaf women because of severe disparities in English literacy and health literacy. DESIGN AND SETTING We conducted a one-year, community-engaged pilot study to create an initial American Sign Language (ASL) translation of the Edinburgh Postnatal Depression Scale (EPDS); conduct videophone screening interviews with Deaf perinatal women from across the United States; and perform preliminary statistical analyses of the resulting pilot data. PARTICIPANTS We enrolled 36 Deaf perinatal women from 5 weeks gestation to one year postpartum. MEASUREMENTS AND FINDINGS Results supported the internal consistency of the full ASL EPDS, but did not provide evidence of internal consistency for the anxiety or depression subscales when presented in our ASL format. Participants reported a mean total score of 5.6 out of 30 points on the ASL EPDS (SD = 4.2). Thirty-one percent of participants scored in the mild depression range, six percent in the moderate range, and none in the severe range. KEY CONCLUSIONS AND IMPLICATIONS Limitations included small sample size, a restricted range of depression scores, non-normality of our distribution, and the lack of a fully standardized ASL EPDS administration due to our interview approach. Informed by study strengths, limitations, and lessons learned, future efforts will include a larger, more robust psychometric study to inform the development of a Computer-Assisted Self-Interviewing version of the ASL EPDS with automated scoring functions that hearing, non-signing medical providers can use to screen Deaf women for perinatal depression.
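The planned automated scoring for a Computer-Assisted Self-Interviewing version can be illustrated concretely. Below is a minimal sketch of an EPDS scoring function, assuming the standard 10-item scale with each item scored 0-3 (hence the 30-point maximum reported above); the severity cutoffs are hypothetical placeholders, since the study does not report its band boundaries.

```python
def score_epds(responses: list[int]) -> dict:
    """Score one EPDS administration: 10 items, each 0-3, total 0-30."""
    if len(responses) != 10 or any(r not in (0, 1, 2, 3) for r in responses):
        raise ValueError("EPDS requires 10 item responses, each scored 0-3")
    total = sum(responses)
    # Hypothetical band boundaries for illustration, not the study's.
    if total < 10:
        band = "minimal"
    elif total < 13:
        band = "mild"
    elif total < 20:
        band = "moderate"
    else:
        band = "severe"
    return {"total": total, "band": band}

print(score_epds([1, 0, 2, 1, 0, 0, 1, 0, 0, 1]))  # {'total': 6, 'band': 'minimal'}
```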
27. McGarry ME, Mott M, Midgley KJ, Holcomb PJ, Emmorey K. Picture-naming in American Sign Language: an electrophysiological study of the effects of iconicity and structured alignment. Language, Cognition and Neuroscience 2020; 36:199-210. [PMID: 33732747] [PMCID: PMC7959108] [DOI: 10.1080/23273798.2020.1804601]
Abstract
A picture-naming task and ERPs were used to investigate effects of iconicity and visual alignment between signs and pictures in American Sign Language (ASL). For iconic signs, half the pictures visually overlapped with phonological features of the sign (e.g., the fingers of CAT align with a picture of a cat with prominent whiskers), while half did not (whiskers are not shown). Iconic signs were produced numerically faster than non-iconic signs and were associated with larger N400 amplitudes, akin to concreteness effects. Pictures aligned with iconic signs were named faster than non-aligned pictures, and there was a reduction in N400 amplitude. No behavioral effects were observed for the control group (English speakers). We conclude that sensory-motoric semantic features are represented more robustly for iconic than non-iconic signs (eliciting a concreteness-like N400 effect) and visual overlap between pictures and the phonological form of iconic signs facilitates lexical retrieval (eliciting a reduced N400).
28. Semantic processing of adjectives and nouns in American Sign Language: effects of reference ambiguity and word order across development. Journal of Cultural Cognitive Science 2020; 3:217-234. [PMID: 32405616] [DOI: 10.1007/s41809-019-00024-6]
Abstract
When processing spoken language sentences, listeners continuously make and revise predictions about the upcoming linguistic signal. In contrast, during comprehension of American Sign Language (ASL), signers must simultaneously attend to the unfolding linguistic signal and the surrounding scene via the visual modality. This may affect how signers activate potential lexical candidates and allocate visual attention as a sentence unfolds. To determine how signers resolve referential ambiguity during real-time comprehension of ASL adjectives and nouns, we presented deaf adults (n = 18, 19-61 years) and deaf children (n = 20, 4-8 years) with videos of ASL sentences in a visual world paradigm. Sentences had either an adjective-noun ("SEE YELLOW WHAT? FLOWER") or a noun-adjective ("SEE FLOWER WHICH? YELLOW") structure. The degree of ambiguity in the visual scene was manipulated at the adjective and noun levels (i.e., including one or more yellow items and one or more flowers in the visual array). We investigated effects of ambiguity and word order on target looking at early and late points in the sentence. Analysis revealed that adults and children made anticipatory looks to a target when it could be identified early in the sentence. Further, signers looked more to potential lexical candidates than to unrelated competitors in the early window, and more to matched than unrelated competitors in the late window. Children's gaze patterns largely aligned with those of adults with some divergence. Together, these findings suggest that signers allocate referential attention strategically based on the amount and type of ambiguity at different points in the sentence when processing adjectives and nouns in ASL.
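For readers unfamiliar with visual world analyses, the sketch below shows one common way to derive the looking measures described here: the proportion of fixation samples on each area of interest within early and late sentence windows. The column names, window boundaries, and data are illustrative assumptions, not the study's pipeline.

```python
import pandas as pd

# Toy fixation samples: one row per gaze sample, labeled by area of interest (AOI).
samples = pd.DataFrame({
    "trial":   [1, 1, 1, 1, 2, 2, 2, 2],
    "time_ms": [300, 600, 1400, 1700, 300, 600, 1400, 1700],
    "aoi":     ["competitor", "target", "target", "target",
                "unrelated", "competitor", "target", "target"],
})

# Assumed analysis windows relative to sentence onset.
windows = {"early": (200, 1000), "late": (1000, 2000)}
for name, (lo, hi) in windows.items():
    w = samples[(samples.time_ms >= lo) & (samples.time_ms < hi)]
    props = w.groupby("aoi").size() / len(w)  # proportion of looks per AOI
    print(name, props.to_dict())
```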
29. Lynn MA, Butcher E, Cuculick JA, Barnett S, Martina CA, Smith SR, Pollard RQ, Simpson-Haidaris PJ. A review of mentoring deaf and hard-of-hearing scholars. Mentoring & Tutoring: Partnership in Learning 2020; 28:211-228. [PMID: 32489313] [DOI: 10.1080/13611267.2020.1749350]
Abstract
Diversification of the scientific workforce usually focuses on recruitment and retention of women and underrepresented racial and ethnic minorities but often overlooks deaf and hard-of-hearing (D/HH) persons. Although usually classified as a disability group, such persons are often members of their own sociocultural linguistic minority and deserve distinct support. For them, access to technical and social information is often hindered by communication- and/or language-centered barriers, and securing communication access services is only a start. Training D/HH scientists as part of a diversified workforce further requires: (a) educating hearing persons in cross-cultural dynamics pertaining to deafness, sign language, and Deaf culture; (b) ensuring access to formal and incidental information to support development of professional soft skills; and (c) understanding that institutional infrastructure change may be necessary to ensure success. Mentorship and training programs that implement these criteria are now creating a new generation of D/HH scientists.
30. Emmorey K, Winsler K, Midgley KJ, Grainger J, Holcomb PJ. Neurophysiological Correlates of Frequency, Concreteness, and Iconicity in American Sign Language. Neurobiology of Language 2020; 1:249-267. [PMID: 33043298] [PMCID: PMC7544239] [DOI: 10.1162/nol_a_00012]
Abstract
To investigate possible universal and modality-specific factors that influence the neurophysiological response during lexical processing, we recorded event-related potentials while a large group of deaf adults (n = 40) viewed 404 signs in American Sign Language (ASL) that varied in ASL frequency, concreteness, and iconicity. Participants performed a go/no-go semantic categorization task (does the sign refer to people?) to videoclips of ASL signs (clips began with the signer's hands at rest). Linear mixed-effects regression models were fit with per-participant, per-trial, and per-electrode data, allowing us to identify unique effects of each lexical variable. We observed an early effect of frequency (greater negativity for less frequent signs) beginning at 400 ms postvideo onset at anterior sites, which we interpreted as reflecting form-based lexical processing. This effect was followed by a more widely distributed posterior response that we interpreted as reflecting lexical-semantic processing. Paralleling spoken language, more concrete signs elicited greater negativities, beginning 600 ms postvideo onset with a wide scalp distribution. Finally, there were no effects of iconicity (except for a weak effect in the latest epochs; 1,000-1,200 ms), suggesting that iconicity does not modulate the neural response during sign recognition. Despite the perceptual and sensorimotoric differences between signed and spoken languages, the overall results indicate very similar neurophysiological processes underlie lexical access for both signs and words.
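As a rough illustration of the modeling approach described (linear mixed-effects regression over trial-level data), here is a minimal sketch fit to simulated data; the variable names, the simple per-participant random-intercept structure, and the data itself are assumptions, and the study's actual models (which also incorporated per-trial and per-electrode observations) were richer.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "amplitude": rng.normal(0, 5, n),               # mean ERP amplitude in a window (uV)
    "log_frequency": rng.normal(0, 1, n),           # standardized sign frequency
    "concreteness": rng.normal(0, 1, n),
    "iconicity": rng.normal(0, 1, n),
    "subject": rng.integers(0, 40, n).astype(str),  # 40 simulated participants
})

# Fixed effects for the three lexical variables; random intercept per participant.
model = smf.mixedlm(
    "amplitude ~ log_frequency + concreteness + iconicity",
    data=df, groups=df["subject"],
).fit()
print(model.summary())
```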
31. Lee B, Meade G, Midgley KJ, Holcomb PJ, Emmorey K. ERP Evidence for Co-Activation of English Words during Recognition of American Sign Language Signs. Brain Sci 2019; 9:E148. [PMID: 31234356] [PMCID: PMC6627215] [DOI: 10.3390/brainsci9060148]
Abstract
Event-related potentials (ERPs) were used to investigate co-activation of English words during recognition of American Sign Language (ASL) signs. Deaf and hearing signers viewed pairs of ASL signs and judged their semantic relatedness. Half of the semantically unrelated signs had English translations that shared an orthographic and phonological rime (e.g., BAR-STAR) and half did not (e.g., NURSE-STAR). Classic N400 and behavioral semantic priming effects were observed in both groups. For hearing signers, targets in sign pairs with English rime translations elicited a smaller N400 compared to targets in pairs with unrelated English translations. In contrast, a reversed N400 effect was observed for deaf signers: target signs in English rime translation pairs elicited a larger N400 compared to targets in pairs with unrelated English translations. This reversed effect was overtaken by a later, more typical ERP priming effect for deaf signers who were aware of the manipulation. These findings provide evidence that implicit language co-activation in bimodal bilinguals is bidirectional. However, the distinct pattern of effects in deaf and hearing signers suggests that it may be modulated by differences in language proficiency and dominance as well as by asymmetric reliance on orthographic versus phonological representations.
32. Sehyr ZS, Emmorey K. The perceived mapping between form and meaning in American Sign Language depends on linguistic knowledge and task: evidence from iconicity and transparency judgments. Language and Cognition 2019; 11:208-234. [PMID: 31798755] [PMCID: PMC6886719] [DOI: 10.1017/langcog.2019.18]
Abstract
Iconicity is often defined as the resemblance between a form and a given meaning, while transparency is the ability to infer a given meaning from the form. This study examined the influence of knowledge of American Sign Language (ASL) on the perceived iconicity of signs, and the relationship between iconicity, transparency (correctly guessed signs), 'perceived transparency' (transparency ratings of the guesses), and 'semantic potential' (the diversity, or H index, of guesses). Experiment 1 compared iconicity ratings by deaf ASL signers and hearing non-signers for 991 signs from the ASL-LEX database. Signers' and non-signers' ratings were highly correlated; however, the groups provided different iconicity ratings for subclasses of signs: nouns vs. verbs, handling vs. entity, and one- vs. two-handed signs. In Experiment 2, non-signers guessed the meaning of 430 signs and rated how transparent their guessed meaning would be for others. Only 10% of guesses were correct. Iconicity ratings correlated with transparency (correct guesses), perceived transparency ratings, and semantic potential (H index). Further, some iconic signs were perceived as non-transparent, and vice versa. The study demonstrates that linguistic knowledge mediates perceived iconicity distinctly from gesture and highlights critical distinctions between iconicity, transparency (perceived and objective), and semantic potential.
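The 'semantic potential' measure lends itself to a compact illustration. The sketch below computes Shannon diversity, one common formulation of an H index, over the distribution of meanings guessed for a sign; the guesses shown are invented for demonstration, and the paper's exact formula may differ.

```python
from collections import Counter
from math import log

def shannon_h(guesses: list[str]) -> float:
    """Shannon diversity of guessed meanings: higher = more diverse guesses."""
    counts = Counter(guesses)
    total = sum(counts.values())
    return -sum((c / total) * log(c / total) for c in counts.values())

print(shannon_h(["cat"] * 9 + ["whiskers"]))            # low diversity (~0.33)
print(shannon_h(["cat", "dog", "pet", "fur", "tail"]))  # high diversity (~1.61)
```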
33. Frederiksen AT, Mayberry RI. Reference tracking in early stages of different modality L2 acquisition: Limited over-explicitness in novice ASL signers' referring expressions. Second Language Research 2019; 35:253-283. [PMID: 31656363] [PMCID: PMC6814168] [DOI: 10.1177/0267658317750220]
Abstract
Previous research on reference tracking has revealed a tendency towards over-explicitness in second language (L2) learners. Only limited evidence exists that this trend extends to situations where the learner's first and second languages do not share a sensory-motor modality. Using a story-telling paradigm, this study examined how hearing novice L2 learners accomplish reference tracking in American Sign Language (ASL), and whether they transfer strategies from gesture. Our results revealed limited evidence of over-explicitness. Instead, the L2 learners' reference tracking was broadly similar to that of a native signer control group, even in the use of lexical nominals, pronouns, and zero anaphora, areas where research on spoken L2 reference tracking predicts differences. Our data also revealed, however, that L2 learners have problems with the referential value of ASL classifiers, with target-like use of zero anaphora for different verb types, and with spatial modification. This suggests that over-explicitness occurs only to a limited extent in the early stages of different-modality L2 acquisition. We found no evidence of gestural transfer. Finally, we found that L2 learners reintroduce referents more often than native signers, which could indicate that they, unlike native signers, are not yet capable of utilizing the affordances of the visual modality to reference multiple entities simultaneously.
34. Cheng Q, Mayberry RI. Acquiring a first language in adolescence: the case of basic word order in American Sign Language. Journal of Child Language 2019; 46:214-240. [PMID: 30326985] [PMCID: PMC6370511] [DOI: 10.1017/s0305000918000417]
Abstract
Previous studies suggest that age of acquisition affects the outcomes of learning, especially at the morphosyntactic level. Unknown is how syntactic development is affected by increased cognitive maturity and delayed language onset. The current paper studied the early syntactic development of adolescent first language learners by examining word order patterns in American Sign Language (ASL). ASL uses a basic Subject-Verb-Object order, but also employs multiple word order variations. Child learners produce variable word order at the initial stage of acquisition, but later primarily produce canonical word order. We asked whether adolescent first language learners acquire ASL word order in a fashion parallel to child learners. We analyzed word order preference in spontaneous language samples from four adolescent L1 learners collected longitudinally from 12 months to six years of ASL exposure. Our results suggest that adolescent L1 learners go through stages similar to child native learners, although this process also appears to be prolonged.
35. American Sign Language Recognition Using Leap Motion Controller with Machine Learning Approach. Sensors 2018; 18:3554. [PMID: 30347776] [PMCID: PMC6210690] [DOI: 10.3390/s18103554]
Abstract
Sign language enables deaf and mute communities to convey messages and to connect with society. Unfortunately, learning and practicing sign language is not common in the wider hearing society; hence, this study developed a sign language recognition prototype using the Leap Motion Controller (LMC). Many existing studies have proposed methods that recognize only a subset of signs, whereas this study aimed for full American Sign Language (ASL) alphanumeric recognition, covering 26 letters and 10 digits. Most ASL letters are static (no movement), but certain letters are dynamic (they require particular movements). Thus, this study also aimed to extract features from finger and hand motions to differentiate between static and dynamic gestures. The experimental results revealed that the recognition rates for the 26 letters using a support vector machine (SVM) and a deep neural network (DNN) are 80.30% and 93.81%, respectively. The recognition rates for the combination of 26 letters and 10 digits are slightly lower: approximately 72.79% for the SVM and 88.79% for the DNN. The sign language recognition system thus has great potential for reducing the communication gap between deaf communities and others, and the proposed prototype could also serve as an interpreter in everyday service settings, such as banks or post offices.
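To make the classification stage concrete, here is a minimal sketch of an SVM pipeline of the kind the recognition rates above refer to, with synthetic arrays standing in for Leap Motion finger and hand features; the feature dimensionality, hyperparameters, and data are assumptions, not the paper's actual feature set.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 63))     # stand-in for hand features (e.g., 21 joints x 3 coords)
y = rng.integers(0, 26, size=1000)  # 26 letter classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10))
clf.fit(X_train, y_train)
print(f"accuracy: {clf.score(X_test, y_test):.3f}")  # near chance here, since the data are random
```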
36. Giustolisi B, Emmorey K. Visual Statistical Learning With Stimuli Presented Sequentially Across Space and Time in Deaf and Hearing Adults. Cogn Sci 2018; 42:3177-3190. [PMID: 30320454] [DOI: 10.1111/cogs.12691]
Abstract
This study investigated visual statistical learning (VSL) in 24 deaf signers and 24 hearing non-signers. Previous research with hearing individuals suggests that statistical learning mechanisms support literacy. Our first goal was to assess whether VSL was associated with reading ability in deaf individuals, and whether this relation was sustained by a link between VSL and sign language skill. Our second goal was to test the Auditory Scaffolding Hypothesis, which predicts that deaf people should be impaired in sequential processing tasks. For the VSL task, we adopted a modified version of the triplet learning paradigm, with stimuli presented sequentially across space and time. Results revealed that measures of sign language skill (sentence comprehension/repetition) did not correlate with VSL scores, possibly due to the sequential nature of our VSL task. Reading comprehension scores (PIAT-R) were a significant predictor of VSL accuracy in hearing but not deaf people. This finding might be due to the sequential nature of the VSL task and to a less salient role of sequential orthography-to-phonology mapping in deaf readers compared to hearing readers. The two groups did not differ in VSL scores; however, when reading ability was taken into account, VSL scores were higher for the deaf group than the hearing group. Overall, this evidence is inconsistent with the Auditory Scaffolding Hypothesis, suggesting that humans can develop efficient sequencing abilities even in the absence of sound.
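For orientation, the core of a triplet learning paradigm is a familiarization stream whose only structure is the co-occurrence of items within fixed triplets. The sketch below generates such a stream under assumed parameters (four triplets, no immediate triplet repeats); it does not reproduce the paper's modified spatial and temporal presentation.

```python
import random

# Four fixed triplets; learners can only discover them from transitional probabilities.
triplets = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I"), ("J", "K", "L")]

def make_stream(n_blocks: int) -> list[str]:
    """Concatenate randomly ordered triplets, never repeating one back-to-back."""
    stream, prev = [], None
    for _ in range(n_blocks):
        t = random.choice([t for t in triplets if t is not prev])
        stream.extend(t)
        prev = t
    return stream

print(make_stream(6))
```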
37. Perlman M, Little H, Thompson B, Thompson RL. Iconicity in Signed and Spoken Vocabulary: A Comparison Between American Sign Language, British Sign Language, English, and Spanish. Front Psychol 2018; 9:1433. [PMID: 30154747] [PMCID: PMC6102584] [DOI: 10.3389/fpsyg.2018.01433]
Abstract
Considerable evidence now shows that all languages, signed and spoken, exhibit a significant amount of iconicity. We examined how the visual-gestural modality of signed languages facilitates iconicity for different kinds of lexical meanings compared to the auditory-vocal modality of spoken languages. We used iconicity ratings of hundreds of signs and words to compare iconicity across the vocabularies of two signed languages (American Sign Language and British Sign Language) and two spoken languages (English and Spanish). We examined (1) the correlation in iconicity ratings between the languages; (2) the relationship between iconicity and an array of semantic variables (ratings of concreteness, sensory experience, imageability, and perceptual strength of vision, audition, touch, smell, and taste); (3) how iconicity varies between broad lexical classes (nouns, verbs, adjectives, grammatical words, and adverbs); and (4) how it varies between more specific semantic categories (e.g., manual actions, clothes, colors). The results show several notable patterns that characterize how iconicity is spread across the four vocabularies. There were significant correlations in the iconicity ratings between the four languages, including English with ASL, BSL, and Spanish. The highest correlation was between ASL and BSL, suggesting iconicity may be more transparent in signs than in words. In each language, iconicity was distributed according to the semantic variables in ways that reflect the semiotic affordances of the modality (e.g., more concrete meanings were more iconic in signs, not words; more auditory meanings were more iconic in words, not signs; more tactile meanings were more iconic in both signs and words). Analysis of the 220 meanings with ratings in all four languages further showed characteristic patterns of iconicity across broad and specific semantic domains, including patterns that distinguished signed from spoken languages (e.g., verbs were more iconic in ASL, BSL, and English, but not Spanish; manual actions were especially iconic in ASL and BSL; adjectives were more iconic in English and Spanish; color words were especially low in iconicity in ASL and BSL). These findings provide the first quantitative account of how iconicity is spread across the lexicons of signed languages in comparison to spoken languages.
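The headline analysis, pairwise correlations of iconicity ratings across the four vocabularies, can be sketched as follows, with invented ratings standing in for the published norms on the 220 shared meanings.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
base = rng.normal(3, 1, 220)  # shared 'iconicity' signal across 220 meanings
ratings = pd.DataFrame({      # invented per-language ratings, not the study's norms
    "ASL":     base + rng.normal(0, 0.5, 220),
    "BSL":     base + rng.normal(0, 0.5, 220),
    "English": base + rng.normal(0, 1.0, 220),
    "Spanish": base + rng.normal(0, 1.0, 220),
})
print(ratings.corr(method="pearson").round(2))  # language-by-language correlation matrix
```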
38. Hubbard LJ, D'Andrea E, Carman LA. Promoting Best Practice for Perinatal Care of Deaf Women. Nurs Womens Health 2018; 22:126-136. [PMID: 29628052] [DOI: 10.1016/j.nwh.2018.02.002]
Abstract
To evaluate perinatal nursing care for Deaf women, we conducted a pilot, descriptive study exploring women's prenatal, labor, and postpartum experiences. We used the Quality and Safety Education for Nurses (QSEN) framework to analyze women's responses and to explore implications for practice. Themes and women's stories are presented within the QSEN structure to promote informed and individualized perinatal nursing care for Deaf families. It is essential for nurses to stay abreast of resources and technological advances and to use culturally competent principles of communication. Nurses' knowledge of Deaf culture helps guide care, and their understanding of legal provisions and the Americans with Disabilities Act can lead to greater advocacy for Deaf women. Additional research is necessary to fill the current void in the literature about perinatal care for Deaf women.
39. Meade G, Lee B, Midgley KJ, Holcomb PJ, Emmorey K. Phonological and semantic priming in American Sign Language: N300 and N400 effects. Language, Cognition and Neuroscience 2018; 33:1092-1106. [PMID: 30662923] [PMCID: PMC6335044] [DOI: 10.1080/23273798.2018.1446543]
Abstract
This study investigated the electrophysiological signatures of phonological and semantic priming in American Sign Language (ASL). Deaf signers made semantic relatedness judgments to pairs of ASL signs separated by a 1300 ms prime-target SOA. Phonologically related sign pairs shared two of three phonological parameters (handshape, location, and movement). Target signs preceded by phonologically related and semantically related prime signs elicited smaller negativities within the N300 and N400 windows than those preceded by unrelated primes. N300 effects, typically reported in studies of picture processing, are interpreted to reflect the mapping from the visual features of the signs to more abstract linguistic representations. N400 effects, consistent with rhyme priming effects in the spoken language literature, are taken to index lexico-semantic processes that appear to be largely modality independent. Together, these results highlight both the unique visual-manual nature of sign languages and the linguistic processing characteristics they share with spoken languages.
40. Kushalnagar P, Smith S, Hopper M, Ryan C, Rinkevich M, Kushalnagar R. Making Cancer Health Text on the Internet Easier to Read for Deaf People Who Use American Sign Language. Journal of Cancer Education 2018; 33:134-140. [PMID: 27271268] [PMCID: PMC5145779] [DOI: 10.1007/s13187-016-1059-5]
Abstract
People with relatively limited English language proficiency find the Internet's cancer and health information difficult to access and understand. The presence of unfamiliar words and complex grammar makes this particularly difficult for Deaf people. Unfortunately, current technology does not support low-cost, accurate translations of online materials into American Sign Language. However, current technology is relatively more advanced in allowing text simplification while retaining content. This research team developed a two-step approach for simplifying cancer and other health text. They then tested the approach, using a crossover design with a sample of 36 deaf and 38 hearing college students. Results indicated that hearing college students did well on both the original and simplified text versions. Deaf college students' comprehension, in contrast, significantly benefitted from the simplified text. This two-step translation process offers a strategy that may improve the accessibility of Internet information for Deaf individuals as well as for other low-literacy readers.
41. Bilingual Cancer Genetic Education Modules for the Deaf Community: Development and Evaluation of the Online Video Material. J Genet Couns 2017; 27:457-469. [PMID: 29260487] [DOI: 10.1007/s10897-017-0188-2]
Abstract
Health information about inherited forms of cancer and the role of family history in cancer risk needs improvement for the Deaf community, a linguistic and cultural community whose members use American Sign Language (ASL). Cancer genetic education materials available in English print format are not accessible to many sign language users because English is not their native or primary language. Per Centers for Disease Control and Prevention recommendations, the reading level of printed health education materials should not exceed the 6th grade level (~11 to 12 years old), yet even materials that meet this recommendation remain inaccessible to sign language users and other nonnative English speakers. Genetic counseling is becoming an integral part of healthcare, but ASL users are often not considered when health education materials are developed; as a result, few genetic counseling materials are available in ASL. Online tools such as video and closed captioning offer opportunities for educators and genetic counselors to provide digital access to genetic information in ASL to the Deaf community. The Deaf Genetics Project team used a bilingual approach to develop a 37-min interactive Cancer Genetics Education Module (CGEM) video in ASL with closed captions and quizzes, and demonstrated that this approach resulted in greater cancer genetic knowledge and increased intentions to obtain counseling or testing, compared to standard English text information (Palmer et al., Disability and Health Journal, 10(1):23-32, 2017). Though visually enhanced educational materials have been developed for sign language users with a multimodal/multilingual approach, little is known about design features that can make the material engaging to a diverse audience of sign language users. The main objectives of this paper are to describe the development of the CGEM and to determine whether viewer demographic characteristics are associated with two measurable aspects of CGEM viewing behavior: (1) length of time spent viewing, and (2) number of pause, play, and seek events. These objectives are important to address, especially for Deaf individuals, because the amount of simultaneous content (video, print) requires cross-modal cognitive processing of visual and textual materials. Technology and presentational strategies are needed that enhance, rather than interfere with, health learning in this population.
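The two viewing-behavior measures are straightforward to derive from a video player event log. The sketch below shows one way, with an assumed log schema and invented events; the CGEM platform's actual logging is not described at this level of detail.

```python
import pandas as pd

log = pd.DataFrame({
    "viewer": ["v1", "v1", "v1", "v2", "v2"],
    "event":  ["play", "pause", "seek", "play", "seek"],
    "t_sec":  [0, 120, 125, 0, 300],  # seconds since the viewer opened the video
})

event_counts = pd.crosstab(log["viewer"], log["event"])  # pause/play/seek counts per viewer
view_time = log.groupby("viewer")["t_sec"].max()         # crude proxy for time spent viewing
print(event_counts)
print(view_time)
```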
42. Lieberman AM, Borovsky A, Mayberry RI. Prediction in a visual language: real-time sentence processing in American Sign Language across development. Language, Cognition and Neuroscience 2017; 33:387-401. [PMID: 29687014] [PMCID: PMC5909983] [DOI: 10.1080/23273798.2017.1411961]
Abstract
Prediction during sign language comprehension may enable signers to integrate linguistic and non-linguistic information within the visual modality. In two eyetracking experiments, we investigated American Sign Language (ASL) semantic prediction in deaf adults and children (aged 4-8 years). Participants viewed ASL sentences in a visual world paradigm in which the sentence-initial verb was either neutral or constrained relative to the sentence-final target noun. Adults and children made anticipatory looks to the target picture before the onset of the target noun in the constrained condition only, showing evidence for semantic prediction. Crucially, signers alternated gaze between the stimulus sign and the target picture only when the sentential object could be predicted from the verb. Signers therefore engage in prediction by optimizing visual attention between divided linguistic and referential signals. These patterns suggest that prediction is a modality-independent process; theoretical implications are discussed.
43. Kushalnagar P, Harris R, Paludneviciene R, Hoglind T. Health Information National Trends Survey in American Sign Language (HINTS-ASL): Protocol for the Cultural Adaptation and Linguistic Validation of a National Survey. JMIR Res Protoc 2017; 6:e172. [PMID: 28903891] [PMCID: PMC5617902] [DOI: 10.2196/resprot.8067]
Abstract
Background: The Health Information National Trends Survey (HINTS) collects nationally representative data about the American public's use of health-related information. This survey is available in English and Spanish, but not in American Sign Language (ASL). The exclusion of ASL users from these national health information surveys has thus left a significant gap in knowledge of Internet usage for health information access in this underserved and understudied population. Objective: The objectives of this study are (1) to culturally adapt and linguistically translate the HINTS items into ASL (HINTS-ASL), and (2) to gather information about deaf people's health information seeking behaviors across technology-mediated platforms. Methods: We modified the standard procedures developed at the US National Center for Health Statistics Cognitive Survey Laboratory to culturally adapt and translate HINTS items into ASL. Cognitive interviews were conducted to assess the clarity and delivery of these HINTS-ASL items. Final ASL video items were uploaded to a protected online survey website. The HINTS-ASL online survey has been administered to over 1350 deaf adults (ages 18 and up) who use ASL. Data collection is ongoing and includes deaf adult signers across the United States. Results: Some items from the HINTS item bank required cultural adaptation for use with deaf people who use accessible services or technology. A separate item bank for deaf-related experiences was created, reflecting deaf-specific technology such as sharing health-related ASL videos through social network sites and using video remote interpreting services in health settings. After data collection is complete, we will conduct a series of analyses on deaf people's health information seeking behaviors across technology-mediated platforms. Conclusions: HINTS-ASL is an accessible health information national trends survey, which includes a culturally appropriate set of items relevant to the experiences of deaf people who use ASL. The final HINTS-ASL product will be available for public use upon completion of this study.
44. Stokar H. Deaf Workers in Restaurant, Retail, and Hospitality Sector Employment: Harnessing Research to Promote Advocacy. Journal of Social Work in Disability & Rehabilitation 2017; 16:204-215. [PMID: 28876218] [DOI: 10.1080/1536710x.2017.1372237]
Abstract
A quarter-century after the passage of the Americans with Disabilities Act (ADA, 1990), workplace accommodation is still a struggle for deaf employees and their managers. Many challenges are the result of communication barriers that can be overcome through much-needed, although often absent, advocacy and training. This article highlights the literature published from 2000 to 2016 on the employment of deaf individuals in the United States service industries of food service, retail, and hospitality. Exploring dimensions of both hiring and active workplace accommodation, it suggests how social work advocates can harness this information and strengthen their approaches for educating managers and supporting workers.
45. Meade G, Midgley KJ, Sevcikova Sehyr Z, Holcomb PJ, Emmorey K. Implicit co-activation of American Sign Language in deaf readers: An ERP study. Brain and Language 2017; 170:50-61. [PMID: 28407510] [PMCID: PMC5538318] [DOI: 10.1016/j.bandl.2017.03.004]
Abstract
In an implicit phonological priming paradigm, deaf bimodal bilinguals made semantic relatedness decisions for pairs of English words. Half of the semantically unrelated pairs had phonologically related translations in American Sign Language (ASL). As in previous studies with unimodal bilinguals, targets in pairs with phonologically related translations elicited smaller negativities than targets in pairs with phonologically unrelated translations within the N400 window. This suggests that the same lexicosemantic mechanism underlies implicit co-activation of a non-target language, irrespective of language modality. In contrast to unimodal bilingual studies, which find no behavioral effects, we observed phonological interference, indicating that bimodal bilinguals may not suppress the non-target language as robustly. Further, a subset of bilinguals who were aware of the ASL manipulation (determined by debrief) exhibited an effect of ASL phonology in a later time window (700-900 ms). Overall, these results indicate modality-independent language co-activation that persists longer for bimodal bilinguals.
46. Yates L, Dreany-Pyles L. Addiction Treatment with Deaf and Hard of Hearing People: An Application of the CENAPS Model. Journal of Social Work in Disability & Rehabilitation 2017; 16:298-320. [PMID: 28976292] [DOI: 10.1080/1536710x.2017.1372243]
Abstract
Alcohol and drug addiction is a significant problem among deaf and hard of hearing people, and treatment that views clients through a Deaf culture lens is key to providing appropriate care. The CENAPS model, an applied cognitive-behavioral therapy program, is recommended for addiction treatment: it provides clinicians with tools for stabilizing deaf and hard of hearing clients and supporting their transition to early recovery. By educating clients about the stages of relapse and the stages of recovery, clinicians using this model can better treat and prepare deaf and hard of hearing clients for long-term recovery.
47. Mitchell TV. Category selectivity of the N170 and the role of expertise in deaf signers. Hear Res 2016; 343:150-161. [PMID: 27770622] [DOI: 10.1016/j.heares.2016.10.010]
Abstract
Deafness is known to affect processing of visual motion and information in the visual periphery, as well as the neural substrates for these domains. This study was designed to characterize the effects of early deafness and lifelong sign language use on visual category sensitivity of the N170 event-related potential. Images from nine categories of visual forms including upright faces, inverted faces, and hands were presented to twelve typically hearing adults and twelve adult congenitally deaf signers. Classic N170 category sensitivity was observed in both participant groups, whereby faces elicited larger amplitudes than all other visual categories, and inverted faces elicited larger amplitudes and slower latencies than upright faces. In hearing adults, hands elicited a right hemispheric asymmetry while in deaf signers this category elicited a left hemispheric asymmetry. Pilot data from five hearing native signers suggests that this effect is due to lifelong use of American Sign Language rather than auditory deprivation itself.
48. Horton L, Goldin-Meadow S, Coppola M, Senghas A, Brentari D. Forging a morphological system out of two dimensions: Agentivity and number. Open Linguistics 2015; 1:596-613. [PMID: 26740937] [PMCID: PMC4699575] [DOI: 10.1515/opli-2015-0021]
Abstract
Languages have diverse strategies for marking agentivity and number. These strategies are negotiated to create combinatorial systems. We consider the emergence of these strategies by studying features of movement in Nicaraguan Sign Language (NSL), a young sign language. We compare two age cohorts of Nicaraguan signers (NSL1 and NSL2), adult homesigners in Nicaragua (deaf individuals creating a gestural system without linguistic input), signers of American and Italian Sign Languages (ASL and LIS), and hearing individuals asked to gesture silently. We find that all groups use movement axis and repetition to encode agentivity and number, suggesting that these properties are grounded in action experiences common to all participants. We find another feature, unpunctuated repetition, in the sign systems (ASL, LIS, NSL, homesign) but not in silent gesture. Homesigners and NSL1 signers use the unpunctuated form but limit its use to No-Agent contexts; NSL2 signers use the form across both No-Agent and Agent contexts. A single individual can thus construct a marker for number without the benefit of a linguistic community (homesign), but generalizing this form across agentive conditions requires an additional step. This step does not appear to be achieved when a linguistic community is first formed (NSL1); it requires transmission across generations of learners (NSL2).
49. Neural systems supporting linguistic structure, linguistic experience, and symbolic communication in sign language and gesture. Proc Natl Acad Sci U S A 2015; 112:11684-9. [PMID: 26283352] [DOI: 10.1073/pnas.1510527112]
Abstract
Sign languages used by deaf communities around the world possess the same structural and organizational properties as spoken languages: in particular, they are richly expressive and also tightly grammatically constrained. They therefore offer the opportunity to investigate the extent to which the neural organization for language is modality independent, as well as to identify ways in which modality influences this organization. The fact that sign languages share the visual-manual modality with a nonlinguistic symbolic communicative system, gesture, further allows us to investigate where the boundaries lie between language and symbolic communication more generally. In the present study, we had three goals: to investigate the neural processing of linguistic structure in American Sign Language (using verbs of motion classifier constructions, which may lie at the boundary between language and gesture); to determine whether we could dissociate the brain systems involved in deriving meaning from symbolic communication (including both language and gesture) from those specifically engaged by linguistically structured content (sign language); and to assess whether sign language experience influences the neural systems used for understanding nonlinguistic gesture. The results demonstrated that even sign language constructions that appear on the surface to be similar to gesture are processed within the left-lateralized frontal-temporal network used for spoken languages, supporting claims that these constructions are linguistically structured. Moreover, although nonsigners engage regions involved in human action perception to process communicative, symbolic gestures, signers instead engage parts of the language-processing network, demonstrating an influence of experience on the perception of nonlinguistic stimuli.
50. Weisberg J, McCullough S, Emmorey K. Simultaneous perception of a spoken and a signed language: The brain basis of ASL-English code-blends. Brain and Language 2015; 147:96-106. [PMID: 26177161] [PMCID: PMC5769874] [DOI: 10.1016/j.bandl.2015.05.006]
Abstract
Code-blends (simultaneous words and signs) are a unique characteristic of bimodal bilingual communication. Using fMRI, we investigated code-blend comprehension in hearing native ASL-English bilinguals who made a semantic decision (edible?) about signs, audiovisual words, and semantically equivalent code-blends. English and ASL recruited a similar fronto-temporal network with expected modality differences: stronger activation for English in auditory regions of bilateral superior temporal cortex, and stronger activation for ASL in bilateral occipitotemporal visual regions and left parietal cortex. Code-blend comprehension elicited activity in a combination of these regions, and no cognitive control regions were additionally recruited. Furthermore, code-blends elicited reduced activation relative to ASL presented alone in bilateral prefrontal and visual extrastriate cortices, and relative to English alone in auditory association cortex. Consistent with behavioral facilitation observed during semantic decisions, the findings suggest that redundant semantic content induces more efficient neural processing in language and sensory regions during bimodal language integration.