1
Enhancing Oral Hygiene in Children With Hearing Impairment: The Impact of Skit Video Interventions - A Randomized Controlled Trial. Glob Pediatr Health 2024;11:2333794X241240302. PMID: 38529336; PMCID: PMC10962031; DOI: 10.1177/2333794x241240302.
Abstract
Aim. This study aimed to assess the effectiveness of 3 interventions (skit video, pictorial, and sign language) in improving the oral hygiene of children with hearing impairment. Materials and Methods. Sixty children were randomly divided into 3 groups: Skit video, Pictorial, and Sign language. The mean Gingival Index and Oral Hygiene Index scores were recorded before and after the interventions. A 1-way ANOVA was used to test for statistically significant differences between pre- and post-intervention scores. Results. A significant difference in mean oral hygiene and gingival index scores before and after the interventions was found in Group A (P < .005). A statistically significant difference was also found between Groups A and B in the inter-group comparison of OHI and GI scores post intervention (P < .004). Conclusion. The skit video and pictorial interventions effectively improve oral health, resulting in reduced mean oral hygiene and gingival index scores.
2
Dentists' considerations concerning treating patients from the Deaf and Hard of Hearing community: A national survey. J Dent Educ 2024;88:278-288. PMID: 37921409; DOI: 10.1002/jdd.13405.
Abstract
OBJECTIVES Research shows that adults who were Deaf or Hard of Hearing (HoH) had poorer oral health than adults who did not belong to this community. The objectives were to assess dentists' education, knowledge, attitudes, and professional behavior related to treating patients from the Deaf or HoH community and the relationships between these constructs. METHODS A total of 207 members of the American Dental Association and the Michigan Dental Association responded to a mailed or web-based survey concerning their education, knowledge, attitudes, and professional behavior related to treating patients from the Deaf or HoH community. RESULTS On average, the respondents disagreed that they were well educated in classroom-based, clinical, or community-based dental school settings (five-point answer scale with 1 = disagree strongly; mean = 2.29/2.27/2.35) or by their professional organization (mean = 2.00) about treating Deaf or HoH patients. However, the more recently the respondents had graduated from dental school, the better they described their education about this topic (r = 0.29; p < 0.001). Additionally, 45.9% agreed/strongly agreed that they would like to attend a continuing education course about this topic; 68.9% agreed/agreed strongly that negative consequences for patients' general health can occur; and 61.1% that patients cannot be well educated about oral hygiene if Deaf or HoH patients do not have appropriate interpretive support in dental offices. The better dentists were educated about this topic, the more knowledge they had (r = 0.50; p < 0.001). On average, the respondents agreed more strongly that they were comfortable treating adult patients who communicated orally than patients using American Sign Language (4.02 vs. 3.25; p < 0.001). CONCLUSIONS These findings show that efforts are needed to improve dental school and continuing education curricula about dental treatment for Deaf and HoH patients. The more recently the respondents had graduated, the more positively they described their education. Increased dental school and continuing education efforts are still urgently needed.
3
Synthetic Corpus Generation for Deep Learning-Based Translation of Spanish Sign Language. Sensors (Basel) 2024;24:1472. PMID: 38475008; DOI: 10.3390/s24051472.
Abstract
Sign language serves as the primary mode of communication for the deaf community. With technological advancements, it is crucial to develop systems capable of enhancing communication between deaf and hearing individuals. This paper reviews recent state-of-the-art methods in sign language recognition, translation, and production. Additionally, we introduce a rule-based system, called ruLSE, for generating synthetic datasets in Spanish Sign Language. To check the usefulness of these datasets, we conduct experiments with two state-of-the-art models based on Transformers, MarianMT and Transformer-STMC. In general, we observe that the former achieves better results (+3.7 points in the BLEU-4 metric) although the latter is up to four times faster. Furthermore, the use of pre-trained word embeddings in Spanish enhances results. The rule-based system demonstrates superior performance and efficiency compared to Transformer models in Sign Language Production tasks. Lastly, we contribute to the state of the art by releasing the generated synthetic dataset in Spanish named synLSE.
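The abstract reports translation quality as BLEU-4. As a rough, hedged illustration of how such a score can be computed for gloss-level output, the Python sketch below uses the sacrebleu package; the gloss strings are invented placeholders and are not taken from the ruLSE or synLSE datasets.

```python
# Minimal sketch: corpus-level BLEU (up to 4-grams by default) for sign-gloss translations.
# The hypothesis/reference gloss strings are made-up examples, not synLSE data.
import sacrebleu

hypotheses = [
    "CASA IR YO",       # model output (gloss sequence)
    "MANANA LLOVER",
]
references = [
    "YO IR CASA",       # reference gloss sequence
    "MANANA LLOVER",
]

# sacrebleu expects a list of reference streams; the default 4-gram BLEU corresponds
# to the BLEU-4 figure quoted in the abstract.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU-4: {bleu.score:.1f}")
```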
4
Promoting Arabic Sign Language Skills Among Dental Students. J Multidiscip Healthc 2024;17:171-176. PMID: 38222476; PMCID: PMC10788059; DOI: 10.2147/jmdh.s420388.
Abstract
Purpose While the services available to deaf people in the Middle East have yet to be documented, they need improvement in several countries. The aim of this article was to reduce miscommunication between dentists and deaf patients through the introduction of an optional sign language course for pre-doctoral students and faculty of dentistry at King Abdulaziz University (KAUFD). Patients and Methods All fourth-year pre-doctoral students were invited to participate in an Arabic sign language course. A survey with 11 multiple choice and 38 true/false questions with an "I don't know" option was distributed both before and two weeks after the course. This survey was extensively validated and pilot-tested before distribution. Results The response rate was 84.9% (141 students), of whom 49 (34.8%) were male and 92 (65.2%) were female. The pre-doctoral students had a higher overall knowledge score (mean 22.9±14.8) and sign language skills score (11.1±1.7) after the course compared to before the course (9.8±7.1 and 3.7±3.3, respectively) (all P-values <0.001). All the pre-course individual questions had lower scores compared to the post-course questions (P-values <0.05). Conclusion Deaf people might face difficulties communicating at dental health care clinics, which may be improved by equipping dentistry providers with cultural competency training, like this course.
5
Give Me a Sign: Using Data Gloves for Static Hand-Shape Recognition. Sensors (Basel) 2023;23:9847. PMID: 38139692; PMCID: PMC10747392; DOI: 10.3390/s23249847.
Abstract
Human-to-human communication via the computer is mainly carried out using a keyboard or microphone. In the field of virtual reality (VR), where the most immersive experience possible is desired, the use of a keyboard contradicts this goal, while the use of a microphone is not always desirable (e.g., silent commands during task-force training) or simply not possible (e.g., if the user has hearing loss). Data gloves help to increase immersion within VR, as they correspond to our natural interaction. At the same time, they offer the possibility of accurately capturing hand shapes, such as those used in non-verbal communication (e.g., thumbs up, okay gesture, …) and in sign language. In this paper, we present a hand-shape recognition system using Manus Prime X data gloves, including data acquisition, data preprocessing, and data classification to enable nonverbal communication within VR. We investigate the impact on accuracy and classification time of using an outlier detection and a feature selection approach in our data preprocessing. To obtain a more generalized approach, we also studied the impact of artificial data augmentation, i.e., we created new artificial data from the recorded and filtered data to augment the training data set. With our approach, 56 different hand shapes could be distinguished with an accuracy of up to 93.28%. With a reduced number of 27 hand shapes, an accuracy of up to 95.55% could be achieved. The voting meta-classifier (VL2) proved to be the most accurate, albeit slowest, classifier. A good alternative is random forest (RF), which was even able to achieve better accuracy values in a few cases and was generally somewhat faster. Outlier detection was proven to be an effective approach, especially in improving the classification time. Overall, we have shown that our hand-shape recognition system using data gloves is suitable for communication within VR.
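As a hedged sketch of the kind of pipeline outlined above (outlier removal, feature selection, then a random forest compared against a voting meta-classifier), the scikit-learn code below uses a random feature matrix as a stand-in for real Manus Prime X glove features; the feature count, class count, and ensemble members are assumptions for illustration only.

```python
# Hedged sketch of a glove-based hand-shape classifier: outlier removal, feature
# selection, then a random forest compared against a voting meta-classifier.
# The random feature matrix stands in for real data-glove features.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 40))      # 40 glove features per sample (placeholder)
y = rng.integers(0, 27, size=1000)   # 27 hand-shape classes, as in the reduced set

# 1) Drop outlier samples before training, mirroring the preprocessing step above.
keep = IsolationForest(random_state=0).fit_predict(X) == 1
X, y = X[keep], y[keep]

# 2) Candidate classifiers: a random forest and a soft-voting meta-classifier.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
voting = VotingClassifier(
    estimators=[("rf", rf),
                ("knn", KNeighborsClassifier(n_neighbors=5)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft",
)

for name, clf in [("random forest", rf), ("voting", voting)]:
    pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=20), clf)
    print(name, round(cross_val_score(pipe, X, y, cv=5).mean(), 3))
```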
6
Editorial: Modality and language acquisition: how does the channel through which language is expressed affect how children and adults are able to learn? Front Psychol 2023;14:1334171. PMID: 38111863; PMCID: PMC10726121; DOI: 10.3389/fpsyg.2023.1334171.
7
Instrument to evaluate the perception of minimal contrasts in Chilean Sign Language - reliability evidence. Codas 2023;35:e20220184. PMID: 38055413; PMCID: PMC10750821; DOI: 10.1590/2317-1782/20232022184es.
Abstract
PURPOSE To obtain evidence of the reliability of a test evaluating the perception of minimal contrasts in Chilean Sign Language (LSCh). METHODS Ten deaf children and adolescents aged between 7 and 14 years participated in this study. They were evaluated with the test of perception of minimal contrasts in LSCh. The test was reapplied 11 and 14 days after the first application (test-retest reliability), and Spearman's Rho correlation was performed. During the first application, authorization was requested from the parents of the children and adolescents to record the participants' responses so that another evaluator could re-score the protocols, in order to obtain inter-rater reliability. Gwet's first-order agreement coefficient (AC1) was used for data analysis. RESULTS The test-retest comparison showed a strong and significant correlation (Rho = 0.741; p = 0.014). The inter-rater concordance values varied between 0.962 and 1 (p < 0.001), indicating that the test shows almost perfect concordance. CONCLUSION The minimal-pairs perception test in LSCh shows satisfactory test-retest and inter-rater reliability.
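A small sketch of the test-retest reliability check reported above, assuming SciPy; the score vectors are invented stand-ins for the two applications of the LSCh test (the study's actual result was Rho = 0.741, p = 0.014), and Gwet's AC1 is not shown.

```python
# Hedged sketch: Spearman's Rho between first-application and reapplication scores.
from scipy.stats import spearmanr

test_scores = [18, 22, 25, 20, 27, 24, 19, 26, 23, 21]    # first application (placeholder)
retest_scores = [19, 21, 26, 22, 27, 23, 20, 27, 22, 24]  # reapplication (placeholder)

rho, p_value = spearmanr(test_scores, retest_scores)
print(f"Rho = {rho:.3f}, p = {p_value:.3f}")
```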
8
Time course from cochlear implant surgery to non-use for congenitally deaf recipients implanted as children over ten years ago. Front Rehabil Sci 2023;4:1283109. PMID: 38107197; PMCID: PMC10722283; DOI: 10.3389/fresc.2023.1283109.
Abstract
Objective To determine the time-course from first cochlear implantation to non-use, to characterise non-users' receptive and expressive communication, and document known risk factors for inconsistent use, for congenitally deaf non-users of cochlear implants implanted as children at least ten years ago. Methods Retrospective service evaluation. All congenitally deaf patients who received a first cochlear implant as children at least ten years ago at a regional service, and were currently non-users, were identified. They were characterised in terms of ages at implantation and non-use, known risk factors for inconsistent CI use or CI non-use, and outcome measures were the Meaningful Auditory Integration Scale (MAIS) and Meaningful Use of Speech Scale (MUSS) scores. Results Seventeen patients met the inclusion criteria. They were implanted from 1990 to 2006. Median age at implantation was 4 years (range: 2-11), median age at non-use was 17 years (range: 9-31), and median duration of use was 8.5 years (range: 4-25). All used sign or gesture as their primary expressive and receptive communication modes. In addition, each child had at least one other known risk factor for inconsistent CI use. At 3 years post-implantation, mean Parent-rated MAIS scores were 76.5% (N = 14), and mean MUSS scores were 43.1% (N = 9). Discussion This cohort included cases where CI use was rejected following longer periods of time than previously reported, highlighting a need for long-term support, particularly around the ages of life transitions. Studies conducted when the earliest cohort of paediatric CI users were younger, and studies reliant on parent or patient reports, may under-estimate long-term non-use rates. No non-users were identified among congenitally-deaf children implanted 10-15 years ago. Further research is warranted to explore relationships between risk factors, including communication mode, and non-use to inform expectation setting and candidacy selection.
9
Sign Language Dataset for Automatic Motion Generation. J Imaging 2023;9:262. PMID: 38132680; PMCID: PMC10744067; DOI: 10.3390/jimaging9120262.
Abstract
Several sign language datasets are available in the literature. Most of them are designed for sign language recognition and translation. This paper presents a new sign language dataset for automatic motion generation. This dataset includes phonemes for each sign (specified in HamNoSys, a transcription system developed at the University of Hamburg, Hamburg, Germany) and the corresponding motion information. The motion information includes sign videos and the sequence of extracted landmarks associated with relevant points of the skeleton (including face, arms, hands, and fingers). The dataset includes signs from three different subjects in three different positions, performing 754 signs including the entire alphabet, numbers from 0 to 100, numbers for hour specification, months, and weekdays, and the most frequent signs used in Spanish Sign Language (LSE). In total, there are 6786 videos and their corresponding phonemes (HamNoSys annotations). From each video, a sequence of landmarks was extracted using MediaPipe. The dataset allows training an automatic system for motion generation from sign language phonemes. This paper also presents preliminary results in motion generation from sign phonemes obtaining a Dynamic Time Warping distance per frame of 0.37.
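The landmark sequences in this dataset were extracted with MediaPipe; the sketch below shows one plausible way to do the same with the MediaPipe Holistic solution and OpenCV. The video file name is hypothetical, and zero-filling undetected landmark groups is a simplification rather than the authors' exact procedure.

```python
# Hedged sketch: per-frame pose/hand/face landmark extraction from a sign video
# with MediaPipe Holistic and OpenCV. "sign_0001.mp4" is a hypothetical file name.
import cv2
import mediapipe as mp
import numpy as np

def flatten(group, n_points):
    """Flatten one landmark group to a fixed-length x, y, z vector (zeros if undetected)."""
    if group is None:
        return [0.0] * (n_points * 3)
    return [coord for lm in group.landmark for coord in (lm.x, lm.y, lm.z)]

def video_to_landmarks(path):
    """Return an array of shape (n_frames, 1629): 33 pose + 2 x 21 hand + 468 face points."""
    holistic = mp.solutions.holistic.Holistic(static_image_mode=False)
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        res = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        frames.append(flatten(res.pose_landmarks, 33)
                      + flatten(res.left_hand_landmarks, 21)
                      + flatten(res.right_hand_landmarks, 21)
                      + flatten(res.face_landmarks, 468))
    cap.release()
    holistic.close()
    return np.array(frames)

landmarks = video_to_landmarks("sign_0001.mp4")
print(landmarks.shape)   # (n_frames, 1629)
```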
10
Sign Language Motion Generation from Sign Characteristics. Sensors (Basel) 2023;23:9365. PMID: 38067738; PMCID: PMC10708553; DOI: 10.3390/s23239365.
Abstract
This paper proposes, analyzes, and evaluates a deep learning architecture based on transformers for generating sign language motion from sign phonemes (represented using HamNoSys: a notation system developed at the University of Hamburg). The sign phonemes provide information about sign characteristics like hand configuration, localization, or movements. The use of sign phonemes is crucial for generating sign motion with a high level of details (including finger extensions and flexions). The transformer-based approach also includes a stop detection module for predicting the end of the generation process. Both aspects, motion generation and stop detection, are evaluated in detail. For motion generation, the dynamic time warping distance is used to compute the similarity between two landmarks sequences (ground truth and generated). The stop detection module is evaluated considering detection accuracy and ROC (receiver operating characteristic) curves. The paper proposes and evaluates several strategies to obtain the system configuration with the best performance. These strategies include different padding strategies, interpolation approaches, and data augmentation techniques. The best configuration of a fully automatic system obtains an average DTW distance per frame of 0.1057 and an area under the ROC curve (AUC) higher than 0.94.
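To make the two evaluation measures concrete, the hedged sketch below computes a DTW distance normalized per frame between a generated and a reference landmark sequence, plus an ROC AUC for stop-detection scores. The arrays are random placeholders, and the (n + m) normalization is an assumption rather than the paper's exact definition.

```python
# Hedged sketch of the two evaluation measures named above: per-frame-normalized DTW
# between generated and reference landmark sequences, and ROC AUC for stop detection.
import numpy as np
from sklearn.metrics import roc_auc_score

def dtw_per_frame(a, b):
    """Classic DTW over Euclidean frame distances, divided by a path-length proxy (n + m)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m] / (n + m)

rng = np.random.default_rng(1)
generated = rng.normal(size=(40, 99))   # 40 generated frames of flattened landmarks
reference = rng.normal(size=(45, 99))   # 45 ground-truth frames
print("DTW per frame:", round(dtw_per_frame(generated, reference), 4))

# Stop detection treated as a per-step binary decision (1 = stop generating here).
stop_labels = rng.integers(0, 2, size=200)
stop_scores = rng.random(size=200)      # model's predicted stop probability
print("AUC:", round(roc_auc_score(stop_labels, stop_scores), 3))
```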
11
Sign Language Studies with Chimpanzees in Sanctuary. Animals (Basel) 2023;13:3486. PMID: 38003104; PMCID: PMC10668751; DOI: 10.3390/ani13223486.
Abstract
Adult chimpanzees Tatu and Loulis lived at the Fauna Foundation sanctuary. They had acquired signs of American Sign Language (ASL) while young and continued to use them as adults. Caregivers with proficiency in ASL maintained daily sign language records during interactions and passive observation. Sign checklists were records of daily vocabulary use. Sign logs were records of signed interactions with caregivers and other chimpanzees. This study reports sign use from eight years of these records. Tatu and Loulis used a majority of their base vocabularies consistently over the study period. They used signs that they had acquired decades earlier and new signs. Their utterances served a variety of communicative functions, including responses, conversational devices, requests, and descriptions. They signed to caregivers, other chimpanzees, including those who did not use signs, and to themselves privately. This indicates the importance of a stimulating and interactive environment to understand the scope of ape communication and, in particular, their use of sign language.
12
Prediction underlying comprehension of human motion: an analysis of Deaf signer and non-signer EEG in response to visual stimuli. Front Neurosci 2023;17:1218510. PMID: 37901437; PMCID: PMC10602904; DOI: 10.3389/fnins.2023.1218510.
Abstract
Introduction Sensory inference and top-down predictive processing, reflected in human neural activity, play a critical role in higher-order cognitive processes, such as language comprehension. However, the neurobiological bases of predictive processing in higher-order cognitive processes are not well-understood. Methods This study used electroencephalography (EEG) to track participants' cortical dynamics in response to Austrian Sign Language and reversed sign language videos, measuring neural coherence to optical flow in the visual signal. We then used machine learning to assess entropy-based relevance of specific frequencies and regions of interest to brain state classification accuracy. Results EEG features highly relevant for classification were distributed across language processing-related regions in Deaf signers (frontal cortex and left hemisphere), while in non-signers such features were concentrated in visual and spatial processing regions. Discussion The results highlight functional significance of predictive processing time windows for sign language comprehension and biological motion processing, and the role of long-term experience (learning) in minimizing prediction error.
13
Barriers and facilitators to the inclusion of deaf people in clinical trials. Clin Trials 2023;20:576-580. PMID: 37243366; PMCID: PMC10524313; DOI: 10.1177/17407745231177376.
Abstract
BACKGROUND/AIMS This article discusses the barriers that prevent deaf people from participating in clinical trials and offers recommendations to overcome these barriers and ensure equal access to study participation. METHODS Between April and May 2022, we conducted six focus groups with 20 deaf adults who use American Sign Language, all of whom had previous experience as research study participants. Focus group prompts queried community awareness of clinical trial opportunities, barriers and facilitators to deaf people's participation in clinical trials, and recommended resources to improve clinical trial access. This qualitative focus group data is supplemented by survey data gathered from 40 principal investigators and clinical research coordinators between November 2021 and December 2021. The survey queried researchers' prior experiences with enrolling deaf participants in clinical trials and strategies they endorse for enrollment of deaf participants in future clinical trials. RESULTS Focus group participants unanimously agreed that, compared to the general hearing population, deaf sign language users lack equivalent access to clinical trial participation. Reported barriers included lack of awareness of clinical trial opportunities, mistrust of hearing researchers, and refusal by clinical trial staff to provide accessible communication (e.g. denial of requests for sign language interpreters). Survey data from 40 principal investigators and clinical research coordinators corroborated these barriers. For example, only 2 out of 40 survey respondents had ever enrolled a deaf person in a clinical trial. Respondents indicated that the most helpful strategies for including deaf sign language users in future clinical trials would be assistance with making recruitment information accessible to deaf sign language users and assistance in identifying qualified interpreters to hire to help facilitate the informed consent process. CONCLUSION The lack of communication accessibility is the most common factor preventing deaf sign language users from participating in clinical trials. This article provides recommendations for hearing researchers to improve deaf people's access to clinical trials moving forward, drawing from mixed-methods data.
14
Does early exposure to spoken and sign language affect reading fluency in deaf and hard-of-hearing adult signers? Front Psychol 2023;14:1145638. PMID: 37799519; PMCID: PMC10548548; DOI: 10.3389/fpsyg.2023.1145638.
Abstract
Introduction Early linguistic background, and in particular, access to language, lays the foundation of future reading skills in deaf and hard-of-hearing signers. The current study aims to estimate the impact of two factors - early access to sign and/or spoken language - on reading fluency in deaf and hard-of-hearing adult Russian Sign Language speakers. Methods In the eye-tracking experiment, 26 deaf and 14 hard-of-hearing native Russian Sign Language speakers read 144 sentences from the Russian Sentence Corpus. Analysis of global eye-movement trajectories (scanpaths) was used to identify clusters of typical reading trajectories. The role of early access to sign and spoken language as well as vocabulary size as predictors of the more fluent reading pattern was tested. Results Hard-of-hearing signers with early access to sign language read more fluently than those who were exposed to sign language later in life or deaf signers without access to speech sounds. No association between early access to spoken language and reading fluency was found. Discussion Our results suggest a unique advantage for the hard-of-hearing individuals from having early access to both sign and spoken language and support the existing claims that early exposure to sign language is beneficial not only for deaf but also for hard-of-hearing children.
15
Sign learning of hearing children in inclusive day care centers - does iconicity matter? Front Psychol 2023;14:1196114. PMID: 37655202; PMCID: PMC10467423; DOI: 10.3389/fpsyg.2023.1196114.
Abstract
An increasing number of experimental studies suggest that signs and gestures can scaffold vocabulary learning for children with and without special educational needs and/or disabilities (SEND). However, little research has been done on the extent to which iconicity plays a role in sign learning, particularly in inclusive day care centers. This current study investigated the role of iconicity in the sign learning of 145 hearing children (2;1 to 6;3 years) from inclusive day care centers with educators who started using sign-supported speech after a training module. Children's sign use was assessed via a questionnaire completed by their educators. We found that older children were more likely to learn signs with a higher degree of iconicity, whereas the learning of signs by younger children was less affected by iconicity. Children with SEND did not benefit more from iconicity than children without SEND. These results suggest that whether iconicity plays a role in sign learning depends on the age of the children.
16
Range of motion required for Auslan: a biomechanical analysis. ANZ J Surg 2023;93:1930-1934. PMID: 37341153; DOI: 10.1111/ans.18542.
Abstract
BACKGROUND Auslan is used by the Australian deaf community and relies heavily on hand, wrist, and elbow movement. Upper limb injury or dysfunction may require surgical intervention to alleviate pain and provide a stable skeleton for function, leading to partial or complete reduction in motion. The aim of this study was to assess the wrist, forearm, and elbow motion required to communicate via Auslan, in order to tailor optimal interventions in this population. METHODS A biomechanical analysis was conducted on two native Auslan communicators, who signed 28 pre-selected and common Auslan words and phrases. RESULTS Sagittal plane wrist and elbow motion was found to be of greater importance than axial plane forearm rotation. Relative elbow flexion and generous wrist motion were common for many of the words and phrases, while end-range elbow extension was not recorded. CONCLUSION The maintenance of wrist and elbow motion should be prioritized when selecting surgical interventions for patients who communicate using Auslan.
17
Disparities impacting the deaf and hard of hearing: a narrative and approaches to closing health care gaps. Can J Anaesth 2023;70:975-977. PMID: 37165124; PMCID: PMC10171728; DOI: 10.1007/s12630-023-02453-y.
18
Written products and writing processes in Swedish deaf and hard of hearing children: an explorative study on the impact of linguistic background. Front Psychol 2023;14:1112263. PMID: 37228344; PMCID: PMC10203585; DOI: 10.3389/fpsyg.2023.1112263.
Abstract
The small body of research on writing and writing processes in the group of deaf and hard of hearing (DHH) children has shown that this group struggles more with writing than their hearing peers. This article aims to explore in what ways the DHH group differs from their peers regarding the written product and the writing processes. Participants are all in the age span 10-12 years old and include: (a) 12 DHH children with knowledge of Swedish sign language (Svenskt teckenspråk, STS) as well as spoken Swedish, (b) 10 age-matched hearing children of deaf adults (CODA) who know STS, (c) 14 age-matched hearing peers with no STS knowledge. More specifically we investigate how text length and lexical properties relate to writing processes such as planning (measured through pauses) and revision, and how the background factors of age, gender, hearing and knowledge of STS predict the outcome in product and process. The data consists of picture-elicited narratives collected with keystroke logging. The overall results show that age is a strong predictor for writing fluency, longer texts and more sophisticated lexicon for all the children. This confirms theories on writing development which stress that when children have automatized basic low-level processes such as transcription and spelling, this will free up cognitive space for engaging in high-level processes, such as planning and revision-which in turn will result in more mature texts. What characterizes the DHH group is slower writing fluency, higher lexical density, due to omitted function words, and extensive revisions (both deletions and insertions) on word level and below. One explanation for the last finding is that limitations in the auditory input lead to more uncertainty regarding correct and appropriate lexical choices, as well as spelling. The article contributes with more specific knowledge on what is challenging during writing for DHH children with knowledge of STS and spoken Swedish in middle school, in the developmental stage when basic writing skills are established.
19
Parent American Sign Language skills correlate with child - but not toddler - ASL vocabulary size. Lang Acquis 2023;31:85-99. PMID: 38510461; PMCID: PMC10950064; DOI: 10.1080/10489223.2023.2178312.
Abstract
Most deaf children have hearing parents who do not know a sign language at birth, and are at risk of limited language input during early childhood. Studying these children as they learn a sign language has revealed that timing of first-language exposure critically shapes language outcomes. But the input deaf children receive in their first language is not only delayed, it is much more variable than most first language learners, as many learn their first language from parents who are themselves new sign language learners. Much of the research on deaf children learning a sign language has considered the role of parent input using broad strokes, categorizing hearing parents as non-native, poor signers, and deaf parents as native, strong signers. In this study, we deconstruct these categories, and examine how variation in sign language skills among hearing parents might affect children's vocabulary acquisition. This study included 44 deaf children between 8- and 60-months-old who were learning ASL and had hearing parents who were also learning ASL. We observed an interactive effect of parent ASL proficiency and age, such that parent ASL proficiency was a significant predictor of child ASL vocabulary size, but not among the infants and toddlers. The proficiency of language models can affect acquisition above and beyond age of acquisition, particularly as children grow. At the same time, the most skilled parents in this sample were not as fluent as "native" deaf signers, and yet their children reliably had age-expected ASL vocabularies. Data and reproducible analyses are available at https://osf.io/9ya6h/.
20
Stronger neural response to canonical finger-number configurations in deaf compared to hearing adults revealed by FPVS-EEG. Hum Brain Mapp 2023;44:3555-3567. PMID: 37021789; DOI: 10.1002/hbm.26297.
Abstract
The linguistic counting system of deaf signers consists of a manual counting format that uses specific structures for number words. Interestingly, the number signs from 1 to 4 in the Belgian sign languages correspond to the finger-montring habits of hearing individuals. These hand configurations could therefore be considered as signs (i.e., part of a language system) for deaf, while they would simply be number gestures (not linguistic) for hearing controls. A Fast Periodic Visual Stimulation design was used with electroencephalography recordings to examine whether these finger-number configurations are differently processed by the brain when they are signs (in deaf signers) as compared to when they are gestures (in hearing controls). Results showed that deaf signers show stronger discrimination responses to canonical finger-montring configurations compared to hearing controls. A second control experiment furthermore demonstrated that this finding was not merely due to the experience deaf signers have with the processing of hand configurations, as brain responses did not differ between groups for finger-counting configurations. Number configurations are therefore processed differently by deaf signers, but only when these configurations are part of their language system.
21
The Relationship between Knowing Sign Language and Quality of Life among Italian People Who Are Deaf: A Cross-Sectional Study. Healthcare (Basel) 2023;11:1021. PMID: 37046948; PMCID: PMC10094216; DOI: 10.3390/healthcare11071021.
Abstract
Deafness is a medical condition with important relational implications. This condition could affect well-being and self-esteem and cause social anxiety. Sign language is not simple mimicry but can be considered a different kind of communication, one that could be protective for those who have learned it. However, some people do not use sign language because they think it can be marginalizing. The present study aimed to compare quality of life (QoL) between people who learned Italian sign language as their first language and those who had never learned it or learned it later. This cross-sectional study involved 182 deaf Italian adults (70.3% females) who were recruited from Ente Nazionale Sordi (ENS) and through the main online deafness groups. The present results suggest that deafness does not seem to significantly affect the dimensions of QoL pertaining to satisfaction and self-esteem, while knowing sign language could play a role in preventing high levels of social anxiety; in particular, the group who learned Italian sign language as their first language showed significantly less social anxiety than those who had never learned it.
22
Universal Constraints on Linguistic Event Categories: A Cross-Cultural Study of Child Homesign. Psychol Sci 2023;34:298-312. PMID: 36608154; DOI: 10.1177/09567976221140328.
Abstract
Languages carve up conceptual space in varying ways-for example, English uses the verb cut both for cutting with a knife and for cutting with scissors, but other languages use distinct verbs for these events. We asked whether, despite this variability, there are universal constraints on how languages categorize events involving tools (e.g., knife-cutting). We analyzed descriptions of tool events from two groups: (a) 43 hearing adult speakers of English, Spanish, and Chinese and (b) 10 deaf child homesigners ages 3 to 11 (each of whom has created a gestural language without input from a conventional language model) in five different countries (Guatemala, Nicaragua, United States, Taiwan, Turkey). We found alignment across these two groups-events that elicited tool-prominent language among the spoken-language users also elicited tool-prominent language among the homesigners. These results suggest ways of conceptualizing tool events that are so prominent as to constitute a universal constraint on how events are categorized in language.
23
Restricted language access during childhood affects adult brain structure in selective language regions. Proc Natl Acad Sci U S A 2023;120:e2215423120. PMID: 36745780; PMCID: PMC9963327; DOI: 10.1073/pnas.2215423120.
Abstract
Due to the ubiquitous nature of language in the environment of infants, how it affects the anatomical structure of the brain language system over the lifespan is not well understood. In this study, we investigated the effects of early language experience on the adult brain by examining anatomical features of individuals born deaf with typical or restricted language experience in early childhood. Twenty-two deaf adults whose primary language was American Sign Language and were first immersed in it at ages ranging from birth to 14 y participated. The control group was 21 hearing non-signers. We acquired T1-weighted magnetic resonance images and used FreeSurfer [B. Fischl, Neuroimage 62, 774-781(2012)] to reconstruct the brain surface. Using an a priori regions of interest (ROI) approach, we identified 17 language and 19 somatomotor ROIs in each hemisphere from the Human Connectome Project parcellation map [M. F. Glasser et al., Nature 536, 171-178 (2016)]. Restricted language experience in early childhood was associated with negative changes in adjusted grey matter volume and/or cortical thickness in bilateral fronto-temporal regions. No evidence of anatomical differences was observed in any of these regions when deaf signers with infant sign language experience were compared with hearing speakers with infant spoken language experience, showing that the effects of early language experience on the brain language system are supramodal.
24
Fusion-Based Body-Worn IoT Sensor Platform for Gesture Recognition of Autism Spectrum Disorder Children. Sensors (Basel) 2023;23:1672. PMID: 36772712; PMCID: PMC9918961; DOI: 10.3390/s23031672.
Abstract
The last decade's developments in sensor technologies and artificial intelligence applications have received extensive attention for daily life activity recognition. Autism spectrum disorder (ASD) in children is a neurodevelopmental disorder that causes significant impairments in social interaction and communication, as well as sensory deficits. Children with ASD have deficits in memory, emotion, cognition, and social skills. ASD affects children's communication skills and speaking abilities. ASD children have restricted interests and repetitive behavior. They can communicate in sign language but have difficulties communicating with others, as not everyone knows sign language. This paper proposes a body-worn multi-sensor-based Internet of Things (IoT) platform using machine learning to recognize the complex sign language of speech-impaired children. Optimal sensor location is essential for extracting the features, as variations in placement affect recognition accuracy. We acquire the time-series data of the sensors, extract various time-domain and frequency-domain features, and evaluate different classifiers for recognizing ASD children's gestures. We compare, in terms of accuracy, the decision tree (DT), random forest, artificial neural network (ANN), and k-nearest neighbour (KNN) classifiers for recognizing ASD children's gestures, and the results show more than 96% recognition accuracy.
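As a rough illustration of the pipeline described above (time- and frequency-domain features from sensor windows, then a comparison of DT, RF, ANN, and KNN classifiers), the following scikit-learn sketch uses simulated windows in place of real accelerometer or gyroscope data; the feature set and window length are assumptions.

```python
# Hedged sketch: basic time/frequency-domain features per sensor window, then a
# comparison of the four classifier families named in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

def window_features(w):
    """Basic statistics plus the dominant FFT bin of one sensor window."""
    spectrum = np.abs(np.fft.rfft(w))
    return np.array([w.mean(), w.std(), w.min(), w.max(),
                     np.sqrt(np.mean(w ** 2)),   # RMS
                     spectrum.argmax(), spectrum.max()])

rng = np.random.default_rng(2)
windows = rng.normal(size=(600, 128))            # 600 windows of 128 samples each (placeholder)
X = np.vstack([window_features(w) for w in windows])
y = rng.integers(0, 10, size=600)                # 10 gesture classes (placeholder)

classifiers = {
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "ANN": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in classifiers.items():
    print(name, round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```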
25
Language-related motor facilitation in Italian Sign Language signers. Cereb Cortex 2023:6988100. PMID: 36646456; DOI: 10.1093/cercor/bhac536.
Abstract
Linguistic tasks facilitate corticospinal excitability, as revealed by increased motor evoked potentials (MEPs) induced by transcranial magnetic stimulation (TMS) in the dominant hand. This modulation of primary motor cortex (M1) excitability may reflect the relationship between speech and gestures. It is conceivable that in healthy individuals who use a sign language this cortical excitability modulation could be rearranged. The aim of this study was to evaluate the effect of spoken language tasks on M1 excitability in a group of hearing signers. Ten hearing Italian Sign Language (LIS) signers and 16 non-signing healthy controls participated. Single-pulse TMS was applied to either M1 hand area at baseline and during different tasks: (i) reading aloud, (ii) silent reading, (iii) oral movements, (iv) syllabic phonation, and (v) looking at meaningless non-letter strings. Overall, M1 excitability during the linguistic and non-linguistic tasks was higher in the LIS group than in the control group. In the LIS group, MEPs were significantly larger during reading aloud, silent reading, and non-verbal oral movements, regardless of the hemisphere. These results suggest that in hearing signers there is a different modulation of the functional connectivity between the speech-related brain network and the motor system.
26
Formal caregivers' perceptions of everyday interaction with Deaf people with dementia. Clin Gerontol 2023:1-14. PMID: 36639979; DOI: 10.1080/07317115.2023.2167623.
Abstract
OBJECTIVES Deteriorating interactive ability of people with dementia challenges formal caregivers. In Finland, Deaf people with advanced dementia may live in a nursing home designed for their care where the staff use Finnish Sign Language (FiSL). This study describes the perceptions of formal caregivers, focusing on the challenges, how they solve the challenges, and what support they need to improve interaction with Deaf residents. METHODS Semi-structured interviews with 13 formal caregivers who work with Deaf people with dementia were conducted and analyzed using qualitative content analysis. A purposive sampling was used. RESULTS Three key themes were challenges in interaction, strategies in supporting interaction, and support for coping. Caregivers perceived challenges in interaction caused by linguistic changes, deteriorating physical mobility and memory, and Deaf residents' behavioral challenges. Caregivers supported Deaf residents by learning to know them and using personal and linguistic strategies. Support for coping comprised supporting family members and other caregivers. CONCLUSIONS Efficient skills in sign language (SL) and knowledge of dementia are essential in interacting with Deaf residents and to build interpersonal relationships for care. CLINICAL IMPLICATIONS Supporting Deaf residents requires learning the way they interact which can be achieved over time.
27
Processing Real-Life Recordings of Facial Expressions of Polish Sign Language Using Action Units. Entropy (Basel) 2023;25:120. PMID: 36673261; PMCID: PMC9857566; DOI: 10.3390/e25010120.
Abstract
Automatic translation between a national language and a sign language is a complex process, similar to translation between two different foreign languages. A very important aspect is the precision of not only manual gestures but also facial expressions, which are extremely important in the overall context of a sentence. In this article, we present the problem of including facial expressions in the automation of translation from Polish into Polish Sign Language (PJM); this is part of an ongoing project related to a comprehensive solution allowing for the animation of manual gestures, body movements, and facial expressions. Our approach explores the possibility of using action unit (AU) recognition in the automatic annotation of recordings, which in subsequent steps will be used to train machine learning models. This paper aims to evaluate entropy in real-life translation recordings and analyze the data associated with the detected action units. Our approach has been evaluated by experts in Polish Sign Language, and the results obtained allow for the development of further work related to automatic translation into Polish Sign Language.
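A hedged sketch of the entropy evaluation mentioned above: given a per-frame matrix of detected action units (here simulated; real values might come from an AU detector, and the AU column names are assumptions), the Shannon entropy of the AU activity distribution can be computed with SciPy.

```python
# Hedged sketch: Shannon entropy of the action-unit (AU) activity distribution across
# the frames of one recording. The binary AU matrix is simulated, not real annotation data.
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(3)
au_names = ["AU01", "AU02", "AU04", "AU06", "AU12", "AU25"]
au_active = rng.integers(0, 2, size=(500, len(au_names)))   # frames x AUs, 1 = active

counts = au_active.sum(axis=0)          # how often each AU fires in the recording
probs = counts / counts.sum()
h = entropy(probs, base=2)              # Shannon entropy in bits
print(dict(zip(au_names, counts.tolist())), f"entropy = {h:.2f} bits")
```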
28
Application of the truth and reconciliation model to meaningfully engage deaf sign language users in the research process. Cultur Divers Ethnic Minor Psychol 2023;29:15-23. PMID: 34197145; PMCID: PMC8720115; DOI: 10.1037/cdp0000445.
Abstract
OBJECTIVES One of the most underrepresented public health populations is the U.S. Deaf community-a minority group of 500,000 + individuals who communicate using American Sign Language (ASL). Research on Deaf health outcomes is significantly lacking due to inaccessible research procedures and mistrust of researchers that stems from historical mistreatment of Deaf people (i.e., Audism). METHODS Following the Truth and Reconciliation Model, we hosted three Deaf community forums between October and November 2016 across New England. We invited attendees to share their experiences in the research world and make recommendations about how researchers can better include Deaf people in their studies. A select group of hearing researchers served as representatives of the research community and to issue a formal apology on behalf of this community. RESULTS Forum attendees (n = 22; 5% racial/ethnic minority; 59% female) emphasized the following themes: Research conducted within general population samples is not an activity in which Deaf people can or will be included; a general mistrust of hearing people, including hearing researchers; researchers' frequent failure to communicate study results back to the Deaf community or the community-at-large; and a tendency of researchers to directly benefit from data provided by Deaf participants, without making any subsequent efforts to return to the community to give back or provide useful intervention. CONCLUSIONS Many injustices and forms of mistreatment are still ongoing; therefore, we recognize that our team's efforts to foster an open dialogue between the research community and the Deaf community must be an ongoing, iterative practice. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
29
A multimodal human-robot sign language interaction framework applied in social robots. Front Neurosci 2023;17:1168888. PMID: 37113147; PMCID: PMC10126358; DOI: 10.3389/fnins.2023.1168888.
Abstract
Deaf-mutes face many difficulties in daily interactions with hearing people through spoken language. Sign language is an important way of expression and communication for deaf-mutes. Therefore, breaking the communication barrier between the deaf-mute and hearing communities is significant for facilitating their integration into society. To help them integrate into social life better, we propose a multimodal Chinese sign language (CSL) gesture interaction framework based on social robots. The CSL gesture information including both static and dynamic gestures is captured from two different modal sensors. A wearable Myo armband and a Leap Motion sensor are used to collect human arm surface electromyography (sEMG) signals and hand 3D vectors, respectively. Two modalities of gesture datasets are preprocessed and fused to improve the recognition accuracy and to reduce the processing time cost of the network before sending it to the classifier. Since the input datasets of the proposed framework are temporal sequence gestures, the long-short term memory recurrent neural network is used to classify these input sequences. Comparative experiments are performed on an NAO robot to test our method. Moreover, our method can effectively improve CSL gesture recognition accuracy, which has potential applications in a variety of gesture interaction scenarios not only in social robots.
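A minimal PyTorch sketch of the kind of fusion-plus-LSTM classifier the abstract describes: per-frame sEMG and hand-vector features are concatenated and fed to an LSTM. The Leap feature size, class count, and concatenation-based fusion are assumptions for illustration, not the authors' exact configuration; only the Myo armband's 8 sEMG channels are taken from the abstract.

```python
# Hedged sketch: an LSTM classifier over fused sEMG + hand-vector gesture sequences.
import torch
import torch.nn as nn

class FusedGestureLSTM(nn.Module):
    def __init__(self, emg_dim=8, leap_dim=15, hidden=128, n_classes=20):
        super().__init__()
        self.lstm = nn.LSTM(emg_dim + leap_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, emg, leap):
        # Fuse the two modalities frame by frame by concatenating their feature vectors.
        x = torch.cat([emg, leap], dim=-1)   # (batch, time, emg_dim + leap_dim)
        _, (h_n, _) = self.lstm(x)           # final hidden state summarizes the sequence
        return self.head(h_n[-1])            # class logits

model = FusedGestureLSTM()
emg = torch.randn(4, 100, 8)    # 4 sequences, 100 time steps, 8 Myo sEMG channels
leap = torch.randn(4, 100, 15)  # matching Leap Motion hand-vector features (placeholder)
print(model(emg, leap).shape)   # torch.Size([4, 20])
```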
30
Arithmetic in the signing brain: Differences and similarities in arithmetic processing between deaf signers and hearing non-signers. J Neurosci Res 2023;101:172-195. PMID: 36259315; PMCID: PMC9828253; DOI: 10.1002/jnr.25138.
Abstract
Deaf signers and hearing non-signers have previously been shown to recruit partially different brain regions during simple arithmetic. In light of the triple code model, the differences were interpreted as relating to stronger recruitment of the verbal system of numerical processing, that is, left angular and inferior frontal gyrus, in hearing non-signers, and of the quantity system of numerical processing, that is, right horizontal intraparietal sulcus, for deaf signers. The main aim of the present study was to better understand similarities and differences in the neural correlates supporting arithmetic in deaf compared to hearing individuals. Twenty-nine adult deaf signers and 29 hearing non-signers were enrolled in an functional magnetic resonance imaging study of simple and difficult subtraction and multiplication. Brain imaging data were analyzed using whole-brain analysis, region of interest analysis, and functional connectivity analysis. Although the groups were matched on age, gender, and nonverbal intelligence, the deaf group performed generally poorer than the hearing group in arithmetic. Nevertheless, we found generally similar networks to be involved for both groups, the only exception being the involvement of the left inferior frontal gyrus. This region was activated significantly stronger for the hearing compared to the deaf group but showed stronger functional connectivity with the left superior temporal gyrus in the deaf, compared to the hearing, group. These results lend no support to increased recruitment of the quantity system in deaf signers. Perhaps the reason for performance differences is to be found in other brain regions not included in the original triple code model.
31
Light-Weight Deep Learning Techniques with Advanced Processing for Real-Time Hand Gesture Recognition. Sensors (Basel) 2022;23:2. PMID: 36616601; PMCID: PMC9823561; DOI: 10.3390/s23010002.
Abstract
In the discipline of hand gesture and dynamic sign language recognition, deep learning approaches with high computational complexity and large numbers of parameters have achieved remarkable success. However, the implementation of sign language recognition applications for mobile phones with restricted storage and computing capacities is usually greatly constrained by those limited resources. In light of this situation, we suggest lightweight deep neural networks with advanced processing for real-time dynamic sign language recognition (DSLR). This paper presents a DSLR application to minimize the gap between hearing-impaired communities and regular society. The DSLR application was developed using two robust deep learning models, the GRU and the 1D CNN, combined with the MediaPipe framework. In this paper, the authors implement advanced processes to solve most of the DSLR problems, especially in real-time detection, e.g., differences in depth and location. The solution method consists of three main parts. First, the input dataset is preprocessed with our algorithm to standardize the number of frames. Then, the MediaPipe framework extracts hand and pose landmarks (features) to detect and locate them. Finally, after the depth and location of the body are unified, the features are passed to the models to recognize the dynamic signs accurately. To accomplish this, the authors built a new American video-based sign dataset and named it DSL-46. DSL-46 contains 46 daily-used signs, presented with all the details and properties needed for recording the new dataset. The results of the experiments show that the presented solution method can recognize dynamic signs extremely fast and accurately, even in real-time detection. The DSLR reaches an accuracy of 98.8%, 99.84%, and 88.40% on the DSL-46, LSA64, and LIBRAS-BSL datasets, respectively.
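A hedged sketch of the frame-standardization step described above: each clip's landmark sequence is uniformly resampled to a fixed number of frames before being passed to the GRU or 1D CNN. The target length of 30 frames and the linear interpolation are illustrative choices, not the authors' algorithm.

```python
# Hedged sketch: resample every clip to a fixed number of frames before classification.
import numpy as np

def standardize_frames(landmarks, target=30):
    """Uniformly resample a (n_frames, n_features) landmark sequence to `target` frames."""
    n = len(landmarks)
    idx = np.linspace(0, n - 1, num=target)
    lo = np.floor(idx).astype(int)
    hi = np.ceil(idx).astype(int)
    frac = (idx - lo)[:, None]
    # Linear interpolation between neighbouring frames handles both long and short clips.
    return (1 - frac) * landmarks[lo] + frac * landmarks[hi]

clip = np.random.default_rng(4).normal(size=(47, 126))   # 47 frames of hand landmarks (placeholder)
print(standardize_frames(clip).shape)                     # (30, 126)
```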
32
Sign advantage: Both children and adults' spatial expressions in sign are more informative than those in speech and gestures combined. J Child Lang 2022:1-27. PMID: 36510476; DOI: 10.1017/s0305000922000642.
Abstract
Expressing Left-Right relations is challenging for speaking children. Yet, this challenge was absent for signing children, possibly due to iconicity in the visual-spatial modality of expression. We investigated whether there is also a modality advantage when speaking children's co-speech gestures are considered. Eight-year-old children and adults, either hearing monolingual Turkish speakers or deaf signers of Turkish Sign Language, described pictures of objects in various spatial relations. Descriptions were coded for informativeness in speech, sign, and speech-gesture combinations for encoding Left-Right relations. The use of co-speech gestures increased the informativeness of speakers' spatial expressions compared to speech alone, and this pattern was more prominent for children than for adults. However, signing adults and children were more informative than child and adult speakers even when co-speech gestures were considered. Thus, both speaking and signing children benefit from iconic expressions in the visual modality. Finally, in each modality, children were less informative than adults, pointing to the challenge this spatial domain poses in development.
Collapse
|
33
|
Toward an Understanding of the Experiences of Deaf Gay Men: An Interpretative Phenomenological Analysis to an Intersectional View. JOURNAL OF HOMOSEXUALITY 2022; 69:2412-2438. [PMID: 34698623 DOI: 10.1080/00918369.2021.1940015] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Similarities between developing a deaf identity and developing a sexual minority identity have been posited based on the parallel experience of occupying oppressed minority positions. Sign language interviews with eight deaf gay British men explored their intersectional understanding of deaf-gay lived experiences, analyzed through Interpretative Phenomenological Analysis. During adolescence, deaf gay men sometimes found themselves trying hard to be something they were not: oral and heterosexual for hearing, non-signing others (including heterosexual members of their family of origin). Participants spoke of increasingly being drawn toward a welcoming signing cultural world that supported them against deaf minority stress. Coming out as gay presented not only potential difficulties with the family of origin but also threatened connection with the deaf community, leaving participants intensely fearful of gay visibility and stigma. Self-fulfillment and community building were sought through positions ranging from oralist heteronormativity to the deaf-gay community. Along the way, these journeys included experiences of pride and success alongside those of struggle. Our findings extend research on intersectionality by presenting a distinct set of obstacles, caveats, and nuances to identity conjunction.
Collapse
|
34
|
Impact of the Visual Performance Reinforcement Technique on Oral Hygiene Knowledge and Practices, Gingival Health, and Plaque Control in Hearing- and Speech-Impaired Adolescents: A Randomized Controlled Trial. CHILDREN (BASEL, SWITZERLAND) 2022; 9:children9121905. [PMID: 36553348 PMCID: PMC9777405 DOI: 10.3390/children9121905] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/28/2022] [Revised: 11/25/2022] [Accepted: 11/29/2022] [Indexed: 12/12/2022]
Abstract
This study aimed to evaluate the impact of oral health education (OHE), incorporating a novel pre-validated visual performance reinforcement (VPR) technique and sign language, on gingival health, plaque control, and oral hygiene knowledge and practices in 12- to 15-year-old hearing- and speech-impaired adolescents. A double-blinded randomized controlled trial was conducted in a government school for deaf children in Belagavi, Karnataka, India. A total of 80 adolescents, aged 12-15 years, were randomly assigned, using a computer-generated table of random numbers, to two groups: Group A receiving the VPR technique (n = 40) and Group B receiving sign language (n = 40). A specially designed pre-validated closed-ended questionnaire was administered to both groups, followed by clinical examination to obtain gingival and plaque indices, before the intervention and at a 16-week follow-up. Group A showed a significantly greater increase in knowledge than Group B. Similarly, a significant improvement in oral hygiene practices was also observed in Group A. However, at the 16-week follow-up, there were no statistically significant differences in gingival and plaque scores between the groups. OHE using the VPR technique can be as effective and satisfactory as sign language in reducing gingival and plaque scores and in improving knowledge and its application to oral hygiene maintenance among hearing- and speech-impaired adolescents.
Collapse
|
35
|
The iconic motivation for the morphophonological distinction between noun-verb pairs in American Sign Language does not reflect common human construals of objects and actions. LANGUAGE AND COGNITION 2022; 14:622-644. [PMID: 36426211 PMCID: PMC9681175 DOI: 10.1017/langcog.2022.20] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Across sign languages, nouns can be derived from verbs through morphophonological changes in movement by (1) movement reduplication and size reduction or (2) size reduction alone. We asked whether these cross-linguistic similarities arise from cognitive biases in how humans construe objects and actions. We tested nonsigners' sensitivity to differences in noun-verb pairs in American Sign Language (ASL) by asking MTurk workers to match images of actions and objects to videos of ASL noun-verb pairs. Experiment 1a's match-to-sample paradigm revealed that nonsigners interpreted all signs, regardless of lexical class, as actions. The remaining experiments used a forced-matching procedure to avoid this bias. Counter to our predictions, nonsigners associated reduplicated movement with actions, not objects (inverting the sign language pattern), and exhibited a minimal bias to associate large movements with actions (as found in sign languages). Whether signs had pantomimic iconicity did not alter nonsigners' judgments. We speculate that the morphophonological distinctions in noun-verb pairs observed in sign languages did not emerge as a result of cognitive biases, but rather as a result of the linguistic pressures of a growing lexicon and the use of space for verbal morphology. Such pressures may override an initial bias to map reduplicated movement to actions, but nevertheless reflect new iconic mappings shaped by linguistic and cognitive experiences.
Collapse
|
36
|
Deaf Children Need Rich Language Input from the Start: Support in Advising Parents. CHILDREN (BASEL, SWITZERLAND) 2022; 9:children9111609. [PMID: 36360337 PMCID: PMC9688581 DOI: 10.3390/children9111609] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/23/2022] [Revised: 10/13/2022] [Accepted: 10/19/2022] [Indexed: 01/25/2023]
Abstract
Bilingual bimodalism is a great benefit to deaf children at home and in schooling. Deaf signing children perform better overall than non-signing deaf children, regardless of whether they use a cochlear implant. Raising a deaf child in a speech-only environment can carry cognitive and psycho-social risks that may have lifelong adverse effects. For children born deaf, or who become deaf in early childhood, we recommend comprehensible multimodal language exposure and engagement in joint activity with parents and friends to assure age-appropriate first-language acquisition. Accessible visual language input should begin as close to birth as possible. Hearing parents will need timely and extensive support; thus, we propose that, upon the birth of a deaf child and through the preschool years, among other things, the family needs an adult deaf presence in the home for several hours every day to be a linguistic model, to guide the family in taking sign language lessons, to show the family how to make spoken language accessible to their deaf child, and to be an encouraging liaison to deaf communities. While such a support program will be complicated and challenging to implement, it is far less costly than the harm of linguistic deprivation.
Collapse
|
37
|
The puzzle of ideography. Behav Brain Sci 2022; 46:e233. [PMID: 36254782 DOI: 10.1017/s0140525x22002801] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
An ideography is a general-purpose code made of pictures that do not encode language, which can be used autonomously - not just as a mnemonic prop - to encode information on a broad range of topics. Why are viable ideographies so hard to find? I contend that self-sufficient graphic codes need to be narrowly specialized. Writing systems are only an apparent exception: At their core, they are notations of a spoken language. Even if they also encode nonlinguistic information, they are useless to someone who lacks linguistic competence in the encoded language or a related one. The versatility of writing is thus vicarious: Writing borrows it from spoken language. Why is it so difficult to build a fully generalist graphic code? The most widespread answer points to a learnability problem. We possess specialized cognitive resources for learning spoken language, but lack them for graphic codes. I argue in favor of a different account: What is difficult about graphic codes is not so much learning or teaching them as getting every user to learn and teach the same code. This standardization problem does not affect spoken or signed languages as much. Those are based on cheap and transient signals, allowing for easy online repairing of miscommunication, and require face-to-face interactions where the advantages of common ground are maximized. Graphic codes lack these advantages, which makes them smaller in size and more specialized.
Collapse
|
38
|
Language aptitude in the visuospatial modality: L2 British Sign Language acquisition and cognitive skills in British Sign Language-English interpreting students. Front Psychol 2022; 13:932370. [PMID: 36186342 PMCID: PMC9516300 DOI: 10.3389/fpsyg.2022.932370] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Accepted: 07/19/2022] [Indexed: 12/04/2022] Open
Abstract
Sign language interpreting (SLI) is a cognitively challenging task performed mostly by second language learners (i.e., people not raised using a sign language as a home language). SLI students must first gain language fluency in a new visuospatial modality and then move between spoken and signed modalities as they interpret. As a result, many students plateau before reaching working fluency, and drop-out rates in SLI training programs are high. However, we know little about the requisite skills for becoming a successful interpreter: the few existing studies investigating SLI aptitude in terms of linguistic and cognitive skills lack baseline measures. Here we report a 3-year exploratory longitudinal skills assessment study with British Sign Language (BSL)-English SLI students at two universities (n = 33). Our aims were two-fold: first, to better understand the prerequisite skills that lead to successful SLI outcomes; second, to better understand how signing and interpreting skills impact other aspects of cognition. A battery of tasks was completed at four time points to assess skills including, but not limited to, multimodal and unimodal working memory, 2-dimensional and 3-dimensional mental rotation (MR), and English comprehension. Dependent measures were BSL and SLI course grades, BSL reproduction tests, and consecutive SLI tasks. Results reveal that initial BSL proficiency and 2D-MR were associated with selection for the degree program, while visuospatial working memory was linked to continuing with the program. 3D-MR improved throughout the degree, alongside some limited gains in auditory, visuospatial, and multimodal working memory tasks. Visuospatial working memory and MR were the skills most closely associated with BSL and SLI outcomes, particularly on tasks involving sign language production, highlighting the importance of cognition related to the visuospatial modality. These preliminary data will inform SLI training programs, from applicant selection to curriculum design.
Collapse
|
39
|
Sign Language Recognition Method Based on Palm Definition Model and Multiple Classification. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22176621. [PMID: 36081076 PMCID: PMC9460639 DOI: 10.3390/s22176621] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/03/2022] [Revised: 08/28/2022] [Accepted: 08/29/2022] [Indexed: 06/01/2023]
Abstract
Technologies for pattern recognition are used in various fields. One of the most relevant and important directions is the use of pattern recognition technology, such as gesture recognition, in socially significant tasks, such as developing automatic real-time sign language interpretation systems. More than 5% of the world's population, about 430 million people including 34 million children, are deaf-mute and not always able to use the services of a live sign language interpreter. Almost 80% of people with a disabling hearing loss live in low- and middle-income countries. The development of low-cost systems for automatic sign language interpretation, without the use of expensive sensors and specialized cameras, would improve the lives of people with disabilities and contribute to their unhindered integration into society. To this end, this article analyzes suitable gesture recognition methods in the context of their use in automatic gesture recognition systems in order to determine the most suitable ones. Based on this analysis, an algorithm based on a palm definition model and linear models for recognizing the shapes of the numbers and letters of Kazakh sign language is proposed. The advantage of the proposed algorithm is that it recognizes 41 of the 42 letters in the Kazakh sign alphabet; until now, only the Russian letters in the Kazakh alphabet had been recognized. In addition, a unified function for configuring the frame depth-map mode has been integrated into our system, which has improved recognition performance and can be used to create a multimodal database of video data of gesture words for the gesture recognition system.
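The abstract does not spell out the palm definition model, so the sketch below is only a generic illustration of the idea it suggests: normalizing hand landmarks relative to the palm and classifying handshapes with a linear model. The landmark indices follow the MediaPipe Hands convention; everything else is a placeholder assumption.

```python
# Sketch: normalize 21 hand landmarks relative to the palm (wrist as origin,
# palm width as scale) and classify handshapes with a linear model.
import numpy as np
from sklearn.svm import LinearSVC

WRIST, INDEX_MCP, PINKY_MCP = 0, 5, 17  # MediaPipe Hands landmark indices

def palm_normalize(landmarks):
    """landmarks: (21, 3) array of x, y, z hand keypoints for one frame."""
    pts = np.asarray(landmarks, dtype=float)
    pts -= pts[WRIST]                                         # wrist at origin
    scale = np.linalg.norm(pts[INDEX_MCP] - pts[PINKY_MCP])   # palm width
    return (pts / max(scale, 1e-6)).ravel()                   # 63-dim feature vector

def train_handshape_classifier(X_train, y_train):
    """X_train: list of (21, 3) landmark arrays; y_train: handshape labels."""
    feats = np.stack([palm_normalize(x) for x in X_train])
    clf = LinearSVC(C=1.0, max_iter=10000)
    return clf.fit(feats, y_train)
```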
Collapse
|
40
|
Ongoing Sign Processing Facilitates Written Word Recognition in Deaf Native Signing Children. Front Psychol 2022; 13:917700. [PMID: 35992405 PMCID: PMC9390089 DOI: 10.3389/fpsyg.2022.917700] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Accepted: 06/24/2022] [Indexed: 11/13/2022] Open
Abstract
Signed and written languages are intimately related in proficient signing readers. Here, we tested whether deaf native signing beginning readers are able to make rapid use of ongoing sign language processing to facilitate recognition of written words. Deaf native signing children (mean age 10 years, 7 months) received prime-target pairs with sign word onsets as primes and written words as targets. In a control group of hearing children (matched in reading ability to the deaf children, mean age 8 years, 8 months), spoken word onsets were used as primes instead. Targets (written German words) were completions either of the German signs or of the spoken word onsets. The participants' task was to decide whether the target word was a possible German word. Sign onsets facilitated processing of written targets in deaf children, much as spoken word onsets facilitated processing of written targets in hearing children. In both groups, priming elicited similar effects in the simultaneously recorded event-related potentials (ERPs), starting as early as 200 ms after the onset of the written target. These results suggest that beginning readers can use ongoing lexical processing in their native language, be it signed or spoken, to facilitate written word recognition. We conclude that intimate interactions between sign and written language might in turn facilitate reading acquisition in deaf beginning readers.
Collapse
|
41
|
" Left Behind and Ignored": Increasing Awareness and Accessibility of Resources for Alzheimer's Disease and Related Dementias in the Deaf Community. Public Health Rep 2022:333549221110298. [PMID: 35915974 DOI: 10.1177/00333549221110298] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
|
42
|
Musculoskeletal Diseases and Disorders in the Upper Limbs and Health Work-Related Quality of Life in Spanish Sign Language Interpreters and Guide-Interpreters. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:ijerph19159038. [PMID: 35897409 PMCID: PMC9332704 DOI: 10.3390/ijerph19159038] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/16/2022] [Revised: 07/11/2022] [Accepted: 07/19/2022] [Indexed: 12/10/2022]
Abstract
Disorders in the upper limbs are common among sign language interpreters and are related to different risk factors, including the demands of interpreting work in educational settings, posture, and emotional as well as physical stress. The aim of this study was to document the musculoskeletal disorders and diseases present in a group of sign language interpreters and to examine their relationship with work-related quality of life. A battery of four instruments was administered to 62 sign language interpreters, comprising a sociodemographic data and musculoskeletal disease questionnaire, a health-related quality of life scale (SF-36), a scale measuring the impact of fatigue (MFIS), and an instrument for assessing hand-function outcomes (MHOQ). All the study participants had presented some kind of musculoskeletal pathology during their working career, such as tendinitis, overuse syndrome, and repetitive strain injury. In addition, many of the participants presented difficulties in occupational performance that affect their daily activities. A high percentage of the interpreters, close to 70%, suffer from musculoskeletal disorders serious enough to force them to modify their activities and to affect both the quality of their work as interpreters and their quality of life, with important mediating variables being the number of diseases; physical, cognitive, and social fatigue; and satisfaction with hand function.
Collapse
|
43
|
Video Relay Interpretation and Overcoming Barriers in Health Care for Deaf Users: Scoping Review. J Med Internet Res 2022; 24:e32439. [PMID: 35679099 PMCID: PMC9227653 DOI: 10.2196/32439] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2021] [Revised: 03/21/2022] [Accepted: 04/20/2022] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Persons who are deaf are more likely to avoid health care providers than those who can hear, partly because of the lack of means of communication with these providers and the dearth of available interpreters. The use of video remote interpretation, namely via the video camera on an electronic device, to connect deaf patients and health providers has expanded rapidly owing to its flexibility and advantageous cost compared with in-person sign language interpretation. Thus, we need to learn more about how this technology could effectively engage with and respond to the priorities of its users. OBJECTIVE We aimed to identify existing evidence regarding the use of video remote interpretation (VRI) in health care settings and to assess whether VRI technology can enable deaf users to overcome barriers to interpretation and improve communication outcomes between them and health care personnel. METHODS We conducted a search in 7 medical research databases (including MEDLINE, Web of Science, Embase, and Google Scholar) from 2006 onward, including the bibliographies and citations of relevant papers. The searches included articles in English, Spanish, and French. The eligibility criteria for study selection included original articles on the use of VRI for deaf or hard of hearing (DHH) sign language users for, or within, health care. RESULTS Of the 176 articles originally identified, 120 were eliminated after reading the title and abstract, and 41 were excluded after being read in full. In total, 15 articles were included in this study: 4 were literature reviews, 4 were surveys, 3 were qualitative studies, 1 was a mixed methods study combining qualitative and quantitative data, 1 was a brief communication, 1 was a quality improvement report, and 1 was a secondary analysis. In this scoping review, we identified a knowledge gap regarding the quality of interpretation and of training in sign language interpretation for health care. The review also shows that this area is under-researched and that evidence is scant. All evidence came from high-income countries, which is particularly problematic given that most DHH persons live in low- and middle-income countries. CONCLUSIONS Furthering our understanding of the use of VRI technology is pertinent and relevant. The available literature shows that VRI may enable deaf users to overcome interpretation barriers and can potentially improve communication outcomes between them and health personnel within health care services. For VRI to be acceptable, sign language users require a VRI system supported by devices with large screens and a reliable internet connection, as well as qualified interpreters trained in medical interpretation.
Collapse
|
44
|
The Sign 4 Big Feelings Intervention to Improve Early Years Outcomes in Preschool Children: Outcome Evaluation. JMIR Pediatr Parent 2022; 5:e25086. [PMID: 35594062 PMCID: PMC9166658 DOI: 10.2196/25086] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/12/2020] [Revised: 08/09/2021] [Accepted: 09/18/2021] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Any delays in language development may affect learning, profoundly influencing personal, social, and professional trajectories. The effectiveness of the Sign 4 Big Feelings (S4BF) intervention was investigated by measuring changes in early years outcomes (EYOs) after a 3-month period. OBJECTIVE This study aims to determine whether children's well-being and EYOs significantly improve (beyond typical, expected development) after the S4BF intervention period and whether there are differences between boys and girls in progress achieved. METHODS An evaluation of the S4BF intervention was conducted with 111 preschool-age children in early years settings in Luton, United Kingdom. Listening, speaking, understanding, and managing feelings and behavior, in addition to the Leuven well-being scale, were assessed in a quasi-experimental study design to measure pre- and postintervention outcomes. RESULTS Statistically and clinically significant differences were found for each of the 7 pre- and postmeasures evaluated: words understood and spoken, well-being scores, and the 4 EYO domains. Gender differences were negligible in all analyses. CONCLUSIONS Children of all abilities may benefit considerably from S4BF, but a language-based intervention of this nature may be transformational for children who are behind developmentally, with English as an additional language, or of lower socioeconomic status. TRIAL REGISTRATION ISRCTN Registry ISRCTN42025531; https://doi.org/10.1186/ISRCTN42025531.
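The abstract does not state which statistical tests were used, so purely as an illustration of a pre/post comparison on a single outcome measure in a design like this, a hedged sketch follows; the scores, sample size, and choice of test are entirely hypothetical.

```python
# Sketch: paired pre/post comparison for one early-years outcome measure.
# The data and the choice of a paired t-test are illustrative assumptions.
import numpy as np
from scipy import stats

pre = np.array([3.1, 2.8, 3.4, 2.9, 3.6])    # hypothetical pre-intervention scores
post = np.array([3.8, 3.5, 3.9, 3.4, 4.1])   # hypothetical post-intervention scores

t_stat, p_value = stats.ttest_rel(post, pre)                 # paired-samples t-test
cohens_d = (post - pre).mean() / (post - pre).std(ddof=1)    # within-subject effect size
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```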
Collapse
|
45
|
Predictive Processing in Sign Languages: A Systematic Review. Front Psychol 2022; 13:805792. [PMID: 35496220 PMCID: PMC9047358 DOI: 10.3389/fpsyg.2022.805792] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2021] [Accepted: 03/03/2022] [Indexed: 01/12/2023] Open
Abstract
The objective of this article was to review existing research to assess the evidence for predictive processing (PP) in sign language, the conditions under which it occurs, and the effects of language mastery (sign language as a first language, sign language as a second language, bimodal bilingualism) on the neural bases of PP. This review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework. We searched peer-reviewed electronic databases (SCOPUS, Web of Science, PubMed, ScienceDirect, and EBSCO host) and gray literature (dissertations in ProQuest). We also searched the reference lists of records selected for the review and forward citations to identify all relevant publications. We searched for records based on five criteria (original work, peer-reviewed, published in English, research topic related to PP or neural entrainment, and human sign language processing). To reduce the risk of bias, the remaining two authors with expertise in sign language processing and a variety of research methods reviewed the results. Disagreements were resolved through extensive discussion. In the final review, 7 records were included, of which 5 were published articles and 2 were dissertations. The reviewed records provide evidence for PP in signing populations, although the underlying mechanism in the visual modality is not clear. The reviewed studies addressed the motor simulation proposals, neural basis of PP, as well as the development of PP. All studies used dynamic sign stimuli. Most of the studies focused on semantic prediction. The question of the mechanism for the interaction between one’s sign language competence (L1 vs. L2 vs. bimodal bilingual) and PP in the manual-visual modality remains unclear, primarily due to the scarcity of participants with varying degrees of language dominance. There is a paucity of evidence for PP in sign languages, especially for frequency-based, phonetic (articulatory), and syntactic prediction. However, studies published to date indicate that Deaf native/native-like L1 signers predict linguistic information during sign language processing, suggesting that PP is an amodal property of language processing.
Collapse
|
46
|
Identifying the Correlations Between the Semantics and the Phonology of American Sign Language and British Sign Language: A Vector Space Approach. Front Psychol 2022; 13:806471. [PMID: 35369213 PMCID: PMC8966728 DOI: 10.3389/fpsyg.2022.806471] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2021] [Accepted: 02/07/2022] [Indexed: 11/13/2022] Open
Abstract
Over the history of research on sign languages, much scholarship has highlighted the pervasive presence of signs whose forms relate to their meaning in a non-arbitrary way. The presence of these forms suggests that sign language vocabularies are shaped, at least in part, by a pressure toward maintaining a link between form and meaning in wordforms. We use a vector space approach to test the ways this pressure might shape sign language vocabularies, examining how non-arbitrary forms are distributed within the lexicons of two unrelated sign languages. Vector space models situate the representations of words in a multi-dimensional space where the distance between words indexes their relatedness in meaning. Using phonological information from the vocabularies of American Sign Language (ASL) and British Sign Language (BSL), we tested whether increased similarity between the semantic representations of signs corresponds to increased phonological similarity. The results of the computational analysis showed a significant positive relationship between phonological form and semantic meaning for both sign languages, which was strongest when the sign language lexicons were organized into clusters of semantically related signs. The analysis also revealed variation in the strength of patterns across the form-meaning relationships seen between phonological parameters within each sign language, as well as between the two languages. This shows that while the connection between form and meaning is not entirely language specific, there are cross-linguistic differences in how these mappings are realized for signs in each language, suggesting that arbitrariness as well as cognitive or cultural influences may play a role in how these patterns are realized. The results of this analysis not only contribute to our understanding of the distribution of non-arbitrariness in sign language lexicons, but also demonstrate a new way that computational modeling can be harnessed in lexicon-wide investigations of sign languages.
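A minimal sketch of the general approach, correlating pairwise semantic distances with pairwise phonological distances across a lexicon, is given below. It is not the authors' exact model; the vectors, features, and distance metrics are placeholder assumptions, and for real inference a Mantel-style permutation test would be preferable because pairwise distances are not independent.

```python
# Sketch: do signs that are closer in semantic space also tend to share
# phonological features? Inputs here are random placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def form_meaning_correlation(semantic_vecs, phon_feats):
    """semantic_vecs: (n_signs, d) embeddings; phon_feats: (n_signs, k) binary features."""
    sem_dist = pdist(semantic_vecs, metric="cosine")   # pairwise semantic distances
    phon_dist = pdist(phon_feats, metric="hamming")    # pairwise phonological distances
    rho, p = spearmanr(sem_dist, phon_dist)            # positive rho: form tracks meaning
    return rho, p

# Example with random placeholders (a real analysis would use ASL/BSL lexicon data).
rng = np.random.default_rng(0)
rho, p = form_meaning_correlation(rng.normal(size=(50, 300)),
                                  rng.integers(0, 2, size=(50, 40)))
```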
Collapse
|
47
|
Historical Linguistics of Sign Languages: Progress and Problems. Front Psychol 2022; 13:818753. [PMID: 35356353 PMCID: PMC8959496 DOI: 10.3389/fpsyg.2022.818753] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2021] [Accepted: 01/27/2022] [Indexed: 11/13/2022] Open
Abstract
In contrast to scholars and signers in the nineteenth century, William Stokoe conceived of American Sign Language (ASL) as a unique linguistic tradition with roots in nineteenth-century langue des signes française, a conception that is apparent in his earliest scholarship on ASL. Stokoe thus contributed to the theoretical foundations upon which the field of sign language historical linguistics would later develop. This review focuses on the development of sign language historical linguistics since Stokoe, including the field's significant progress and the theoretical and methodological problems that it still faces. The review examines the field's development through the lens of two related problems pertaining to how we understand sign language relationships and to our understanding of cognacy, as the term pertains to signs. It is suggested that the theoretical notions underlying these terms do not straightforwardly map onto the historical development of many sign languages. Recent approaches in sign language historical linguistics are highlighted and future directions for research are suggested to address the problems discussed in this review.
Collapse
|
48
|
Tracking the time course of sign recognition using ERP repetition priming. Psychophysiology 2022; 59:e13975. [PMID: 34791683 PMCID: PMC9583460 DOI: 10.1111/psyp.13975] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2021] [Revised: 09/28/2021] [Accepted: 11/03/2021] [Indexed: 11/26/2022]
Abstract
Repetition priming and event-related potentials (ERPs) were used to investigate the time course of sign recognition in deaf users of American Sign Language. Signers performed a go/no-go semantic categorization task on rare probe signs referring to people; the critical target items were repeated and unrelated signs. In Experiment 1, ERPs were time-locked either to the onset of the video or to sign onset within the video; in Experiment 2, the same full videos were clipped so that video and sign onset were aligned (removing transitional movements), and ERPs were time-locked to video/sign onset. All analyses revealed an N400 repetition priming effect (less negativity for repeated than unrelated signs) but differed in the timing and/or duration of the N400 effect. Results from Experiment 1 revealed that repetition priming effects began before sign onset within a video, suggesting that signers are sensitive to linguistic information within the transitional movement toward sign onset. The timing and duration of the N400 for clipped videos were more similar to those previously observed for auditorily presented words, and the effect was 200 ms shorter than in either time-locking analysis from Experiment 1. We conclude that time-locking to full video onset is optimal when early ERP components or sensitivity to transitional movements are of interest, and that time-locking to the onset of clipped videos is optimal for priming studies with fluent signers.
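A minimal sketch of how the two time-locking schemes and the N400 repetition effect could be computed is shown below, assuming an MNE-Python workflow. The event codes, channel picks, and analysis window are illustrative assumptions, not the authors' parameters.

```python
# Sketch: epoch EEG to two different time locks (video onset vs. sign onset)
# and quantify the N400 repetition effect (unrelated minus repeated).
import mne

EVENT_ID = {"repeated/video": 1, "unrelated/video": 2,
            "repeated/sign": 3, "unrelated/sign": 4}

def n400_repetition_effect(raw, events, lock="sign"):
    """raw: mne.io.Raw; events: (n, 3) event array with the codes above."""
    epochs = mne.Epochs(raw, events, event_id=EVENT_ID,
                        tmin=-0.2, tmax=1.0, baseline=(None, 0), preload=True)
    ev_rep = epochs[f"repeated/{lock}"].average()
    ev_unrel = epochs[f"unrelated/{lock}"].average()
    diff = mne.combine_evoked([ev_unrel, ev_rep], weights=[1, -1])  # unrelated - repeated
    # Mean amplitude over a typical N400 window at centro-parietal sites.
    win = diff.copy().pick(["Cz", "CPz", "Pz"]).crop(0.3, 0.5)
    return win.data.mean()
```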
Collapse
|
49
|
Attitudes Toward Signing Avatars Vary Depending on Hearing Status, Age of Signed Language Acquisition, and Avatar Type. Front Psychol 2022; 13:730917. [PMID: 35222173 PMCID: PMC8866438 DOI: 10.3389/fpsyg.2022.730917] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2021] [Accepted: 01/17/2022] [Indexed: 12/04/2022] Open
Abstract
The use of virtual humans (i.e., avatars) holds the potential for interactive, automated interaction in domains such as remote communication, customer service, or public announcements. For signed language users, signing avatars could potentially provide accessible content by sharing information in the signer's preferred or native language. As the development of signing avatars has gained traction in recent years, researchers have come up with many different methods of creating signing avatars. The resulting avatars vary widely in their appearance, the naturalness of their movements, and facial expressions—all of which may potentially impact users' acceptance of the avatars. We designed a study to test the effects of these intrinsic properties of different signing avatars while also examining the extent to which people's own language experiences change their responses to signing avatars. We created video stimuli showing individual signs produced by (1) a live human signer (Human), (2) an avatar made using computer-synthesized animation (CS Avatar), and (3) an avatar made using high-fidelity motion capture (Mocap avatar). We surveyed 191 American Sign Language users, including Deaf (N = 83), Hard-of-Hearing (N = 34), and Hearing (N = 67) groups. Participants rated the three signers on multiple dimensions, which were then combined to form ratings of Attitudes, Impressions, Comprehension, and Naturalness. Analyses demonstrated that the Mocap avatar was rated significantly more positively than the CS avatar on all primary variables. Correlations revealed that signers who acquire sign language later in life are more accepting of and likely to have positive impressions of signing avatars. Finally, those who learned ASL earlier were more likely to give lower, more negative ratings to the CS avatar, but we did not see this association for the Mocap avatar or the Human signer. Together, these findings suggest that movement quality and appearance significantly impact users' ratings of signing avatars and show that signed language users with earlier age of ASL acquisition are the most sensitive to movement quality issues seen in computer-generated avatars. We suggest that future efforts to develop signing avatars consider retaining the fluid movement qualities integral to signed languages.
Collapse
|
50
|
A Multilingual App for Providing Information to SARS-CoV-2 Vaccination Candidates with Limited Language Proficiency: Development and Pilot. Vaccines (Basel) 2022; 10:360. [PMID: 35334992 PMCID: PMC8955787 DOI: 10.3390/vaccines10030360] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2021] [Revised: 01/24/2022] [Accepted: 02/23/2022] [Indexed: 02/04/2023] Open
Abstract
Language barriers are obstacles to receiving vaccinations against COVID-19. They jeopardize informed consent, vaccination safety, and a positive immunization experience. We have developed a multilingual app to overcome language barriers when dealing with vaccination candidates who have limited proficiency in the locally spoken language. We applied the Spiral Technology Action Research (STAR) model to create the app through a discursive process involving healthcare professionals (HCPs) from vaccination sites, literature searches and guidelines, and field trials at vaccination centers. In a real-world pilot test, we assessed usability and gathered feedback for further improvement. Our efforts resulted in an app that facilitates communication with vaccination candidates in 40 languages, each with over 500 phrases that can be played back or displayed as text. In the pilot test, the app demonstrated its usability and was well accepted by the vaccination candidates (n = 20). The app was mainly used to inform about the risks and benefits of the SARS-CoV-2 vaccination. Some HCPs struggled to navigate the comprehensive content, and the pilot test exposed the need for additional phrases. The STAR model proved flexible in adapting to dynamic pandemic conditions and changing recommendations. This multilingual app overcomes language barriers in healthcare settings, promoting vaccination among migrants with limited language proficiency.
Collapse
|