1
Stamp R, Cohn D, Hel-Or H, Sandler W. Kinect-ing the Dots: Using Motion-Capture Technology to Distinguish Sign Language Linguistic From Gestural Expressions. Language and Speech 2024; 67:255-276. [PMID: 37313985] [DOI: 10.1177/00238309231169502] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Just as vocalization proceeds in a continuous stream in speech, so too do movements of the hands, face, and body in sign languages. Here, we use motion-capture technology to distinguish lexical signs in sign language from other common types of expression in the signing stream. One type of expression is constructed action, the enactment of (aspects of) referents and events by (parts of) the body. Another is classifier constructions, the manual representation of analogue and gradient motions and locations simultaneously with specified referent morphemes. The term signing is commonly used for all of these, but we show that not all visual signals in sign languages are of the same type. In this study of Israeli Sign Language, we use motion capture to show that the motion of lexical signs differs significantly along several kinematic parameters from that of the two other modes of expression: constructed action and the classifier forms. In so doing, we show how motion-capture technology can help to define the universal linguistic category "word," and to distinguish it from the expressive gestural elements that are commonly found across sign languages.
Affiliation(s)
- Rose Stamp
- Department of English Literature and Linguistics, Bar-Ilan University, Israel
- Hagit Hel-Or
- Department of Computer Science, University of Haifa, Israel
- Wendy Sandler
- Sign Language Research Lab, University of Haifa, Israel

2
van der Burght CL, Friederici AD, Maran M, Papitto G, Pyatigorskaya E, Schroën JAM, Trettenbrein PC, Zaccarella E. Cleaning up the Brickyard: How Theory and Methodology Shape Experiments in Cognitive Neuroscience of Language. J Cogn Neurosci 2023; 35:2067-2088. [PMID: 37713672] [DOI: 10.1162/jocn_a_02058] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
The capacity for language is a defining property of our species, yet despite decades of research, evidence on its neural basis is still mixed and a generalized consensus is difficult to achieve. We suggest that this is partly caused by researchers defining "language" in different ways, with focus on a wide range of phenomena, properties, and levels of investigation. Accordingly, there is very little agreement among cognitive neuroscientists of language on the operationalization of fundamental concepts to be investigated in neuroscientific experiments. Here, we review chains of derivation in the cognitive neuroscience of language, focusing on how the hypothesis under consideration is defined by a combination of theoretical and methodological assumptions. We first attempt to disentangle the complex relationship between linguistics, psychology, and neuroscience in the field. Next, we focus on how conclusions that can be drawn from any experiment are inherently constrained by auxiliary assumptions, both theoretical and methodological, on which the validity of conclusions drawn rests. These issues are discussed in the context of classical experimental manipulations as well as study designs that employ novel approaches such as naturalistic stimuli and computational modeling. We conclude by proposing that a highly interdisciplinary field such as the cognitive neuroscience of language requires researchers to form explicit statements concerning the theoretical definitions, methodological choices, and other constraining factors involved in their work.
Affiliation(s)
- Angela D Friederici
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Matteo Maran
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- International Max Planck Research School on Neuroscience of Communication, Leipzig, Germany
- Giorgio Papitto
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- International Max Planck Research School on Neuroscience of Communication, Leipzig, Germany
- Elena Pyatigorskaya
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- International Max Planck Research School on Neuroscience of Communication, Leipzig, Germany
- Joëlle A M Schroën
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- International Max Planck Research School on Neuroscience of Communication, Leipzig, Germany
- Patrick C Trettenbrein
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- International Max Planck Research School on Neuroscience of Communication, Leipzig, Germany
- University of Göttingen, Göttingen, Germany
- Emiliano Zaccarella
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

3
Funayama M, Nakajima A. Development of Self-made Gestures as an Adaptive Strategy for Communication in an Individual With Childhood Apraxia of Speech. Cogn Behav Neurol 2023; 36:249-258. [PMID: 37724738] [DOI: 10.1097/wnn.0000000000000354] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Individuals with childhood apraxia of speech often exhibit greater difficulty with expressive language than with receptive language. As a result, they may benefit from alternative modes of communication. Here, we present a patient with childhood apraxia of speech who used pointing as a means of communication at age 2¼ years and self-made gestures at age 3½, when he had severe difficulties speaking in spite of probable normal comprehension abilities. His original gestures included not only word-level expressions, but also sentence-length ones. For example, when expressing "I am going to bed," he pointed his index finger at himself (meaning I) and then put both his hands together near his ear (sleep). When trying to convey the meaning of "I enjoyed the meal and am leaving," he covered his mouth with his right hand (delicious), then joined both of his hands in front of himself (finish) and finally waved his hands (goodbye). These original gestures and pointing peaked at the age of 4 and then subsided and completely disappeared by the age of 7, when he was able to make himself understood to some extent with spoken words. The present case demonstrates an adaptive strategy for communication that might be an inherent competence for human beings.
Affiliation(s)
- Asuka Nakajima
- Rehabilitation, Ashikaga Red Cross Hospital, Tochigi, Japan

4
Emmorey K. Ten things you should know about sign languages. Current Directions in Psychological Science 2023; 32:387-394. [PMID: 37829330] [PMCID: PMC10568932] [DOI: 10.1177/09637214231173071] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
The ten things you should know about sign languages are the following. 1) Sign languages have phonology and poetry. 2) Sign languages vary in their linguistic structure and family history, but share some typological features due to their shared biology (manual production). 3) Although there are many similarities between perceiving and producing speech and sign, the biology of language can impact aspects of processing. 4) Iconicity is pervasive in sign language lexicons and can play a role in language acquisition and processing. 5) Deaf and hard-of-hearing children are at risk for language deprivation. 6) Signers gesture when signing. 7) Sign language experience enhances some visual-spatial skills. 8) The same left hemisphere brain regions support both spoken and sign languages, but some neural regions are specific to sign language. 9) Bimodal bilinguals can code-blend, rather than code-switch, which alters the nature of language control. 10) The emergence of new sign languages reveals patterns of language creation and evolution. These discoveries reveal how language modality does and does not affect language structure, acquisition, processing, use, and representation in the brain. Sign languages provide unique insights into human language that cannot be obtained by studying spoken languages alone.
Affiliation(s)
- Karen Emmorey
- School of Speech, Language and Hearing Sciences, San Diego State University

5
Luchkina E, Waxman S. Talking About the Absent and the Abstract: Referential Communication in Language and Gesture. Perspectives on Psychological Science 2023:17456916231180589. [PMID: 37603076] [PMCID: PMC10879458] [DOI: 10.1177/17456916231180589] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Human language permits us to call to mind objects, events, and ideas that we cannot witness directly, either because they are absent or because they have no physical form (e.g., people we have not met, concepts like justice). What enables language to transmit such knowledge? We propose that a referential link between words, referents, and mental representations of those referents is key. This link enables us to form, access, and modify mental representations even when the referents themselves are absent ("absent reference"). In this review we consider the developmental and evolutionary origins of absent reference, integrating previously disparate literatures on absent reference in language and gesture in very young humans and gesture in nonhuman primates. We first evaluate when and how infants acquire absent reference during the process of language acquisition. With this as a foundation, we consider the evidence for absent reference in gesture in infants and in nonhuman primates. Finally, having woven these literatures together, we highlight new lines of research that promise to sharpen our understanding of the development of reference and its role in learning about the absent and the abstract.
Affiliation(s)
- Elena Luchkina
- Department of Psychology, Northwestern University, Evanston, IL, United States of America
- Institute of Policy Research, Northwestern University, Evanston, IL, United States of America
- Sandra Waxman
- Department of Psychology, Northwestern University, Evanston, IL, United States of America
- Institute of Policy Research, Northwestern University, Evanston, IL, United States of America

6
Goppelt-Kunkel M, Stroh AL, Hänel-Faulhaber B. Sign learning of hearing children in inclusive day care centers-does iconicity matter? Front Psychol 2023; 14:1196114. [PMID: 37655202] [PMCID: PMC10467423] [DOI: 10.3389/fpsyg.2023.1196114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
An increasing number of experimental studies suggest that signs and gestures can scaffold vocabulary learning for children with and without special educational needs and/or disabilities (SEND). However, little research has been done on the extent to which iconicity plays a role in sign learning, particularly in inclusive day care centers. This current study investigated the role of iconicity in the sign learning of 145 hearing children (2;1 to 6;3 years) from inclusive day care centers with educators who started using sign-supported speech after a training module. Children's sign use was assessed via a questionnaire completed by their educators. We found that older children were more likely to learn signs with a higher degree of iconicity, whereas the learning of signs by younger children was less affected by iconicity. Children with SEND did not benefit more from iconicity than children without SEND. These results suggest that whether iconicity plays a role in sign learning depends on the age of the children.
Affiliation(s)
- Madlen Goppelt-Kunkel
- Department of Special Education, Faculty of Education, Universität Hamburg, Hamburg, Germany
- Anna-Lena Stroh
- Department of Special Education, Faculty of Education, Universität Hamburg, Hamburg, Germany
- Faculty of Psychology, Institute of Psychology, Jagiellonian University, Kraków, Poland
- Barbara Hänel-Faulhaber
- Department of Special Education, Faculty of Education, Universität Hamburg, Hamburg, Germany

7
Bradley C, Wilbur R. Visual Form and Event Semantics Predict Transitivity in Silent Gestures: Evidence for Compositionality. Cogn Sci 2023; 47:e13331. [PMID: 37635624] [DOI: 10.1111/cogs.13331] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Silent gesture is not considered to be linguistic, on par with spoken and sign languages. It is claimed that silent gestures, unlike language, represent events holistically, without compositional structure. However, recent research has demonstrated that gesturers use consistent strategies when representing objects and events, and that there are behavioral and clinically relevant limits on what form a gesture may take to effect a particular meaning. This systematicity challenges a holistic interpretation of silent gesture, which predicts that there should be no stable form-meaning correspondence across event representations. Here, we demonstrate to the contrary that untrained gesturers systematically manipulate the form of their gestures when representing events with and without a theme (e.g., Someone popped the balloon vs. Someone walked), that is, transitive and intransitive events. We elicited silent gestures and annotated them for manual features active in coding transitivity distinctions in sign languages. We trained linear support vector machines to make item-by-item transitivity predictions based on these features. Prediction accuracy was good across the entire dataset, thus demonstrating that systematicity in silent gesture can be explained with recourse to subunits. We argue that handshape features are constructs co-opted from cognitive systems subserving manual action production and comprehension for communicative purposes, which may integrate into the linguistic system of emerging sign languages. We further suggest that nonsigners tend to map event participants to each hand, a strategy found across genetically and geographically distinct sign languages, suggesting the strategy's cognitive foundation.
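The item-by-item classification step described above can be sketched in a few lines. This is not the authors' code: the binary feature names, the codings, and the tiny dataset are hypothetical placeholders, and only the general technique (a linear support vector machine predicting transitivity from annotated manual features, scored by cross-validation) follows the abstract.

```python
# Hypothetical sketch: linear SVM predicting transitivity from binary manual features.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Each row is one elicited silent gesture, hand-annotated for binary features
# (placeholder names: handling handshape, two hands used, hands make contact).
X = np.array([
    [1, 1, 1],
    [1, 0, 1],
    [0, 1, 0],
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 1],
])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = transitive event, 0 = intransitive

clf = LinearSVC(C=1.0, max_iter=10_000)
scores = cross_val_score(clf, X, y, cv=3)  # item-by-item predictions, 3-fold CV
print("mean cross-validated accuracy:", scores.mean())
```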
Affiliation(s)
- Ronnie Wilbur
- Department of Linguistics, Purdue University
- Department of Speech, Language, and Hearing Sciences, Purdue University

8
Huang Y, Du J, Guo X, Li Y, Wang H, Xu J, Xu S, Wang Y, Zhang R, Xiao L, Su T, Tang Y. Insomnia and impacts on facial expression recognition accuracy, intensity and speed: A meta-analysis. J Psychiatr Res 2023; 160:248-257. [PMID: 36870234] [DOI: 10.1016/j.jpsychires.2023.02.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Facial expressions provide nonverbal cues that are important for delivering and interpreting human emotions. Previous studies have shown that the ability to interpret facial emotions correctly could be partially impaired in sleep-deprived people. People with insomnia might also suffer from sleep loss, so we assumed that facial expression recognition ability might also be impaired in people with insomnia. Despite a growing body of research exploring insomnia's potential impacts on facial expression recognition, conflicting results have been reported, and no systematic review of this research body has been conducted. In this study, after screening 1100 records identified through database searches, six articles examining insomnia and facial expression recognition ability were included in a quantitative synthesis. The main outcomes were classification accuracy (ACC), reaction time (RT), and intensity rating, the three most studied facial expression processing variables. Subgroup analysis was performed to identify altered perceptions according to the facial expressions of four emotions (happiness, sadness, fear, and anger) used to examine insomnia and emotion recognition. The pooled standardized mean differences (SMDs) and corresponding 95% confidence intervals (CIs) demonstrated that facial expression recognition among people with insomnia was less accurate (SMD = -0.30; 95% CI: -0.46, -0.14) and slower (SMD = 0.67; 95% CI: 0.18, 1.15) compared to good sleepers. The classification ACC of fearful expressions was lower in the insomnia group (SMD = -0.66; 95% CI: -1.02, -0.30). This meta-analysis was registered with PROSPERO.
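As a rough illustration of how pooled effect sizes of the kind reported above are obtained, the sketch below pools per-study SMDs with inverse-variance weights under a random-effects (DerSimonian-Laird) model. The study-level values are invented placeholders, not data from this meta-analysis.

```python
# Hypothetical sketch: inverse-variance random-effects pooling of SMDs.
import numpy as np

d = np.array([-0.20, -0.35, -0.40, -0.25])   # per-study SMDs (placeholders)
v = np.array([0.04, 0.05, 0.03, 0.06])       # their sampling variances

w = 1.0 / v                                   # fixed-effect weights
d_fixed = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_fixed) ** 2)            # heterogeneity statistic
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(d) - 1)) / C)       # between-study variance (DL estimator)

w_star = 1.0 / (v + tau2)                     # random-effects weights
d_pooled = np.sum(w_star * d) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
lo, hi = d_pooled - 1.96 * se, d_pooled + 1.96 * se
print(f"pooled SMD = {d_pooled:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```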
Affiliation(s)
- Yujia Huang
- Psychology Department, The Second Naval Hospital of Southern Theater Command, Sanya, China; Department of Medical Psychology, Faculty of Psychology, Naval Medical University (Second Military Medical University), Shanghai, China
- Jing Du
- Department of Medical Psychology, Faculty of Psychology, Naval Medical University (Second Military Medical University), Shanghai, China
- Xin Guo
- Department of Medical Psychology, Faculty of Psychology, Naval Medical University (Second Military Medical University), Shanghai, China
- Yinan Li
- Department of Medical Psychology, Faculty of Psychology, Naval Medical University (Second Military Medical University), Shanghai, China
- Hao Wang
- Department of Medical Psychology, Faculty of Psychology, Naval Medical University (Second Military Medical University), Shanghai, China
- Jingzhou Xu
- Department of Medical Psychology, Faculty of Psychology, Naval Medical University (Second Military Medical University), Shanghai, China
- Shuyu Xu
- Department of Medical Psychology, Faculty of Psychology, Naval Medical University (Second Military Medical University), Shanghai, China
- Yajing Wang
- Department of Medical Psychology, Faculty of Psychology, Naval Medical University (Second Military Medical University), Shanghai, China
- Ruike Zhang
- Department of Medical Psychology, Faculty of Psychology, Naval Medical University (Second Military Medical University), Shanghai, China
- Lei Xiao
- Department of Medical Psychology, Faculty of Psychology, Naval Medical University (Second Military Medical University), Shanghai, China
- Tong Su
- Faculty of Psychology, Naval Medical University (Second Military Medical University), Shanghai, China
- Yunxiang Tang
- Faculty of Psychology, Naval Medical University (Second Military Medical University), Shanghai, China

9
Goffman L, Factor L, Barna M, Cai F, Feld I. Phonological and Articulatory Deficits in the Production of Novel Signs in Children With Developmental Language Disorder. Journal of Speech, Language, and Hearing Research 2023; 66:1051-1067. [PMID: 36795546] [PMCID: PMC10205102] [DOI: 10.1044/2022_jslhr-22-00434] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
PURPOSE: Sign language, like spoken language, incorporates phonological and articulatory (or motor) processing components. Thus, the learning of novel signs, like novel spoken word forms, may be problematic for children with developmental language disorder (DLD). In the present work, we hypothesize that phonological and articulatory deficits in novel sign repetition and learning would differentiate preschool-age children with DLD from their typical peers.
METHOD: Children with DLD (n = 34; aged 4-5 years) and their age-matched typical peers (n = 21) participated. Children were exposed to four novel signs, all iconic, but only two linked to a visual referent. Children imitatively produced these novel signs multiple times. We obtained measures of phonological accuracy and articulatory motion stability as well as of learning of the associated visual referent.
RESULTS: Children with DLD showed an increased number of phonological feature (i.e., handshape, path, and orientation of the hands) errors when compared with their typical peers. While articulatory variability did not overall differentiate children with DLD from typical peers, children with DLD showed instability in one novel sign that obligated bimanual oppositional movement. Semantic aspects of novel sign learning were unaffected in children with DLD.
CONCLUSIONS: Deficits that have been documented in phonological organization of spoken words in children with DLD are also evident in the manual domain. Analyses of hand motion variability suggest that children with DLD do not show a generalized motor deficit, but one that is restricted to the implementation of coordinated and sequential hand motion.
Affiliation(s)
- Mitchell Barna
- Ann & Robert H. Lurie Children's Hospital of Chicago, IL
- Ilana Feld
- Department of Communication Sciences and Disorders, Elmhurst University, IL

10
Berent I, Gervain J. Speakers aren't blank slates (with respect to sign-language phonology)! Cognition 2023; 232:105347. [PMID: 36528980] [DOI: 10.1016/j.cognition.2022.105347] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
A large literature has gauged the linguistic knowledge of signers by comparing sign processing by signers and non-signers. Underlying this approach is the assumption that non-signers are devoid of any relevant linguistic knowledge and, as such, present appropriate non-linguistic controls; a recent paper by Meade et al. (2022) articulates this view explicitly. Our commentary revisits this position. Informed by recent findings from adults and infants, we argue that the phonological system is partly amodal. We show that hearing infants use a shared brain network to extract phonological rules from speech and sign. Moreover, adult speakers who are sign-naïve demonstrably project knowledge of their spoken L1 to signs. So, when it comes to sign-language phonology, speakers are not linguistic blank slates. Disregarding this possibility could systematically underestimate the linguistic knowledge of signers and obscure the nature of the language faculty.
Affiliation(s)
- Judit Gervain
- INCC, CNRS & Université Paris Cité, Paris, France; DPSS, University of Padua, Italy

11
Cognitive pragmatics: Insights from homesign conversations. Behav Brain Sci 2023; 46:e8. [PMID: 36799049] [DOI: 10.1017/s0140525x22000826] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Homesign is a visual-gestural form of communication that emerges between deaf individuals and their hearing interlocutors in the absence of a conventional sign language. I argue here that homesign conversations form a perfect test case to study the extent to which pragmatic competence is foundational rather than derived from our linguistic abilities.
12
The Temporal Alignment of Speech-Accompanying Eyebrow Movement and Voice Pitch: A Study Based on Late Night Show Interviews. Behav Sci (Basel) 2023; 13:bs13010052. [PMID: 36661624] [PMCID: PMC9854528] [DOI: 10.3390/bs13010052] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Previous research has shown that eyebrow movement during speech exhibits a systematic relationship with intonation: brow raises tend to be aligned with pitch accents, typically preceding them. The present study approaches the question of temporal alignment between brow movement and intonation from a new angle. The study makes use of footage from the Late Night Show with David Letterman, processed with 3D facial landmark detection. Pitch is modeled as a sinusoidal function whose parameters are correlated with the maximum height of the eyebrows in a brow raise. The results confirm some previous findings on audiovisual prosody but lead to new insights as well. First, the shape of the pitch signal in a region of approx. 630 ms before the brow raise is not random and tends to display a specific shape. Second, while being less informative than the post-peak pitch, the pitch signal in the pre-peak region also exhibits correlations with the magnitude of the associated brow raises. Both of these results point to early preparatory action in the speech signal, calling into question the visual-precedes-acoustic assumption. The results are interpreted as supporting a unified view of gesture/speech co-production that regards both signals as manifestations of a single communicative act.
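A minimal sketch of the modeling idea described above: fit a sinusoid to a short pre-peak pitch window and correlate one fitted parameter with the maximum eyebrow height of the associated brow raise. The synthetic data, the starting frequency, and the choice of amplitude as the correlated parameter are illustrative assumptions, not the study's pipeline.

```python
# Hypothetical sketch: sinusoidal pitch model correlated with brow-raise magnitude.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

def sinusoid(t, a, f, phi, c):
    return a * np.sin(2 * np.pi * f * t + phi) + c

rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.63, 64)                 # ~630 ms pre-peak window
amplitudes, brow_peaks = [], []
for _ in range(30):                            # 30 synthetic brow-raise events
    true_a = rng.uniform(5, 30)                # pitch excursion in Hz
    f0 = 120 + sinusoid(t, true_a, 1.5, rng.uniform(0, np.pi), 0)
    f0 += rng.normal(0, 2, size=t.size)        # measurement noise
    params, _ = curve_fit(sinusoid, t, f0, p0=[10, 1.5, 0, f0.mean()], maxfev=10_000)
    amplitudes.append(abs(params[0]))          # fitted amplitude
    brow_peaks.append(0.3 * true_a + rng.normal(0, 2))  # synthetic brow height

r, p = pearsonr(amplitudes, brow_peaks)
print(f"r = {r:.2f}, p = {p:.3f}")
```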
13
Sato Y, Nishimaru H, Matsumoto J, Setogawa T, Nishijo H. Electroencephalographic Effective Connectivity Analysis of the Neural Networks during Gesture and Speech Production Planning in Young Adults. Brain Sci 2023; 13:brainsci13010100. [PMID: 36672081] [PMCID: PMC9856316] [DOI: 10.3390/brainsci13010100] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Gestures and speech, as linked communicative expressions, form an integrated system. Previous functional magnetic resonance imaging studies have suggested that neural networks for gesture and spoken word production share similar brain regions consisting of fronto-temporo-parietal brain regions. However, information flow within the neural network may dynamically change during the planning of two communicative expressions and also differ between them. To investigate dynamic information flow in the neural network during the planning of gesture and spoken word generation in this study, participants were presented with spatial images and were required to plan the generation of gestures or spoken words to represent the same spatial situations. The evoked potentials in response to spatial images were recorded to analyze the effective connectivity within the neural network. An independent component analysis of the evoked potentials indicated 12 clusters of independent components, the dipoles of which were located in the bilateral fronto-temporo-parietal brain regions and on the medial wall of the frontal and parietal lobes. Comparison of effective connectivity indicated that information flow from the right middle cingulate gyrus (MCG) to the left supplementary motor area (SMA) and from the left SMA to the left precentral area increased during gesture planning compared with that of word planning. Furthermore, information flow from the right MCG to the left superior frontal gyrus also increased during gesture planning compared with that of word planning. These results suggest that information flow to the brain regions for hand praxis is more strongly activated during gesture planning than during word planning.
Affiliation(s)
- Yohei Sato
- Department of System Emotional Science, Faculty of Medicine, University of Toyama, Toyama 930-0194, Japan
- Hiroshi Nishimaru
- Department of System Emotional Science, Faculty of Medicine, University of Toyama, Toyama 930-0194, Japan
- Research Center for Idling Brain Science (RCIBS), University of Toyama, Toyama 930-0194, Japan
- Jumpei Matsumoto
- Department of System Emotional Science, Faculty of Medicine, University of Toyama, Toyama 930-0194, Japan
- Research Center for Idling Brain Science (RCIBS), University of Toyama, Toyama 930-0194, Japan
- Tsuyoshi Setogawa
- Department of System Emotional Science, Faculty of Medicine, University of Toyama, Toyama 930-0194, Japan
- Research Center for Idling Brain Science (RCIBS), University of Toyama, Toyama 930-0194, Japan
- Hisao Nishijo
- Department of System Emotional Science, Faculty of Medicine, University of Toyama, Toyama 930-0194, Japan
- Research Center for Idling Brain Science (RCIBS), University of Toyama, Toyama 930-0194, Japan

14
Karadöller DZ, Sümer B, Ünal E, Özyürek A. Sign advantage: Both children and adults' spatial expressions in sign are more informative than those in speech and gestures combined. Journal of Child Language 2022:1-27. [PMID: 36510476] [DOI: 10.1017/s0305000922000642] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Expressing Left-Right relations is challenging for speaking children, yet this challenge is absent for signing children, possibly due to iconicity in the visual-spatial modality of expression. We investigate whether there is also a modality advantage when speaking children's co-speech gestures are considered. Eight-year-old child and adult hearing monolingual Turkish speakers and deaf signers of Turkish Sign Language described pictures of objects in various spatial relations. Descriptions were coded for informativeness in speech, sign, and speech-gesture combinations for encoding Left-Right relations. The use of co-speech gestures increased the informativeness of speakers' spatial expressions compared to speech only. This pattern was more prominent for children than adults. However, signing adults and children were more informative than child and adult speakers even when co-speech gestures were considered. Thus, both speaking and signing children benefit from iconic expressions in the visual modality. Finally, in each modality, children were less informative than adults, pointing to the challenge of this spatial domain in development.
Affiliation(s)
- Dilay Z Karadöller
- Max Planck Institute for Psycholinguistics, Netherlands
- Centre for Language Studies, Radboud University, Netherlands
- Beyza Sümer
- Max Planck Institute for Psycholinguistics, Netherlands
- Amsterdam Center for Language and Communication, University of Amsterdam, Netherlands
- Ercenur Ünal
- Department of Psychology, Ozyegin University, Istanbul, Turkey
- Aslı Özyürek
- Max Planck Institute for Psycholinguistics, Netherlands
- Centre for Language Studies, Radboud University, Netherlands
- Donders Institute for Brain, Cognition and Behavior, Radboud University, Netherlands

15
Morin O. The puzzle of ideography. Behav Brain Sci 2022; 46:e233. [PMID: 36254782] [DOI: 10.1017/s0140525x22002801] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
An ideography is a general-purpose code made of pictures that do not encode language, which can be used autonomously - not just as a mnemonic prop - to encode information on a broad range of topics. Why are viable ideographies so hard to find? I contend that self-sufficient graphic codes need to be narrowly specialized. Writing systems are only an apparent exception: At their core, they are notations of a spoken language. Even if they also encode nonlinguistic information, they are useless to someone who lacks linguistic competence in the encoded language or a related one. The versatility of writing is thus vicarious: Writing borrows it from spoken language. Why is it so difficult to build a fully generalist graphic code? The most widespread answer points to a learnability problem. We possess specialized cognitive resources for learning spoken language, but lack them for graphic codes. I argue in favor of a different account: What is difficult about graphic codes is not so much learning or teaching them as getting every user to learn and teach the same code. This standardization problem does not affect spoken or signed languages as much. Those are based on cheap and transient signals, allowing for easy online repairing of miscommunication, and require face-to-face interactions where the advantages of common ground are maximized. Graphic codes lack these advantages, which makes them smaller in size and more specialized.
Affiliation(s)
- Olivier Morin
- Max Planck Institute for Geoanthropology, Minds and Traditions Research Group, Jena, Germany; https://www.shh.mpg.de/94549/themintgroup
- Institut Jean Nicod, CNRS, ENS, PSL University, Paris, France

16
Pleyer M, Lepic R, Hartmann S. Compositionality in Different Modalities: A View from Usage-Based Linguistics. Int J Primatol 2022. [DOI: 10.1007/s10764-022-00330-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
The field of linguistics concerns itself with understanding the human capacity for language. Compositionality is a key notion in this research tradition. Compositionality refers to the notion that the meaning of a complex linguistic unit is a function of the meanings of its constituent parts. However, the question as to whether compositionality is a defining feature of human language is a matter of debate: usage-based and constructionist approaches emphasize the pervasive role of idiomaticity in language, and argue that strict compositionality is the exception rather than the rule. We review the major discussion points on compositionality from a usage-based point of view, taking both spoken and signed languages into account. In addition, we discuss theories that aim at accounting for the emergence of compositional language through processes of cultural transmission as well as the debate of whether animal communication systems exhibit compositionality. We argue for a view that emphasizes the analyzability of complex linguistic units, providing a template for accounting for the multimodal nature of human language.
17
Bosworth RG, Hwang SO, Corina DP. Visual attention for linguistic and non-linguistic body actions in non-signing and native signing children. Front Psychol 2022; 13:951057. [PMID: 36160576] [PMCID: PMC9505519] [DOI: 10.3389/fpsyg.2022.951057] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Evidence from adult studies of deaf signers supports the dissociation between neural systems involved in processing visual linguistic and non-linguistic body actions. The question of how and when this specialization arises is poorly understood. Visual attention to these forms is likely to change with age and be affected by prior language experience. The present study used eye-tracking methodology with infants and children as they freely viewed alternating video sequences of lexical American Sign Language (ASL) signs and non-linguistic body actions (self-directed grooming action and object-directed pantomime). In Experiment 1, we quantified fixation patterns using an area of interest (AOI) approach and calculated face preference index (FPI) values to assess the developmental differences between 6- and 11-month-old hearing infants. Both groups were from monolingual English-speaking homes with no prior exposure to sign language. Six-month-olds attended the signer's face for grooming, but for mimes and signs they were drawn to attend to the "articulatory space" where the hands and arms primarily fall. Eleven-month-olds, on the other hand, showed a similar attention to the face for all body action types. We interpret this to reflect an early visual language sensitivity that diminishes with age, just before the child's first birthday. In Experiment 2, we contrasted 18 hearing monolingual English-speaking children (mean age of 4.8 years) vs. 13 hearing children of deaf adults (CODAs; mean age of 5.7 years) whose primary language at home was ASL. Native signing children had a significantly greater face attentional bias than non-signing children for ASL signs, but not for grooming and mimes. The differences in visual attention patterns that are contingent on age (in infants) and language experience (in children) may be related to both linguistic specialization over time and the emerging awareness of communicative gestural acts.
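The AOI-based measure mentioned above can be illustrated with a small helper. The normalized-difference formula used here is a common convention and an assumption; the study's exact FPI definition may differ.

```python
# Hypothetical sketch of a face preference index (FPI) from AOI looking times.
def face_preference_index(face_ms: float, articulator_ms: float) -> float:
    """Return a value in [-1, 1]; positive means more looking at the face AOI."""
    total = face_ms + articulator_ms
    if total == 0:
        raise ValueError("no fixation time recorded in either AOI")
    return (face_ms - articulator_ms) / total

# A hypothetical infant watching a lexical ASL sign:
print(face_preference_index(face_ms=1200, articulator_ms=2300))  # about -0.31
```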
Affiliation(s)
- Rain G. Bosworth
- NTID PLAY Lab, National Technical Institute for the Deaf, Rochester Institute of Technology, Rochester, NY, United States
- So One Hwang
- Center for Research in Language, University of California, San Diego, San Diego, CA, United States
- David P. Corina
- Center for Mind and Brain, University of California, Davis, Davis, CA, United States

18
Pouw W, Fuchs S. Origins of Vocal-Entangled Gesture. Neurosci Biobehav Rev 2022; 141:104836. [PMID: 36031008] [DOI: 10.1016/j.neubiorev.2022.104836] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0]
Abstract
Gestures during speaking are typically understood in a representational framework: they represent absent or distal states of affairs by means of pointing, resemblance, or symbolic replacement. However, humans also gesture along with the rhythm of speaking, which is amenable to a non-representational perspective. Such a perspective centers on the phenomenon of vocal-entangled gestures and builds on evidence showing that when an upper limb with a certain mass decelerates/accelerates sufficiently, it yields impulses on the body that cascade in various ways into the respiratory-vocal system. It entails a physical entanglement between body motions, respiration, and vocal activities. It is shown that vocal-entangled gestures are realized in infant vocal-motor babbling before any representational use of gesture develops. Similarly, an overview is given of vocal-entangled processes in non-human animals. They can frequently be found in rats, bats, birds, and a range of other species that developed even earlier in the phylogenetic tree. Thus, the origins of human gesture lie in biomechanics, emerging early in ontogeny and running deep in phylogeny.
Affiliation(s)
- Wim Pouw
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands
- Susanne Fuchs
- Leibniz Center General Linguistics, Berlin, Germany

19
Royka A, Chen A, Aboody R, Huanca T, Jara-Ettinger J. People infer communicative action through an expectation for efficient communication. Nat Commun 2022; 13:4160. [PMID: 35851397] [PMCID: PMC9293910] [DOI: 10.1038/s41467-022-31716-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Humans often communicate using body movements like winks, waves, and nods. However, it is unclear how we identify when someone's physical actions are communicative. Given people's propensity to interpret each other's behavior as aimed at producing changes in the world, we hypothesize that people expect communicative actions to efficiently reveal that they lack an external goal. Using computational models of goal inference, we predict that movements that are unlikely to be produced when acting towards the world (in particular, repetitive movements) ought to be seen as communicative. We find support for our account across a variety of paradigms, including graded acceptability tasks, forced-choice tasks, indirect prompts, and open-ended explanation tasks, in both market-integrated and non-market-integrated communities. Our work shows that the recognition of communicative action is grounded in an inferential process that stems from fundamental computations shared across different forms of action interpretation. Humans can quickly infer when someone's body movements are meant to be communicative. Here, the authors show that this capacity is underpinned by an expectation that communicative actions will efficiently reveal that they lack an external goal.
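A highly simplified sketch of the inferential logic summarized above (not the authors' model): an observer scores how efficiently an observed movement would serve each candidate world-directed goal and shifts belief toward a communicative reading when the movement is inefficient for all of them. The exponential likelihood, the priors, and all numbers are illustrative assumptions.

```python
# Hypothetical sketch: belief that a movement is communicative rather than instrumental.
import numpy as np

def posterior_communicative(efficiencies, prior_comm=0.2, beta=4.0):
    """efficiencies: how efficient the movement is for each world-directed goal (0-1)."""
    lik_goals = np.exp(beta * np.asarray(efficiencies))  # instrumental likelihoods
    lik_comm = np.exp(beta * 1.0)  # a communicative act is treated as maximally
                                   # "efficient" at revealing it has no world goal
    prior_goal = (1 - prior_comm) / len(efficiencies)
    post_comm = prior_comm * lik_comm
    post_goals = prior_goal * lik_goals
    return post_comm / (post_comm + post_goals.sum())

# A repetitive wave is a poor way to achieve any object-directed goal:
print(posterior_communicative([0.05, 0.10, 0.02]))  # high: read as communicative
# A direct reach is efficient for one candidate goal:
print(posterior_communicative([0.95, 0.10, 0.02]))  # lower: read as instrumental
```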
Affiliation(s)
- Amanda Royka
- Department of Psychology, Yale University, New Haven, CT, USA
- Annie Chen
- Department of Computer Science, Yale University, New Haven, CT, USA
- Rosie Aboody
- Department of Psychology, Yale University, New Haven, CT, USA
- Tomas Huanca
- Centro Boliviano de Desarrollo Socio-Integral, La Paz, Bolivia
- Julian Jara-Ettinger
- Department of Psychology, Yale University, New Haven, CT, USA; Department of Computer Science, Yale University, New Haven, CT, USA; Wu Tsai Institute, Yale University, New Haven, CT, USA

20
Abstract
Given the achievements in automatically translating text from one language to another, one would expect to see similar advancements in translating between signed and spoken languages. However, progress in this effort has lagged in comparison. Typically, machine translation consists of processing text from one language to produce text in another. Because signed languages have no generally accepted written form, translating spoken to signed language requires the additional step of displaying the language visually as animation through the use of a three-dimensional (3D) virtual human commonly known as an avatar. Researchers have been grappling with this problem for over twenty years, and it is still an open question. With the goal of developing a deeper understanding of the challenges posed by this question, this article gives a summary overview of the unique aspects of signed languages, briefly surveys the technology underlying avatars and performs an in-depth analysis of the features in a textual representation for avatar display. It concludes with a comparison of these features and makes observations about future research directions.
21
How and When to Sign "Hey!" Socialization into Grammar in Z, a 1st Generation Family Sign Language from Mexico. Languages 2022. [DOI: 10.3390/languages7020080] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5]
Abstract
“Z” is a young sign language developing in a family whose hearing members speak Tzotzil (Mayan). Three deaf siblings, together with an intervening hearing sister and a hearing niece, formed the original cohort of signing adults. A hearing son of the original signer became the first native signer of a second generation. Z provides evidence for a classic grammaticalization chain linking a sign requesting attention (HEY1) to a pragmatic turn-initiating particle (HEY2), which signals a new utterance or change of topic. Such an emergent grammatical particle linked to the pragmatic exigencies of communication is a primordial example of emergent grammar. The chapter presents the stages in the son’s language socialization and acquisition of HEY1 and HEY2, starting at 11 months, through his subsequent bilingual development in both Z and Tzotzil, jointly deploying other communicative modalities such as gaze and touch. It proposes a series of stages leading, by 4 years of age, to his understanding of the complex sequential structure that using the sign involves. Acquiring pragmatic signs such as HEY in Z demonstrates how the grammar of a language, including an emergent sign language, is built upon the practices of a language community and the basic expected parameters of local social life.
22
Factor L, Goffman L. Phonological characteristics of novel gesture production in children with developmental language disorder: Longitudinal findings. Applied Psycholinguistics 2022; 43:333-362. [PMID: 35342208] [PMCID: PMC8955622] [DOI: 10.1017/s0142716421000540] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5]
Abstract
Children with developmental language disorder (DLD; aka specific language impairment) are characterized based on deficits in language, especially morphosyntax, in the absence of other explanatory conditions. However, deficits in speech production, as well as fine and gross motor skill, have also been observed, implicating both the linguistic and motor systems. Situated at the intersection of these domains, and providing insight into both, is manual gesture. In the current work, we asked whether children with DLD showed phonological deficits in the production of novel gestures and whether gesture production at 4 years of age is related to language and motor outcomes two years later. Twenty-eight children (14 with DLD) participated in a two-year longitudinal novel gesture production study. At the first and final time points, language and fine motor skills were measured and gestures were analyzed for phonological feature accuracy, including handshape, path, and orientation. Results indicated that, while early deficits in phonological accuracy did not persist for children with DLD, all children struggled with orientation while handshape was the most accurate. Early handshape and orientation accuracy were also predictive of later language skill, but only for the children with DLD. Theoretical and clinical implications of these findings are discussed.
23
Abdullahi SB, Chamnongthai K. American Sign Language Words Recognition of Skeletal Videos Using Processed Video Driven Multi-Stacked Deep LSTM. Sensors (Basel) 2022; 22:1406. [PMID: 35214309] [PMCID: PMC8963088] [DOI: 10.3390/s22041406] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0]
Abstract
Complex hand gesture interactions among dynamic sign words may lead to misclassification, which affects the recognition accuracy of the ubiquitous sign language recognition system. This paper proposes to augment the feature vector of dynamic sign words with knowledge of hand dynamics as a proxy and classify dynamic sign words using motion patterns based on the extracted feature vector. In this method, some double-hand dynamic sign words have ambiguous or similar features across a hand motion trajectory, which leads to classification errors. Thus, the similar/ambiguous hand motion trajectory is determined based on the approximation of a probability density function over a time frame. Then, the extracted features are enhanced by transformation using maximal information correlation. These enhanced features of 3D skeletal videos captured by a leap motion controller are fed as a state transition pattern to a classifier for sign word classification. To evaluate the performance of the proposed method, an experiment is performed with 10 participants on 40 double-hand dynamic ASL words, which reveals 97.98% accuracy. The method is further developed on challenging ASL, SHREC, and LMDHG data sets and outperforms conventional methods by 1.47%, 1.56%, and 0.37%, respectively.
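For readers unfamiliar with the general architecture named in the title, the sketch below shows a multi-stacked LSTM classifier over fixed-length sequences of skeletal hand features. Layer sizes, sequence length, feature count, and the random training data are illustrative assumptions, not the paper's configuration or preprocessing.

```python
# Hypothetical sketch: stacked-LSTM classifier for dynamic sign-word sequences.
import numpy as np
import tensorflow as tf

num_words = 40    # ASL word classes
seq_len = 60      # frames per skeletal clip
num_feats = 48    # hand-joint features per frame (placeholder)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(seq_len, num_feats)),
    tf.keras.layers.LSTM(128, return_sequences=True),  # first stacked LSTM layer
    tf.keras.layers.LSTM(64),                           # second LSTM layer
    tf.keras.layers.Dense(num_words, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random stand-in data in place of real leap-motion recordings.
X = np.random.rand(200, seq_len, num_feats).astype("float32")
y = np.random.randint(0, num_words, size=200)
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
print(model.predict(X[:1]).argmax(axis=-1))  # predicted word index
```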
Affiliation(s)
- Sunusi Bala Abdullahi
- Department of Computer Engineering, Faculty of Engineering, King Mongkut's University of Technology Thonburi, Bangkok 10140, Thailand
- Zonal Criminal Investigation Department, The Nigeria Police, Louis Edet House Force Headquarters, Shehu Shagari Way, Abuja 900221, Nigeria
- Kosin Chamnongthai
- Department of Electronic and Telecommunication Engineering, King Mongkut's University of Technology Thonburi, Bangkok 10140, Thailand

24
Pasternak R, Tieu L. EXPRESS: Co-linguistic content inferences: From gestures to sound effects and emoji. Q J Exp Psychol (Hove) 2022; 75:1828-1843. [PMID: 35114858] [DOI: 10.1177/17470218221080645] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5]
Abstract
Among other uses, co-speech gestures can contribute additional semantic content to the spoken utterances with which they coincide. A growing body of research is dedicated to understanding how inferences from gestures interact with logical operators in speech, including negation ("not"/"n't"), modals (e.g., "might"), and quantifiers (e.g., "each", "none", "exactly one"). A related but less-addressed question is what kinds of meaningful content other than gestures can evince this same behavior; this is in turn connected to the much broader question of what properties of gestures are responsible for how they interact with logical operators. We present two experiments investigating sentences with co-speech sound effects and co-text emoji in lieu of gestures, revealing a remarkably similar inference pattern to that of co-speech gestures. The results suggest that gestural inferences do not behave the way they do because of any traits specific to gestures, and that the inference pattern extends to a much broader range of content.
Affiliation(s)
- Lyn Tieu
- Western Sydney University, Office of the Pro Vice-Chancellor (Research & Innovation), Penrith, Australia

25
Capirci O, Bonsignori C, Di Renzo A. Signed Languages: A Triangular Semiotic Dimension. Front Psychol 2022; 12:802911. [PMID: 35095689] [PMCID: PMC8792841] [DOI: 10.3389/fpsyg.2021.802911] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5]
Abstract
Since the beginning of signed language research, linguistic units have been divided into conventional, standard, and fixed signs, which were considered the core of the language, and iconic and productive signs, which were put at the edge of language. In the present paper, we review different models proposed by signed language researchers over the years to describe the signed lexicon, showing how to overcome the hierarchical division between the standard and productive lexicon. Drawing on the semiotic insights of Peirce, we propose to look at signs as a triadic construction built on symbolic, iconic, and indexical features. In our model, the different iconic, symbolic, and indexical features of signs are seen as the three sides of the same triangle, detectable in the single linguistic sign (Capirci, 2018; Puupponen, 2019). The key aspect is that the dominance of a given feature will determine the different uses of the linguistic unit, as we show with examples from different discourse types (narratives, conference talks, poems, a theater monolog).
Affiliation(s)
- Olga Capirci
- Institute of Cognitive Sciences and Technologies (ISTC), National Research Council (CNR) of Italy, Rome, Italy
- Chiara Bonsignori
- Institute of Cognitive Sciences and Technologies (ISTC), National Research Council (CNR) of Italy, Rome, Italy
- Department of Letters and Modern Cultures, Sapienza University of Rome, Rome, Italy
- Alessio Di Renzo
- Institute of Cognitive Sciences and Technologies (ISTC), National Research Council (CNR) of Italy, Rome, Italy

26
Özyürek A. Considering the Nature of Multimodal Language from a Crosslinguistic Perspective. J Cogn 2021; 4:42. [PMID: 34514313] [PMCID: PMC8396132] [DOI: 10.5334/joc.165] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Language in its primary face-to-face context is multimodal (e.g., Holler and Levinson, 2019; Perniss, 2018). Thus, understanding how expressions in the vocal and visual modalities together contribute to our notions of language structure, use, processing, and transmission (i.e., acquisition, evolution, emergence) in different languages and cultures should be a fundamental goal of the language sciences. This requires a new framework of language that brings together how arbitrary and non-arbitrary and motivated semiotic resources of language relate to each other. The current commentary evaluates such a proposal by Murgiano et al. (2021) from a crosslinguistic perspective, taking variation as well as systematicity in multimodal utterances into account.
Affiliation(s)
- Asli Özyürek
- Donders Institute Brain, Cognition and Behavior, Center for Language Studies, Radboud University and Max Planck Institute for Psycholinguistics, NL

27
Senghas A. Connecting Language Acquisition and Language Evolution. Minnesota Symposia on Child Psychology 2021. [DOI: 10.1002/9781119684527.ch3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3]

28

29
Wright B, Phillips H, Allgar V, Sweetman J, Hodkinson R, Hayward E, Ralph-Lewis A, Teige C, Bland M, Le Couteur A. Adapting and validating the Autism Diagnostic Interview-Revised for use with deaf children and young people. Autism 2021; 26:446-459. [PMID: 34269085] [DOI: 10.1177/13623613211029116] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3]
Abstract
LAY ABSTRACT: Autism assessment processes need to improve for deaf children as they are currently being diagnosed later than their hearing counterparts and misdiagnosis can occur. We took one of the most commonly used parent developmental interviews for autism spectrum disorder, the Autism Diagnostic Interview-Revised, and adapted it using international expert advice. Modifications were proposed and agreed by the expert panel for 45% of items; the remaining 55% of items were unchanged. We then tested the revised version, adapted for deaf children (Autism Diagnostic Interview-Revised Deaf Adaptation), in a UK sample of 78 parents/carers of deaf children with autism spectrum disorder and 126 parents/carers of deaf children without autism spectrum disorder. When compared to National Institute for Health and Care Excellence guideline standard clinical assessments, the Autism Diagnostic Interview-Revised Deaf Adaptation diagnostic algorithm threshold scores could identify those deaf children with a definite diagnosis (true autism spectrum disorder positives) well (sensitivity of 89% (79%-96%)) and those deaf children who did not have autism spectrum disorder (true autism spectrum disorder negatives) well (specificity of 81% (70%-89%)). Our findings indicate that the Autism Diagnostic Interview-Revised Deaf Adaptation is likely to prove a useful measure for the assessment of deaf children with suspected autism spectrum disorder and that further research would be helpful.
Collapse
Affiliation(s)
- Barry Wright
- University of York, York, UK.,Leeds and York Partnership NHS Foundation Trust, UK
| | | | | | | | | | | | | | | | | | | |
Collapse
|
30
|
Novack MA, Brentari D, Goldin-Meadow S, Waxman S. Sign language, like spoken language, promotes object categorization in young hearing infants. Cognition 2021; 215:104845. [PMID: 34273677 DOI: 10.1016/j.cognition.2021.104845] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2020] [Revised: 04/19/2021] [Accepted: 07/07/2021] [Indexed: 11/18/2022]
Abstract
The link between language and cognition is unique to our species and emerges early in infancy. Here, we provide the first evidence that this precocious language-cognition link is not limited to spoken language, but is instead sufficiently broad to include sign language, a language presented in the visual modality. Four- to six-month-old hearing infants, never before exposed to sign language, were familiarized to a series of category exemplars, each presented by a woman who either signed in American Sign Language (ASL) while pointing and gazing toward the objects, or pointed and gazed without language (control). At test, infants viewed two images: one, a new member of the now-familiar category; and the other, a member of an entirely new category. Four-month-old infants who observed ASL distinguished between the two test objects, indicating that they had successfully formed the object category; they were as successful as age-mates who listened to their native (spoken) language. Moreover, it was specifically the linguistic elements of sign language that drove this facilitative effect: infants in the control condition, who observed the woman only pointing and gazing, failed to form object categories. Finally, the cognitive advantages of observing ASL quickly narrow in hearing infants: by 5 to 6 months, watching ASL no longer supports categorization, although listening to their native spoken language continues to do so. Together, these findings illuminate the breadth of infants' early link between language and cognition and offer insight into how it unfolds.
Collapse
Affiliation(s)
- Miriam A Novack
- Department of Medical Social Sciences, Northwestern University, Chicago, IL, United States of America; Department of Psychology, Northwestern University, Evanston, IL, United States of America.
| | - Diane Brentari
- Department of Linguistics, University of Chicago, Chicago, IL, United States of America
| | - Susan Goldin-Meadow
- Department of Psychology, University of Chicago, Chicago, IL, United States of America
| | - Sandra Waxman
- Department of Psychology, Northwestern University, Evanston, IL, United States of America
| |
Collapse
|
31
|
Schüler M, Stroh AL, Hänel-Faulhaber B. Gebärden in inklusiven Kitas – erste Ergebnisse einer Langzeitstudie. SPRACHE · STIMME · GEHÖR 2021. [DOI: 10.1055/a-1169-3861] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
Summary
Background To meet children's heterogeneous communicative needs, sign-supported communication was introduced in inclusive daycare centers (Kitas). This study is the first to examine, in this natural learning environment, whether children of different ages and at different stages of language development acquire signs through modeling.
Method Questionnaire data were collected from 289 children before implementation and again 6 months later.
Results After 6 months, a significant growth in active sign vocabulary was observed. Significant effects on vocabulary size were found for the factors stage of language development and strength of implementation. No significant effect of age was observed.
Discussion Children of different ages and with different expressive language abilities acquire signs through modeling in inclusive daycare centers. Children who are further along in their language development have a slight advantage.
Collapse
Affiliation(s)
- Maren Schüler
- Universität Hamburg, Fakultät für Erziehungswissenschaft
| | | | | |
Collapse
|
32
|
Loos C, Napoli DJ. Expanding Echo: Coordinated Head Articulations as Nonmanual Enhancements in Sign Language Phonology. Cogn Sci 2021; 45:e12958. [PMID: 34018245 DOI: 10.1111/cogs.12958] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2020] [Revised: 11/09/2020] [Accepted: 02/06/2021] [Indexed: 11/28/2022]
Abstract
Echo phonology was originally proposed to account for obligatory coordination of manual and mouth articulations observed in several sign languages. However, previous research into the phenomenon lacks clear criteria for which components of movement can or must be copied when the articulators are so different. Nor is there discussion of which nonmanual articulators can echo manual movement. Given the prosodic properties of echoes (coordination of onset/offset and of dynamics such as speed) as well as general motoric coordination of various articulators in the human body, we expect that the mouth is not the only nonmanual articulator involved in echo phonology. In this study, we look at a fixed set of lexical items across 36 sign languages and establish that the head can echo manual movement with respect to timing and to the axis/axes of manual movement. We propose that what matters in echo phonology is the visual percept of temporally coordinated movement that repeats a salient movement property in such a way as to give the visual impression of a copy. Our findings suggest that echoes are not obligatory motor couplings of two or more articulators but may enhance phonological distinctions that are otherwise difficult to see.
Collapse
Affiliation(s)
- Cornelia Loos
- Institut für Deutsche Gebärdensprache, Universität Hamburg
| | | |
Collapse
|
33
|
Wicke P, Veale T. Creative Action at a Distance: A Conceptual Framework for Embodied Performance With Robotic Actors. Front Robot AI 2021; 8:662182. [PMID: 33996928 PMCID: PMC8120109 DOI: 10.3389/frobt.2021.662182] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2021] [Accepted: 04/12/2021] [Indexed: 11/25/2022] Open
Abstract
Acting, stand-up and dancing are creative, embodied performances that nonetheless follow a script. Unless experimental or improvised, the performers draw their movements from much the same stock of embodied schemas. A slavish following of the script leaves no room for creativity, but active interpretation of the script does. It is the choices one makes, of words and actions, that make a performance creative. In this theory and hypothesis article, we present a framework for performance and interpretation within robotic storytelling. The performance framework is built upon movement theory, and defines a taxonomy of basic schematic movements and the most important gesture types. For the interpretation framework, we hypothesise that emotionally-grounded choices can inform acts of metaphor and blending, to elevate a scripted performance into a creative one. Theory and hypothesis are each grounded in empirical research, and aim to provide resources for other robotic studies of the creative use of movement and gestures.
Collapse
Affiliation(s)
- Philipp Wicke
- School of Computer Science, University College Dublin, Belfield, Ireland
| | - Tony Veale
- School of Computer Science, University College Dublin, Belfield, Ireland
| |
Collapse
|
34
|
Romano M, Eugenio J, Kiratzis E. Coaching Childcare Providers to Support Toddlers' Gesture Use With Children Experiencing Early Childhood Poverty. Lang Speech Hear Serv Sch 2021; 52:686-701. [PMID: 33788592 DOI: 10.1044/2020_lshss-20-00112] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
Purpose The purpose of this study is to examine the impact of an intervention in which childcare providers (CCPs) are coached to support toddlers' gesture use during everyday classroom routines. Method This study uses a multiple-baseline-across-strategies single-case experimental design to examine the impact of a coaching intervention on three CCPs' use of communication strategies with toddlers experiencing early childhood poverty. The CCPs were coached with a systematic framework called Setting the Stage, Observation and Opportunities to Embed, Problem-solving and Planning, Reflection and Review as they learned to implement three strategies to support toddlers' gesture use: modeling gestures with a short phrase, providing opportunities to gesture, and responding to/expanding child gestures. CCPs were coached during book sharing and another classroom routine of their choice. Social validity data on the coaching approach and on the intervention strategies were gathered from postintervention interviews. Results The visual analysis and Nonoverlap of All Pairs effect size indicate that the coaching intervention had a functional relation with CCPs' use of modeling gestures and responding to/expanding gestures during book sharing, play, and circle time. Social validity data indicate that CCPs found the coaching framework supportive of their learning and feelings of self-efficacy, and that the intervention strategies supported their toddlers' communication. Conclusions The coaching framework was used to increase CCP strategy use during everyday classroom routines with toddlers. CCPs endorsed the coaching approach and the intervention strategies. This study adds to the literature supporting efforts to enhance children's earliest language learning environments. Supplemental Material https://doi.org/10.23641/asha.14044055.
Collapse
Affiliation(s)
- Mollie Romano
- Communication and Early Childhood Research and Practice Center, School of Communication Science and Disorders, Florida State University, Tallahassee
| | - Johanna Eugenio
- Communication and Early Childhood Research and Practice Center, School of Communication Science and Disorders, Florida State University, Tallahassee
| | - Edie Kiratzis
- Communication and Early Childhood Research and Practice Center, School of Communication Science and Disorders, Florida State University, Tallahassee
| |
Collapse
|
35
|
Etxepare R, Irurtzun A. Gravettian hand stencils as sign language formatives. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200205. [PMID: 33745310 DOI: 10.1098/rstb.2020.0205] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Several Upper Palaeolithic archaeological sites from the Gravettian period display hand stencils with missing fingers. On the basis of the stencils that Leroi-Gourhan identified in the cave of Gargas (France) in the late 1960s, we explore the hypothesis that those stencils represent hand signs with deliberate folding of fingers, intentionally projected as a negative figure onto the wall. Through a study of the biomechanics of handshapes, we analyse the articulatory effort required for producing the handshapes under the stencils in the Gargas cave, and show that only handshapes that are articulable in the air can be found among the existing stencils. In other words, handshape configurations that would have required using the cave wall as a support for the fingers are not attested. We argue that the stencils correspond to the type of handshape that one ordinarily finds in sign language phonology. More concretely, we claim that they correspond to signs of an 'alternate' or 'non-primary' sign language, like those still employed by a number of bimodal (speaking and signing) human groups in hunter-gatherer populations, such as Australian First Nations or the Plains Indians. In those groups, signing is used for hunting and for a rich array of ritual purposes, including mourning and traditional storytelling. We discuss further evidence, based on typological generalizations about the phonology of non-primary sign languages and comparative ethnographic work, that points to such a parallelism. This evidence includes the fact that for some of those groups, stencil and petroglyph art has independently been linked to their sign language expressions. This article is part of the theme issue 'Reconstructing prehistoric languages'.
Collapse
|
36
|
Garcia B, Sallandre MA. Contribution of the Semiological Approach to Deixis-Anaphora in Sign Language: The Key Role of Eye-Gaze. Front Psychol 2020; 11:583763. [PMID: 33240174 PMCID: PMC7677344 DOI: 10.3389/fpsyg.2020.583763] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2020] [Accepted: 09/10/2020] [Indexed: 11/13/2022] Open
Abstract
We address the issue of deixis–anaphora in sign language (SL) discourse, focusing on the role of eye-gaze. According to the Semiological Approach, SL structuring stems from a maximum exploitation of the visuo-gestural modality, which results in two modes of meaning production, depending on the signer’s semiotic intent. Involving both non-manual and manual parameters, the first mode, expressing the intent to say while showing, uses constructions based on structures termed “transfer structures.” The second one, expressing the intent to say without showing, involves lexical, pointing and fingerspelling units. In order to situate our descriptive concepts with respect to those used by SL linguists who, like us, adopt a cognitive–functionalist perspective, we present a specific theoretical foundation of our approach, the “enunciation theories.” The concept of “enunciation” is decisive for understanding the role of eye-gaze as being at the foundation of deixis and the key vector of referential creation and tracking in SL discourse. “Enunciation” entails the opposition between “Enunciation” and “Utterance” Domains. The first links, as co-enunciators, the signer/speaker and his/her addressee, establishing them by the very “act of enunciation” as 1st and 2nd person. The second is internal to the discourse produced. Drawing on corpora of narratives in several SLs (some with no historical link), we illustrate this crucial role of eye-gaze and the diversity of functions it fulfills. Our analyses, carried out in this perspective, attest to the multiple structural similarities between SLs, particularly with regard to transfer structures. This result strongly supports the typological hypothesis underlying our approach, namely, that these structures are common to all SLs. We thus show that an enunciative analysis, based on the key role of eye-gaze in these visual languages that are SLs, is able to give the simplest account of their own linguistic economy and, in particular, of deixis–anaphora in these languages.
Collapse
Affiliation(s)
- Brigitte Garcia
- Structures Formelles du Langage Laboratory, UMR 7023, Centre National de la Recherche Scientifique, University of Paris 8 - University Paris Lumières, Paris, France
| | - Marie-Anne Sallandre
- Structures Formelles du Langage Laboratory, UMR 7023, Centre National de la Recherche Scientifique, University of Paris 8 - University Paris Lumières, Paris, France
| |
Collapse
|
37
|
Padden CA. Review Essay on Sign Language in Papua New Guinea, by Adam Kendon. (John Benjamins, 2020). OCEANIA 2020. [DOI: 10.1002/ocea.5284] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
38
|
Abstract
Deaf anthropology is a field that exists in conversation with but is not reducible to the interdisciplinary field of deaf studies. Deaf anthropology is predicated upon a commitment to understanding deafnesses across time and space while holding on to “deaf” as a category that does something socially, politically, morally, and methodologically. In doing so, deaf anthropology moves beyond compartmentalizing the body, the senses, and disciplinary boundaries. We analyze the close relationship between anthropology writ large and deaf studies: Deaf studies scholars have found analytics and categories from anthropology, such as the concept of culture, to be productive in analyzing deaf peoples’ experiences and the sociocultural meanings of deafness. As we note, however, scholarship on deaf peoples’ experiences is increasingly variegated. This review is arranged into four overlapping sections titled Socialities and Similitudes; Mobilities, Spaces, and Networks; Modalities and the Sensorium; and Technologies and Futures.
Collapse
Affiliation(s)
- Michele Friedner
- Department of Comparative Human Development, University of Chicago, Chicago, Illinois 60637, USA
| | - Annelies Kusters
- Department of Languages and Intercultural Studies, School of Social Sciences, Heriot-Watt University, Edinburgh EH14 4AS, Scotland, United Kingdom
| |
Collapse
|
39
|
Sparrow K, Lind C, van Steenbrugge W. Gesture, communication, and adult acquired hearing loss. JOURNAL OF COMMUNICATION DISORDERS 2020; 87:106030. [PMID: 32707420 DOI: 10.1016/j.jcomdis.2020.106030] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/21/2018] [Revised: 06/18/2020] [Accepted: 06/19/2020] [Indexed: 06/11/2023]
Abstract
Nonverbal communication, specifically hand and arm movements (commonly known as gesture), has long been recognized and explored as a significant element in human interaction as well as potential compensatory behavior for individuals with communication difficulties. The use of gesture as a compensatory communication method in expressive and receptive human communication disorders has been the subject of much investigation. Yet within the context of adult acquired hearing loss, gesture has received limited research attention and much remains unknown about patterns of nonverbal behaviors in conversations in which hearing loss is a factor. This paper presents key elements of the background of gesture studies and the theories of gesture function and production followed by a review of research focused on adults with hearing loss and the role of gesture and gaze in rehabilitation. The current examination of the visual resource of co-speech gesture in the context of everyday interactions involving adults with acquired hearing loss suggests the need for the development of an evidence base to effect enhancements and changes in the way in which rehabilitation services are conducted.
Collapse
Affiliation(s)
- Karen Sparrow
- Audiology, College of Nursing & Health Sciences, Flinders University, GPO Box 2100, Adelaide, 5001, South Australia, Australia.
| | - Christopher Lind
- Audiology, College of Nursing & Health Sciences, Flinders University, GPO Box 2100, Adelaide, 5001, South Australia, Australia.
| | - Willem van Steenbrugge
- Speech Pathology, College of Nursing & Health Sciences, Flinders University, GPO Box 2100, Adelaide, 5001, South Australia, Australia.
| |
Collapse
|
40
|
Systematic mappings between semantic categories and types of iconic representations in the manual modality: A normed database of silent gesture. Behav Res Methods 2020; 52:51-67. [PMID: 30788798 PMCID: PMC7005091 DOI: 10.3758/s13428-019-01204-6] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
An unprecedented number of empirical studies have shown that iconic gestures (those that mimic the sensorimotor attributes of a referent) contribute significantly to language acquisition, perception, and processing. However, there has been a lack of normed studies describing generalizable principles in gesture production and in comprehension of the mappings of different types of iconic strategies (i.e., modes of representation; Müller, 2013). In Study 1 we elicited silent gestures in order to explore the implementation of different types of iconic representation (i.e., acting, representing, drawing, and personification) to express concepts across five semantic domains. In Study 2 we investigated the degree of meaning transparency (i.e., iconicity ratings) of the gestures elicited in Study 1. We found systematicity in the gestural forms of 109 concepts across all participants, with different types of iconicity aligning with specific semantic domains: acting was favored for actions and manipulable objects, drawing for nonmanipulable objects, and personification for animate entities. Interpretation of gesture-meaning transparency was modulated by the interaction between mode of representation and semantic domain, with some couplings being more transparent than others: acting yielded higher ratings for actions, representing for object-related concepts, personification for animate entities, and drawing for nonmanipulable entities. This study provides mapping principles that may extend to all forms of manual communication (gesture and sign). The database includes a list of the most systematic silent gestures in the group of participants, a notation of the form of each gesture based on four features (hand configuration, orientation, placement, and movement), each gesture's mode of representation, iconicity ratings, and professionally filmed videos that can be used for experimental and clinical endeavors.
Collapse
|
41
|
Thorpe RK, Smith RJH. Future directions for screening and treatment in congenital hearing loss. PRECISION CLINICAL MEDICINE 2020; 3:175-186. [PMID: 33209510 PMCID: PMC7653508 DOI: 10.1093/pcmedi/pbaa025] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2020] [Revised: 07/06/2020] [Accepted: 07/12/2020] [Indexed: 02/06/2023] Open
Abstract
Hearing loss is the most common neurosensory deficit. It results from a variety of heritable and acquired causes and is linked to multiple deleterious effects on a child's development that can be ameliorated by prompt identification and individualized therapies. Diagnosing hearing loss in newborns is challenging, especially in mild or progressive cases, and its management requires a multidisciplinary team of healthcare providers comprising audiologists, pediatricians, otolaryngologists, and genetic counselors. While physiologic newborn hearing screening has resulted in earlier diagnosis of hearing loss than ever before, a growing body of knowledge supports the concurrent implementation of genetic and cytomegalovirus testing to offset the limitations inherent to a singular screening modality. In this review, we discuss the contemporary role of screening for hearing loss in newborns as well as future directions in its diagnosis and treatment.
Collapse
Affiliation(s)
- Ryan K Thorpe
- Molecular Otolaryngology and Renal Research Laboratories, Carver College of Medicine, University of Iowa, 375 Newton Rd, Iowa City, IA 52242, USA
| | - Richard J H Smith
- Molecular Otolaryngology and Renal Research Laboratories, Carver College of Medicine, University of Iowa, 375 Newton Rd, Iowa City, IA 52242, USA
| |
Collapse
|
42
|
Gimeno-Martínez M, Costa A, Baus C. Influence of Gesture and Linguistic Experience on Sign Perception. JOURNAL OF DEAF STUDIES AND DEAF EDUCATION 2020; 25:80-90. [PMID: 31504619 DOI: 10.1093/deafed/enz031] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/24/2018] [Revised: 05/29/2019] [Accepted: 06/25/2019] [Indexed: 06/10/2023]
Abstract
In recent years, there has been a significant increase in the number of people learning sign languages. For hearing second language (L2) signers, acquiring a sign language involves acquiring a new language in a different modality. Exploring how L2 sign perception is accomplished and how newly learned categories are created is the aim of the present study. In particular, we investigated handshape perception by means of two tasks, identification and discrimination. In two experiments, we compared groups of hearing L2 signers and groups with different knowledge of sign language. Experiment 1 explored three groups of children: hearing L2 signers, deaf signers, and hearing nonsigners. All groups obtained similar results in both identification and discrimination tasks, regardless of sign language experience. In Experiment 2, two groups of adults, learners of Catalan Sign Language (LSC) and nonsigners, perceived handshapes that could be permissible (either as a sign or as a gesture) or not. Both groups obtained similar results in both tasks and performed significantly differently when perceiving handshapes depending on their permissibility. The results obtained here suggest that sign language experience is not a determining factor in handshape perception and support other hypotheses that consider gesture experience.
Collapse
Affiliation(s)
| | - Albert Costa
- Center for Brain and Cognition (CBC), Universitat Pompeu Fabra
- Institució Catalana de Recerca i Estudis Avançats (ICREA)
| | - Cristina Baus
- Center for Brain and Cognition (CBC), Universitat Pompeu Fabra
| |
Collapse
|
43
|
Hilton M, Räling R, Wartenburger I, Elsner B. Parallels in Processing Boundary Cues in Speech and Action. Front Psychol 2019; 10:1566. [PMID: 31379649 PMCID: PMC6646704 DOI: 10.3389/fpsyg.2019.01566] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2019] [Accepted: 06/20/2019] [Indexed: 11/13/2022] Open
Abstract
Speech and action sequences are continuous streams of information that can be segmented into sub-units. In both domains, this segmentation can be facilitated by perceptual cues contained within the information stream. In speech, prosodic cues (e.g., a pause, pre-boundary lengthening, and pitch rise) mark boundaries between words and phrases, while boundaries between actions of an action sequence can be marked by kinematic cues (e.g., a pause, pre-boundary deceleration). The processing of prosodic boundary cues evokes an event-related potential (ERP) component known as the Closure Positive Shift (CPS), and it is possible that the CPS reflects domain-general cognitive processes involved in segmentation, given that the CPS is also evoked by boundaries between subunits of non-speech auditory stimuli. This study further probed the domain-generality of the CPS and its underlying processes by investigating electrophysiological correlates of the processing of boundary cues in sequences of spoken verbs (auditory stimuli; Experiment 1; N = 23 adults) and actions (visual stimuli; Experiment 2; N = 23 adults). The EEG data from both experiments revealed a CPS-like, broadly distributed positivity during the 250 ms prior to the onset of the post-boundary word or action, indicating similar electrophysiological correlates of boundary processing across domains and suggesting that the cognitive processes underlying speech and action segmentation might also be shared.
Collapse
Affiliation(s)
- Matt Hilton
- Department of Psychology, Cognitive Sciences, University of Potsdam, Potsdam, Germany
| | - Romy Räling
- Department of Linguistics, Cognitive Sciences, University of Potsdam, Potsdam, Germany
| | - Isabell Wartenburger
- Department of Linguistics, Cognitive Sciences, University of Potsdam, Potsdam, Germany
| | - Birgit Elsner
- Department of Psychology, Cognitive Sciences, University of Potsdam, Potsdam, Germany
| |
Collapse
|
44
|
Toe D, Paatsch L, Szarkowski A. Assessing Pragmatic Skills Using Checklists with Children who are Deaf and Hard of Hearing: A Systematic Review. JOURNAL OF DEAF STUDIES AND DEAF EDUCATION 2019; 24:189-200. [PMID: 30929005 DOI: 10.1093/deafed/enz004] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/28/2018] [Revised: 01/28/2019] [Indexed: 06/09/2023]
Abstract
This paper investigates the use of checklists to assess pragmatics in children and adolescents who are deaf and hard of hearing. A systematic literature review was undertaken to identify all of the published research articles between 1979 and 2018 on the topic of the assessment of pragmatics for this population of children and adolescents. The 67 papers identified in this review were analyzed and all papers that utilized a checklist to assess pragmatic skills were identified. Across the 18 different published papers on the use of pragmatic skills among children who are deaf and hard of hearing, nine checklists were identified. These nine checklists were then compared and contrasted on six key features including identification of a theoretical framework or model; the type of pragmatic skills measured; the age range of the child assessed; the information/outputs generated; the primary informant for the assessment; and reliability, validity, and normative data. The resulting analysis provides a comprehensive guide to aid clinicians, educators, and researchers in selecting an appropriate checklist to assess pragmatic skills for children and adolescents who are deaf and hard of hearing.
Collapse
Affiliation(s)
| | | | - Amy Szarkowski
- Children's Center for Communication/Beverly School for the Deaf, Department of Psychiatry, Harvard Medical School
| |
Collapse
|
45
|
Hearing non-signers use their gestures to predict iconic form-meaning mappings at first exposure to signs. Cognition 2019; 191:103996. [PMID: 31238248 DOI: 10.1016/j.cognition.2019.06.008] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2018] [Revised: 06/04/2019] [Accepted: 06/06/2019] [Indexed: 11/20/2022]
Abstract
The sign languages of deaf communities and the gestures produced by hearing people are communicative systems that exploit the manual-visual modality as means of expression. Despite their striking differences they share the property of iconicity, understood as the direct relationship between a symbol and its referent. Here we investigate whether non-signing hearing adults exploit their implicit knowledge of gestures to bootstrap accurate understanding of the meaning of iconic signs they have never seen before. In Study 1 we show that for some concepts gestures exhibit systematic forms across participants, and share different degrees of form overlap with the signs for the same concepts (full, partial, and no overlap). In Study 2 we found that signs with stronger resemblance to the elicited gestures are more accurately guessed and are assigned higher iconicity ratings by non-signers than signs with low overlap. In addition, when more people produced a systematic gesture resembling a sign, they assigned higher iconicity ratings to that sign. Furthermore, participants had a bias to assume that signs represent actions and not objects. The similarities between some signs and gestures could be explained by deaf signers and hearing gesturers sharing a conceptual substrate that is rooted in our embodied experiences with the world. The finding that gestural knowledge can ease the interpretation of the meaning of novel signs and predicts iconicity ratings is in line with embodied accounts of cognition and the role of prior knowledge in acquiring new schemas. Through these mechanisms we propose that iconic gestures that overlap in form with signs may serve as some type of 'manual cognates' that help non-signing adults to break into a new language at first exposure.
Collapse
|
46
|
Abstract
Contemporary semantics has uncovered a sophisticated typology of linguistic inferences, characterized by their conversational status and their behavior in complex sentences. This typology is usually thought to be specific to language and in part lexically encoded in the meanings of words. We argue that it is neither. Using a method involving "composite" utterances that include normal words alongside novel nonlinguistic iconic representations (gestures and animations), we observe successful "one-shot learning" of linguistic meanings, with four of the main inference types (implicatures, presuppositions, supplements, homogeneity) replicated with gestures and animations. The results suggest a deeper cognitive source for the inferential typology than usually thought: Domain-general cognitive algorithms productively divide both linguistic and nonlinguistic information along familiar parts of the linguistic typology.
Collapse
Affiliation(s)
- Lyn Tieu
- Office of the Pro Vice-Chancellor (Research and Innovation), Western Sydney University, Penrith NSW 2751, Australia;
- School of Education, Western Sydney University, Penrith NSW 2751, Australia
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith NSW 2751, Australia
- Australian Research Council (ARC) Centre of Excellence in Cognition and its Disorders, Australian Hearing Hub, Macquarie University, Sydney NSW 2109, Australia
| | - Philippe Schlenker
- Département d'Etudes Cognitives, Ecole Normale Supérieure (ENS), Université Paris Sciences et Lettres (PSL), Ecole des Hautes Etudes en Sciences Sociales (EHESS), Centre National de la Recherche Scientifique (CNRS), 75005 Paris, France
- Institut Jean-Nicod, CNRS, 75005 Paris, France
- Department of Linguistics, New York University, New York, NY 10003
| | - Emmanuel Chemla
- Département d'Etudes Cognitives, Ecole Normale Supérieure (ENS), Université Paris Sciences et Lettres (PSL), Ecole des Hautes Etudes en Sciences Sociales (EHESS), Centre National de la Recherche Scientifique (CNRS), 75005 Paris, France
- Laboratoire de Sciences Cognitives et Psycholinguistique, CNRS, 75005 Paris, France
| |
Collapse
|
47
|
Zdrazilova L, Sidhu DM, Pexman PM. Communicating abstract meaning: concepts revealed in words and gestures. Philos Trans R Soc Lond B Biol Sci 2019; 373:rstb.2017.0138. [PMID: 29915006 DOI: 10.1098/rstb.2017.0138] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/02/2017] [Indexed: 11/12/2022] Open
Abstract
Abstract words refer to concepts that cannot be directly experienced through our senses (e.g. truth, morality). How we ground the meanings of abstract words is one of the deepest problems in cognitive science today. We investigated this question in an experiment in which 62 participants were asked to communicate the meanings of words (20 abstract nouns, e.g. impulse; 10 concrete nouns, e.g. insect) to a partner without using the words themselves (the taboo task). We analysed the speech and associated gestures that participants used to communicate the meaning of each word in the taboo task. Analysis of verbal and gestural data yielded a number of insights. When communicating about the meanings of abstract words, participants' speech referenced more people and introspections. In contrast, the meanings of concrete words were communicated by referencing more objects and entities. Gesture results showed that when participants spoke about abstract word meanings their speech was accompanied by more metaphorical and beat gestures, and speech about concrete word meanings was accompanied by more iconic gestures. Taken together, the results suggest that abstract meanings are best captured by a model that allows dynamic access to multiple representation systems. This article is part of the theme issue 'Varieties of abstract concepts: development, use and representation in the brain'.
Collapse
Affiliation(s)
- Lenka Zdrazilova
- Department of Psychology, Faculty of Arts, University of Calgary, Calgary, Alberta, Canada T2N 1N4
| | - David M Sidhu
- Department of Psychology, Faculty of Arts, University of Calgary, Calgary, Alberta, Canada T2N 1N4
| | - Penny M Pexman
- Department of Psychology, Faculty of Arts, University of Calgary, Calgary, Alberta, Canada T2N 1N4
| |
Collapse
|
48
|
Quer J, Steinbach M. Handling Sign Language Data: The Impact of Modality. Front Psychol 2019; 10:483. [PMID: 30914998 PMCID: PMC6423168 DOI: 10.3389/fpsyg.2019.00483] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2018] [Accepted: 02/19/2019] [Indexed: 11/13/2022] Open
Abstract
Natural languages come in two different modalities. The impact of modality on grammatical structure and linguistic theory has been discussed at great length in the last 20 years. By contrast, the impact of modality on linguistic data elicitation and collection, corpus studies, and experimental (psycholinguistic) studies is still underinvestigated. In this article, we address specific challenges that arise in judgment data elicitation and experimental studies of sign languages. These challenges are related to the socio-linguistic status of the Deaf community and the greater variability across signers within the same community, to the social status of sign languages, to properties of the visual-gestural modality and its interface with gesture, to methodological aspects of handling sign language data, and to specific linguistic features of sign languages. While some of these challenges also pertain to (some varieties of) spoken languages, other challenges are more modality-specific. The particular combination of challenges discussed in this article, however, appears to be specific to empirical research on sign languages. In addition, we discuss the complementarity of theoretical approaches and experimental studies and show how the interaction of both approaches contributes to a better understanding of sign languages in particular and linguistic structures in general.
Collapse
Affiliation(s)
- Josep Quer
- ICREA-Pompeu Fabra University, Barcelona, Spain
| | | |
Collapse
|
49
|
Mayberry RI, Kluender R. Rethinking the critical period for language: New insights into an old question from American Sign Language. BILINGUALISM (CAMBRIDGE, ENGLAND) 2018; 21:938-944. [PMID: 31662701 PMCID: PMC6818964 DOI: 10.1017/s1366728918000585] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
We thank the commentators for their thoughtful critiques, which we found both insightful and stimulating to our own thinking. Our first response is that, while debates about the critical period for language (CPL) in theoretical contexts are important, the vigor and intensity of these debates should not overshadow the fact that the main goal of our article was to highlight a finding of vital importance: sufficient language input in early childhood matters deeply because it has long-term consequences (Lillo-Martin, 2018). Woll sums up this point both succinctly and poignantly in her report of a similar case of very late L1 exposure in adulthood, followed by decades of experience: “For a [deaf] child who, even in the context of early intervention, does not acquire a spoken language, the danger is that they will never have native-like mastery of any L1.” This is what truly matters. Our hope is that our keynote article and the accompanying commentaries might have a positive effect on clinical practice, educational policy, and even parental choice in this regard. In what follows, we discuss the main issues arising from the commentaries. First, we note the points of agreement, followed by a clarification of what we did not claim in our article. Researchers continue to debate what the shape of the age-of-acquisition (AoA) function looks like and its theoretical implications, which we address third. We then address the issues raised as to whether late L1 acquisition and late L2 learning differ in degree or kind, and last we discuss what we mean when we say that language acquisition during post-natal brain growth creates the capacity to learn language.
Collapse
Affiliation(s)
| | - Robert Kluender
- Department of Linguistics, University of California San Diego
| |
Collapse
|
50
|
Mayberry RI, Kluender R. Rethinking the critical period for language: New insights into an old question from American Sign Language. BILINGUALISM (CAMBRIDGE, ENGLAND) 2018; 21:886-905. [PMID: 30643489 PMCID: PMC6329394 DOI: 10.1017/s1366728917000724] [Citation(s) in RCA: 36] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
The hypothesis that children surpass adults in long-term second-language proficiency is accepted as evidence for a critical period for language. However, the scope and nature of a critical period for language has been the subject of considerable debate. The controversy centers on whether the age-related decline in ultimate second-language proficiency is evidence for a critical period or something else. Here we argue that age-onset effects for first vs. second language outcome are largely different. We show this by examining psycholinguistic studies of ultimate attainment in L2 vs. L1 learners, longitudinal studies of adolescent L1 acquisition, and neurolinguistic studies of late L2 and L1 learners. This research indicates that L1 acquisition arises from post-natal brain development interacting with environmental linguistic experience. By contrast, L2 learning after early childhood is scaffolded by prior childhood L1 acquisition, both linguistically and neurally, making it a less clear test of the critical period for language.
Collapse
Affiliation(s)
| | - Robert Kluender
- Department of Linguistics, University of California San Diego
| |
Collapse
|