1
Baugerud GA, Johnson MS, Dianiska R, Røed RK, Powell MB, Lamb ME, Hassan SZ, Sabet SS, Hicks S, Salehi P, Riegler MA, Halvorsen P, Quas J. Using an AI-based avatar for interviewer training at Children's Advocacy Centers: Proof of Concept. Child Maltreatment 2024:10775595241263017. [PMID: 38889731] [DOI: 10.1177/10775595241263017]
Abstract
This proof-of-concept study focused on interviewers' behaviors and perceptions when interacting with a dynamic AI child avatar alleging abuse. Professionals (N = 68) took part in a virtual reality (VR) study in which they questioned an avatar presented as a child victim of sexual or physical abuse. Of interest was how interviewers questioned the avatar, how productive the child avatar was in response, and how interviewers perceived the VR interaction. Findings suggested alignment between interviewers' virtual questioning approaches and their typical questioning behavior in real-world investigative interviews, with a diverse range of questions used to elicit disclosures from the child avatar. The avatar responded to most question types as children typically do, though more nuanced programming of the avatar's productivity in response to complex question types is needed. Participants rated the avatar positively and felt comfortable with the VR experience. Results underscored the potential of AI-based interview training as a scalable, standardized alternative to traditional methods.
Affiliation(s)
- Saeed S Sabet
- Simula Metropolitan Center for Digital Engineering AS, Lysaker, Norway
- Steven Hicks
- Simula Metropolitan Center for Digital Engineering AS, Lysaker, Norway
- Pegah Salehi
- Simula Metropolitan Center for Digital Engineering AS, Lysaker, Norway
- Michael A Riegler
- Simula Metropolitan Center for Digital Engineering AS, Lysaker, Norway
- Pål Halvorsen
- Simula Metropolitan Center for Digital Engineering AS, Lysaker, Norway
- Jodi Quas
- University of California Irvine, Irvine, CA, USA
2
Titus A, Peeters D. Multilingualism at the Market: A Pre-registered Immersive Virtual Reality Study of Bilingual Language Switching. J Cogn 2024; 7:35. [PMID: 38638461] [PMCID: PMC11025569] [DOI: 10.5334/joc.359]
Abstract
Bilinguals, by definition, are capable of expressing themselves in more than one language. But which cognitive mechanisms allow them to switch from one language to another? Previous experimental research using the cued language-switching paradigm supports theoretical models that assume that both transient (reactive) and sustained (proactive) inhibitory mechanisms underlie bilinguals' capacity to flexibly and efficiently control which language they use. Here we used immersive virtual reality to test the extent to which these inhibitory mechanisms may be active when unbalanced Dutch-English bilinguals i) produce full sentences rather than individual words, ii) speak to a life-size addressee rather than only into a microphone, iii) convey a message that is relevant to that addressee rather than communicatively irrelevant, and iv) do so in a rich visual environment rather than in front of a computer screen. We observed a reversed language dominance paired with switch costs for the L2 but not for the L1 when participants were stand owners in a virtual marketplace and informed their monolingual customers in full sentences about the price of their fruits and vegetables. These findings strongly suggest that the subtle balance between the reactive and proactive inhibitory mechanisms that support bilingual language control may differ between the everyday life of a bilingual and the (traditional) psycholinguistic laboratory.
Affiliation(s)
- Alex Titus
- Radboud University, Centre for Language Studies, Nijmegen, the Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- David Peeters
- Tilburg University, Department of Communication and Cognition, TiCC, Tilburg, the Netherlands
3
Nirme J, Gulz A, Haake M, Gullberg M. Early or synchronized gestures facilitate speech recall: a study based on motion capture data. Front Psychol 2024; 15:1345906. [PMID: 38596333] [PMCID: PMC11002957] [DOI: 10.3389/fpsyg.2024.1345906]
Abstract
Introduction: Temporal coordination between speech and gestures has been thoroughly studied in natural production. In most cases gesture strokes precede or coincide with the stressed syllable of the words they are semantically associated with.
Methods: To understand whether the processing of speech and gestures is attuned to such temporal coordination, we investigated the effect of delaying, preposing, or eliminating individual gestures on memory for words in an experimental study in which 83 participants watched video sequences of naturalistic 3D-animated speakers generated from motion capture data. A target word in the sequence appeared (a) with a gesture presented in its original position synchronized with speech, (b) temporally shifted 500 ms before or (c) after the original position, or (d) with the gesture eliminated. Participants were asked to retell the videos in a free recall task. Strength of recall was operationalized as inclusion of the target word in the free recall.
Results: Both eliminated and delayed gesture strokes resulted in reduced recall rates compared to synchronized strokes, whereas there was no difference between advanced (preposed) and synchronized strokes. An item-level analysis also showed that the greater the interval between the onsets of delayed strokes and the stressed syllables of target words, the greater the negative effect on recall.
Discussion: These results indicate that speech-gesture synchrony affects memory for speech, and that the temporal patterns common in production lead to the best recall. Importantly, the study also showcases a procedure for using motion-capture-based 3D-animated speakers to create an experimental paradigm for studying speech-gesture comprehension.
Affiliation(s)
- Jens Nirme
- Lund University Cognitive Science, Lund, Sweden
- Agneta Gulz
- Lund University Cognitive Science, Lund, Sweden
- Marianne Gullberg
- Centre for Languages and Literature and Lund University Humanities Lab, Lund University, Lund, Sweden
4
Titus A, Dijkstra T, Willems RM, Peeters D. Beyond the tried and true: How virtual reality, dialog setups, and a focus on multimodality can take bilingual language production research forward. Neuropsychologia 2024; 193:108764. [PMID: 38141963] [DOI: 10.1016/j.neuropsychologia.2023.108764]
Abstract
Bilinguals possess the ability to express themselves in more than one language, and typically do so in contextually rich and dynamic settings. Theories and models have indeed long considered context factors to affect bilingual language production in many ways. However, most experimental studies in this domain have failed to fully incorporate linguistic, social, or physical context aspects, let alone combine them in the same study. Most experimental psycholinguistic research has instead taken place in isolated and constrained lab settings with carefully selected words or sentences, rather than under rich and naturalistic conditions. We argue that the most influential experimental paradigms in the psycholinguistic study of bilingual language production fall short of capturing the effects of context on language processing and control presupposed by prominent models. This paper therefore aims to enrich the methodological basis for investigating context aspects in current experimental paradigms and thereby move the field of bilingual language production research forward theoretically. After considering extensions of existing paradigms proposed to address context effects, we present three far-ranging innovative proposals, focusing on virtual reality, dialog situations, and multimodality in the context of bilingual language production.
Affiliation(s)
- Alex Titus
- Radboud University, Centre for Language Studies, Nijmegen, the Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands.
- Ton Dijkstra
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, the Netherlands
- Roel M Willems
- Radboud University, Centre for Language Studies, Nijmegen, the Netherlands
- David Peeters
- Tilburg University, Department of Communication and Cognition, TiCC, Tilburg, the Netherlands
5
Nota N, Trujillo JP, Jacobs V, Holler J. Facilitating question identification through natural intensity eyebrow movements in virtual avatars. Sci Rep 2023; 13:21295. [PMID: 38042876] [PMCID: PMC10693605] [DOI: 10.1038/s41598-023-48586-4]
Abstract
In conversation, recognizing social actions (similar to 'speech acts') early is important to quickly understand the speaker's intended message and to provide a fast response. Fast turns are typical for fundamental social actions like questions, since a long gap can indicate a dispreferred response. In multimodal face-to-face interaction, visual signals may contribute to this fast dynamic. The face is an important source of visual signalling, and previous research found that prevalent facial signals such as eyebrow movements facilitate the rapid recognition of questions. We aimed to investigate whether early eyebrow movements with natural movement intensities facilitate question identification, and whether specific intensities are more helpful in detecting questions. Participants were instructed to view videos of avatars where the presence of eyebrow movements (eyebrow frown or raise vs. no eyebrow movement) was manipulated, and to indicate whether the utterance in the video was a question or statement. Results showed higher accuracies for questions with eyebrow frowns, and faster response times for questions with eyebrow frowns and eyebrow raises. No additional effect was observed for the specific movement intensity. This suggests that eyebrow movements that are representative of naturalistic multimodal behaviour facilitate question recognition.
Affiliation(s)
- Naomi Nota
- Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, The Netherlands.
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands.
- James P Trujillo
- Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Vere Jacobs
- Faculty of Arts, Radboud University, Nijmegen, The Netherlands
- Judith Holler
- Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
6
Raghavan R, Raviv L, Peeters D. What's your point? Insights from virtual reality on the relation between intention and action in the production of pointing gestures. Cognition 2023; 240:105581. [PMID: 37573692] [DOI: 10.1016/j.cognition.2023.105581]
Abstract
Human communication involves the process of translating intentions into communicative actions. But how exactly do our intentions surface in the visible communicative behavior we display? Here we focus on pointing gestures, a fundamental building block of everyday communication, and investigate whether and how different types of underlying intent modulate the kinematics of the pointing hand and the brain activity preceding the gestural movement. In a dynamic virtual reality environment, participants pointed at a referent to either share attention with their addressee, inform their addressee, or get their addressee to perform an action. Behaviorally, it was observed that these different underlying intentions modulated how long participants kept their arm and finger still, both prior to starting the movement and when keeping their pointing hand in apex position. In early planning stages, a neurophysiological distinction was observed between a gesture that is used to share attitudes and knowledge with another person versus a gesture that mainly uses that person as a means to perform an action. Together, these findings suggest that our intentions influence our actions from the earliest neurophysiological planning stages to the kinematic endpoint of the movement itself.
Affiliation(s)
- Renuka Raghavan
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Radboud University, Donders Institute for Brain, Cognition, and Behavior, Nijmegen, The Netherlands
- Limor Raviv
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Centre for Social, Cognitive and Affective Neuroscience (cSCAN), University of Glasgow, United Kingdom
- David Peeters
- Tilburg University, Department of Communication and Cognition, TiCC, Tilburg, The Netherlands
7
Park M, Suk HJ. The characteristics of facial emotions expressed in Memojis. Computers in Human Behavior Reports 2022. [DOI: 10.1016/j.chbr.2022.100241]
8
RNN Language Processing Model-Driven Spoken Dialogue System Modeling Method. Computational Intelligence and Neuroscience 2022; 2022:6993515. [PMID: 35256880] [PMCID: PMC8898104] [DOI: 10.1155/2022/6993515]
Abstract
Speech recognition and spoken language understanding (SLU) are critical components in determining a spoken dialogue system's (SDS) performance, and improving SLU performance is a central goal in SDS research. A recurrent neural network (RNN) language model predicts the next word by conditioning on the context surrounding the input text sequence. The RNN language model's probability score is introduced to rescore the recognizer's intermediate results. To address the mismatch between test data and training data in recognition, a method that combines cache RNN models is proposed to optimize the decoding process and improve the accuracy of the language model's word-sequence probabilities on test data. Experimental results show that the proposed method effectively improves the recognition system's performance on the test set and can achieve a higher SLU score, which is useful for future research on spoken dialogue and SLU.
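The rescoring step described in this abstract can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: a hypothetical bigram lookup table stands in for the trained RNN language model, and the hypotheses, scores, and interpolation weight are all invented for the example.

```python
import math

# Hypothetical bigram table standing in for a trained RNN language model;
# every probability here is invented for illustration only.
TOY_LM = {
    ("<s>", "recognize"): 0.6, ("recognize", "speech"): 0.7,
    ("<s>", "wreck"): 0.1, ("wreck", "a"): 0.3,
    ("a", "nice"): 0.4, ("nice", "beach"): 0.3,
}

def lm_logprob(words, floor=1e-3):
    """Log-probability of a word sequence under the stand-in language model."""
    prev, total = "<s>", 0.0
    for w in words:
        total += math.log(TOY_LM.get((prev, w), floor))  # floor unseen bigrams
        prev = w
    return total

def rescore(nbest, lm_weight=0.8):
    """Re-rank n-best hypotheses by acoustic log-score + weighted LM log-score."""
    return max(nbest, key=lambda h: h[1] + lm_weight * lm_logprob(h[0]))

# Two competing recognizer hypotheses as (words, acoustic log-score) pairs.
nbest = [
    (["recognize", "speech"], -12.0),
    (["wreck", "a", "nice", "beach"], -11.5),  # acoustically slightly better
]
best_words, _ = rescore(nbest)  # the LM score overturns the acoustic ranking
```

With the language model's score interpolated in, the fluent hypothesis wins even though the other scored slightly better acoustically, which is the point of rescoring intermediate recognition results.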
9
Abstract
Despite recent developments in integrating autonomous and human-like robots into many aspects of everyday life, social interactions with robots are still a challenge. Here, we focus on a central tool for social interaction: verbal communication. We assess the extent to which humans co-represent (simulate and predict) a robot's verbal actions. During a joint picture naming task, participants took turns in naming objects together with a social robot (Pepper, Softbank Robotics). Previous findings using this task with human partners revealed internal simulations on behalf of the partner down to the level of selecting words from the mental lexicon, reflected in partner-elicited inhibitory effects on subsequent naming. Here, with the robot, the partner-elicited inhibitory effects were not observed. Instead, naming was facilitated, as revealed by faster naming of word categories co-named with the robot. This facilitation suggests that robots, unlike humans, are not simulated down to the level of lexical selection. Instead, a robot's speaking appears to be simulated at the initial level of language production where the meaning of the verbal message is generated, resulting in facilitated language production due to conceptual priming. We conclude that robots facilitate core conceptualization processes when humans transform thoughts to language during speaking.
10
Methodological and institutional considerations for the use of 360-degree video and pet animals in human subject research: An experimental case study from the United States. Behav Res Methods 2021; 53:977-992. [PMID: 32918168] [DOI: 10.3758/s13428-020-01458-5]
Abstract
Head-mounted virtual-reality headsets and virtual-reality content have experienced large technological advances and rapid proliferation over recent years. These immersive technologies bear great potential for facilitating the study of human decision-making and behavior in safe, perceptually realistic virtual environments. Best practices and guidelines for the effective and efficient use of 360-degree video in experimental research are also evolving. In this paper, we summarize our research group's experiences with a sizable experimental case study involving virtual-reality technology, 360-degree video, pet animals, and human participants. Specifically, we discuss the institutional, methodological, and technological challenges encountered during the implementation of our 18-month-long research project on human emotional response to short-duration 360-degree videos of human-pet interactions. Our objective in this paper is to contribute to the growing body of research on 360-degree video and to lower barriers related to the conceptualization and practice of research at the intersection of virtual-reality experiences, 360-degree video, live animals, and human behavior. Practical suggestions for human-subject researchers interested in utilizing virtual-reality technology, 360-degree videos, and pet animals as part of their research are discussed.
11
Brucker-Kley E, Kleinberger U, Keller T, Christen J, Keller-Senn A, Koppitz A. Identifying research gaps: A review of virtual patient education and self-management. Technol Health Care 2021; 29:1057-1069. [PMID: 33998564] [DOI: 10.3233/thc-202665]
Abstract
BACKGROUND: Avatars in virtual reality (VR) can not only represent humans but also embody intelligent software agents that communicate with humans, enabling a new paradigm of human-machine interaction.
OBJECTIVE: The research agenda proposed in this paper by an interdisciplinary team is motivated by the premise that a conversation with a smart-agent avatar in VR means more than giving a face and body to a chatbot. Using the concrete communication task of patient education, the agenda is intended to explore which patterns and practices must be constructed visually, verbally, para- and nonverbally between humans and embodied machines in a counselling context, so that humans can integrate counselling by an embodied VR smart agent into their thinking and acting.
METHODS: The scientific literature in different bibliographic databases was reviewed. A qualitative narrative approach was applied for analysis.
RESULTS: A research agenda is proposed that investigates how recurring consultations between patients and healthcare professionals are currently conducted and how they could be conducted with an embodied smart agent in immersive VR.
CONCLUSIONS: Interdisciplinary teams of linguists, computer scientists, visual designers, and healthcare professionals are required, and they need to go beyond a technology-centric solution design approach. Linguists' insights from discourse analysis drive the exploratory experiments to identify, test, and discover the capabilities and attributes the smart agent in VR must have in order to communicate effectively with a human being.
Affiliation(s)
- Thomas Keller
- ZHAW Zurich University of Applied Sciences, Winterthur, Switzerland
- Andrea Koppitz
- University of Applied Sciences and Arts Western Switzerland, Fribourg, Switzerland
12
Vaezipour A, Aldridge D, Koenig S, Theodoros D, Russell T. "It's really exciting to think where it could go": a mixed-method investigation of clinician acceptance, barriers and enablers of virtual reality technology in communication rehabilitation. Disabil Rehabil 2021; 44:3946-3958. [PMID: 33715566] [DOI: 10.1080/09638288.2021.1895333]
Abstract
PURPOSE: Acquired communication disorders can result in significant barriers to everyday life activities, and commonly require long-term rehabilitation. This research aimed to investigate usability, acceptance, barriers, and enablers to the use of immersive virtual reality (VR) technology for communication rehabilitation from the perspective of speech-language pathologists (SLPs).
METHODS: Semi-structured interviews and surveys (system usability and motion sickness) were carried out with 15 SLPs following their participation in communication activities typical of daily life, experienced within an immersive VR kitchen environment.
RESULTS: The system usability scores were average. In addition, motion sickness symptoms were low after interaction with the VR system. The main findings from semi-structured interviews are discussed across five main themes: (i) attitude towards the use of VR in communication rehabilitation, (ii) perceived usefulness of VR, (iii) perceived ease of use of VR, (iv) intention to use VR, and (v) clinical adoption barriers and enablers.
CONCLUSIONS: Overall, participants were positive about VR and its potential applications to communication rehabilitation. This study provides a foundation to inform the design, development, and implementation of a VR system to be used in the rehabilitation of individuals with acquired communication disorders.
IMPLICATIONS FOR REHABILITATION:
- Virtual reality applications could simulate social communication situations within the clinic.
- VR could be used as a rehabilitation tool for communication assessment and/or outcome measure.
- VR requires customisation to the specific communication rehabilitation needs of the client.
- Participants identified barriers and enablers to adoption of VR by speech-language pathologists.
Affiliation(s)
- Atiyeh Vaezipour
- RECOVER Injury Research Centre, Faculty of Health and Behavioural Sciences, The University of Queensland, Brisbane, Australia
- Danielle Aldridge
- RECOVER Injury Research Centre, Faculty of Health and Behavioural Sciences, The University of Queensland, Brisbane, Australia
- Deborah Theodoros
- RECOVER Injury Research Centre, Faculty of Health and Behavioural Sciences, The University of Queensland, Brisbane, Australia
- Trevor Russell
- RECOVER Injury Research Centre, Faculty of Health and Behavioural Sciences, The University of Queensland, Brisbane, Australia
13
Welches Potenzial haben virtuelle Realitäten in der klinischen und forensischen Psychiatrie? Ein Überblick über aktuelle Verfahren und Einsatzmöglichkeiten [What potential do virtual realities hold in clinical and forensic psychiatry? An overview of current methods and applications]. Forensische Psychiatrie, Psychologie, Kriminologie 2020. [DOI: 10.1007/s11757-020-00611-2]
Abstract
Virtual realities (VR) have been successfully used and further developed in the diagnosis and treatment of patients in the clinical field for 20 years. For a little more than 5 years, there have also been first examples of the use of VR in forensic psychiatric contexts. For forensic psychiatry, the ability to create realistic, safe, and controllable diagnostic and learning environments is the decisive advantage of VR technology. For example, offenders can be treated or assessed in scenarios that would be risky, unethical, or ecologically invalid in real life. This article presents various current study examples on the clinical treatment and diagnosis of patients as well as the forensic prognosis and therapy of offenders. The overview thus shows that VR can now also be a promising tool in forensic psychiatry, one that can complement or extend already established instruments. VR applications can also be helpful in the training of forensic psychiatric professionals. There are already first promising uses of training with virtual patients, but extensive research in this field is still needed before they can be deployed in everyday professional practice. Before using VR applications, researchers and practitioners should consider the disadvantages of VR alongside its advantages and pay particular attention to the ethical guidelines that have been developed in recent years. The continuous development and increasingly broad use of VR in the clinical and forensic psychiatric field show that VR has the potential to become an established research and therapy instrument here as well.
14
Cognitive and Neuroanatomic Accounts of Referential Communication in Focal Dementia. eNeuro 2019; 6:ENEURO.0488-18.2019. [PMID: 31451606] [PMCID: PMC6794081] [DOI: 10.1523/eneuro.0488-18.2019]
Abstract
The primary function of language is to communicate—that is, to make individuals reach a state of mutual understanding about a particular thought or idea. Accordingly, daily communication is truly a task of social coordination. Indeed, successful interactions require individuals to (1) track and adopt a partner’s perspective and (2) continuously shift between the numerous elements relevant to the exchange. Here, we use a referential communication task to study the contributions of perspective taking and executive function to effective communication in nonaphasic human patients with behavioral variant frontotemporal dementia (bvFTD). Similar to previous work, the task was to identify a target object, embedded among an array of competitors, for an interlocutor. Results indicate that bvFTD patients are impaired relative to control subjects in selecting the optimal, precise response. Neuropsychological testing related this performance to mental set shifting, but not to working memory or inhibition. Follow-up analyses indicated that some bvFTD patients perform equally well as control subjects, while a second, clinically matched patient group performs significantly worse. Importantly, the neuropsychological profiles of these subgroups differed only in set shifting. Finally, structural MRI analyses related patient impairment to gray matter disease in orbitofrontal, medial prefrontal, and dorsolateral prefrontal cortex, all regions previously implicated in social cognition and overlapping those related to set shifting. Complementary white matter analyses implicated uncinate fasciculus, which carries projections between orbitofrontal and temporal cortices. Together, these findings demonstrate that impaired referential communication in bvFTD is cognitively related to set shifting, and anatomically related to a social-executive network including prefrontal cortices and uncinate fasciculus.
15
Heyselaar E, Segaert K. Memory encoding of syntactic information involves domain-general attentional resources: Evidence from dual-task studies. Q J Exp Psychol (Hove) 2019; 72:1285-1296. [DOI: 10.1177/1747021818801249]
Abstract
We investigate the type of attention (domain-general or language-specific) used during syntactic processing. We focus on syntactic priming: In this task, participants listen to a sentence that describes a picture (prime sentence), followed by a picture the participants need to describe (target sentence). We measure the proportion of times participants use the syntactic structure they heard in the prime sentence to describe the current target sentence as a measure of syntactic processing. Participants simultaneously conducted a motion-object tracking (MOT) task, a task commonly used to tax domain-general attentional resources. We manipulated the number of objects the participant had to track; we thus measured participants’ ability to process syntax while their attention is not taxed, slightly taxed, or overly taxed. Performance in the MOT task was significantly worse when conducted as a dual task compared with as a single task. We observed an inverted U-shaped curve on priming magnitude when conducting the MOT task concurrently with prime sentences (i.e., memory encoding), but no effect when conducted with target sentences (i.e., memory retrieval). Our results illustrate how, during the encoding of syntactic information, domain-general attention differentially affects syntactic processing, whereas during the retrieval of syntactic information, domain-general attention does not influence syntactic processing.
Affiliation(s)
- Evelien Heyselaar
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Katrien Segaert
- School of Psychology, University of Birmingham, Birmingham, UK
16
Abstract
This paper introduces virtual reality as an experimental method for the language sciences and provides a review of recent studies using the method to answer fundamental, psycholinguistic research questions. It is argued that virtual reality demonstrates that ecological validity and experimental control should not be conceived of as two extremes on a continuum, but rather as two orthogonal factors. Benefits of using virtual reality as an experimental method include that in a virtual environment, as in the real world, there is no artificial spatial divide between participant and stimulus. Moreover, virtual reality experiments do not necessarily have to include a repetitive trial structure or an unnatural experimental task. Virtual agents outperform experimental confederates in terms of the consistency and replicability of their behavior, allowing for reproducible science across participants and research labs. The main promise of virtual reality as a tool for the experimental language sciences, however, is that it shifts theoretical focus towards the interplay between different modalities (e.g., speech, gesture, eye gaze, facial expressions) in dynamic and communicative real-world environments, complementing studies that focus on one modality (e.g., speech) in isolation.
Affiliation(s)
- David Peeters
- Department of Communication and Cognition, Tilburg University, P.O. Box 90153, NL-5000 LE, Tilburg, The Netherlands.
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands.
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands.
17
Abstract
Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments.
18
Abstract
When we comprehend language, we often do this in rich settings where we can use many cues to understand what someone is saying. However, it has traditionally been difficult to design experiments with rich three-dimensional contexts that resemble our everyday environments, while maintaining control over the linguistic and nonlinguistic information that is available. Here we test the validity of combining electroencephalography (EEG) and virtual reality (VR) to overcome this problem. We recorded electrophysiological brain activity during language processing in a well-controlled three-dimensional virtual audiovisual environment. Participants were immersed in a virtual restaurant while wearing EEG equipment. In the restaurant, participants encountered virtual restaurant guests. Each guest was seated at a separate table with an object on it (e.g., a plate with salmon). The restaurant guest would then produce a sentence (e.g., “I just ordered this salmon.”). The noun in the spoken sentence could either match (“salmon”) or mismatch (“pasta”) the object on the table, creating a situation in which the auditory information was either appropriate or inappropriate in the visual context. We observed a reliable N400 effect as a consequence of the mismatch. This finding validates the combined use of VR and EEG as a tool to study the neurophysiological mechanisms of everyday language comprehension in rich, ecologically valid settings.
19
Assessing priming for prosodic representations: Speaking rate, intonational phrase boundaries, and pitch accenting. Mem Cognit 2019; 46:625-641. [PMID: 29349696 DOI: 10.3758/s13421-018-0789-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Recent work in the literature on prosody presents a puzzle: Some aspects of prosody can be primed in production (e.g., speech rate), but others cannot (e.g., intonational phrase boundaries, or IPBs). In three experiments we aimed to replicate these effects and identify the source of this dissociation. In Experiment 1 we investigated how speaking rate and the presence of an intonational boundary in a prime sentence presented auditorily affect the production of these aspects of prosody in a target sentence presented visually. Analyses of the targets revealed that participants' speaking rates, but not their production of boundaries, were affected by the priming manipulation. Experiment 2 verified whether speakers are more sensitive to IPBs when the boundaries provide disambiguating information, and in this different context replicated Experiment 1 in showing no IPB priming. Experiment 3 tested whether speakers are sensitive to another aspect of prosody, pitch accenting, in a similar paradigm. Again, we found no evidence that this manipulation affected pitch accenting in target sentences. These findings are consistent with earlier research and suggest that aspects of prosody that are paralinguistic (like speaking rate) may be more amenable to priming than are linguistic aspects of prosody (such as phrase boundaries and pitch accenting).
20
Eom H, Kim KK, Lee S, Hong YJ, Heo J, Kim JJ, Kim E. Development of Virtual Reality Continuous Performance Test Utilizing Social Cues for Children and Adolescents with Attention-Deficit/Hyperactivity Disorder. CYBERPSYCHOLOGY BEHAVIOR AND SOCIAL NETWORKING 2019; 22:198-204. [PMID: 30672714 DOI: 10.1089/cyber.2018.0377] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Virtual reality (VR) neuropsychological assessments have the potential to measure attention ecologically. We analyzed a newly developed VR continuous performance test (VR-CPT) for Korean children with attention-deficit/hyperactivity disorder (ADHD) and typically developing children (TDC). To identify specific features of a virtual environment that influence children's attention performance, we investigated whether the presence of a virtual teacher and social cues in the VR environment affects attention performance. A total of 38 participants (18 TDC and 20 ADHD children and adolescents) were recruited for VR-CPT testing. Bivariate correlational analysis was conducted to examine the associations between the results of the VR-CPT and ADHD questionnaires, to determine the capacity of the VR-CPT to mirror real-life attention behaviors. A mixed-design analysis of variance was conducted to compare the effects of the social aspects of the VR-CPT on attention performance across groups. In the ADHD group, there were significant associations between ADHD rating scores and the omission errors, commission errors, reaction time (RT), reaction time variability (RTV), and total accuracy of the VR-CPT. In addition, the ADHD group exhibited performance comparable to that of the TDC group on all measures of the VR-CPT. There was also a trend toward decreased RTV when a virtual teacher with social cues was present, compared with the equipment control condition, in the ADHD group. Performance on the VR-CPT was associated with behavioral measures of ADHD symptoms. Adding social aspects to a VR environment commonly encountered by children and adolescents has the potential to make a difference in the attention performance of youths with ADHD.
Affiliation(s)
- Hyojung Eom
- Brain Korea 21 PLUS Project for Medical Science, Yonsei University, Seoul, Republic of Korea; Institute of Behavioral Science in Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea
- Kwanguk Kenny Kim
- Department of Computer Science, Hanyang University, Seoul, South Korea
- Sungmi Lee
- Institute of Behavioral Science in Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea
- Yeon-Ju Hong
- Institute of Behavioral Science in Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea
- Jiwoong Heo
- Department of Computer Science, Hanyang University, Seoul, South Korea
- Jae-Jin Kim
- Brain Korea 21 PLUS Project for Medical Science, Yonsei University, Seoul, Republic of Korea; Institute of Behavioral Science in Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea; Department of Psychiatry, Yonsei University College of Medicine, Seoul, Republic of Korea
- Eunjoo Kim
- Institute of Behavioral Science in Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea; Department of Psychiatry, Yonsei University College of Medicine, Seoul, Republic of Korea
21
Heyselaar E, Hagoort P, Segaert K. How social opinion influences syntactic processing-An investigation using virtual reality. PLoS One 2017; 12:e0174405. [PMID: 28384163 PMCID: PMC5383374 DOI: 10.1371/journal.pone.0174405] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2016] [Accepted: 03/08/2017] [Indexed: 11/23/2022] Open
Abstract
The extent to which you adapt your grammatical choices to match those of your interlocutor (structural priming) can be influenced by your social opinion of that interlocutor. However, the direction and reliability of this effect are unclear, as different studies have reported seemingly contradictory results. We operationalized social perception as ratings of strangeness for different avatars in a virtual reality study. The use of avatars ensured maximal control over the interlocutor's behaviour and provided a clear dimension along which to manipulate social perceptions of the interlocutor. Our results suggest an inverted U-shaped curve in structural priming magnitude for passives as a function of strangeness: participants showed the largest priming effects for the intermediately strange avatars, with a decrease when interacting with the least or most strange avatars. The relationship between social perception and priming magnitude may thus be non-linear; there seems to be a 'happy medium' in strangeness that evokes the largest priming effect. We did not, however, find a significant interaction between priming magnitude and any other measure of social perception.
Affiliation(s)
- Evelien Heyselaar
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Peter Hagoort
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- Katrien Segaert
- School of Psychology, University of Birmingham, Birmingham, United Kingdom
22
Stevens CJ, Pinchbeck B, Lewis T, Luerssen M, Pfitzner D, Powers DMW, Abrahamyan A, Leung Y, Gibert G. Mimicry and expressiveness of an ECA in human-agent interaction: familiarity breeds content! Comput Cogn Sci 2016; 2:1. [PMID: 27980890 PMCID: PMC5125404 DOI: 10.1186/s40469-016-0008-2] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2015] [Accepted: 05/29/2016] [Indexed: 11/18/2022]
Abstract
Background: Two experiments investigated the effect of features of human behaviour on the quality of interaction with an Embodied Conversational Agent (ECA). Methods: In Experiment 1, visual prominence cues (head nod, eyebrow raise) of the ECA were manipulated to explore the hypothesis that likeability of an ECA increases as a function of interpersonal mimicry. In the context of an error detection task, the ECA either mimicked or did not mimic a head nod or brow raise that humans produced to give emphasis to a word when correcting the ECA's vocabulary. In Experiment 2, the effect of the presence versus absence of facial expressions on comprehension accuracy of two computer-driven ECA monologues was investigated. Results: In Experiment 1, evidence for a positive relationship between ECA mimicry and lifelikeness was obtained. However, a mimicking agent did not elicit more human gestures. In Experiment 2, expressiveness was associated with greater comprehension and higher ratings of humour and engagement. Conclusion: Influences of mimicry can be explained by visual and motor simulation, and by bidirectional links between similarity and liking. Cue redundancy and minimized cognitive load are potential explanations for expressiveness aiding comprehension. Electronic supplementary material: The online version of this article (doi:10.1186/s40469-016-0008-2) contains supplementary material, which is available to authorized users.
Affiliation(s)
- Catherine J Stevens
- MARCS Institute for Brain, Behaviour & Development, Western Sydney University, Locked Bag 1797, Penrith, NSW 2751, Australia; School of Social Sciences & Psychology, Western Sydney University, Penrith, Australia
- Bronwyn Pinchbeck
- MARCS Institute for Brain, Behaviour & Development, Western Sydney University, Locked Bag 1797, Penrith, NSW 2751, Australia; School of Social Sciences & Psychology, Western Sydney University, Penrith, Australia
- Trent Lewis
- Informatics and Engineering, Flinders University, Adelaide, Australia
- Martin Luerssen
- Informatics and Engineering, Flinders University, Adelaide, Australia
- Darius Pfitzner
- School of Business, Charles Darwin University, Darwin, Australia
- David M W Powers
- Informatics and Engineering, Flinders University, Adelaide, Australia
- Arman Abrahamyan
- MARCS Institute for Brain, Behaviour & Development, Western Sydney University, Locked Bag 1797, Penrith, NSW 2751, Australia; Psychology Department, Neurosciences Institute, Stanford University, Stanford, USA
- Yvonne Leung
- MARCS Institute for Brain, Behaviour & Development, Western Sydney University, Locked Bag 1797, Penrith, NSW 2751, Australia
- Guillaume Gibert
- MARCS Institute for Brain, Behaviour & Development, Western Sydney University, Locked Bag 1797, Penrith, NSW 2751, Australia; INSERM, U846, 18 avenue Doyen Lépine, 69500 Bron, France; Université de Lyon, Université Lyon 1, 69003 Lyon, France
23
Schoot L, Heyselaar E, Hagoort P, Segaert K. Does Syntactic Alignment Effectively Influence How Speakers Are Perceived by Their Conversation Partner? PLoS One 2016; 11:e0153521. [PMID: 27081856 PMCID: PMC4833301 DOI: 10.1371/journal.pone.0153521] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2015] [Accepted: 03/30/2016] [Indexed: 11/19/2022] Open
Abstract
The way we talk can influence how we are perceived by others. Whereas previous studies have started to explore the influence of social goals on syntactic alignment, in the current study, we additionally investigated whether syntactic alignment effectively influences conversation partners’ perception of the speaker. To this end, we developed a novel paradigm in which we can measure the effect of social goals on the strength of syntactic alignment for one participant (primed participant), while simultaneously obtaining usable social opinions about them from their conversation partner (the evaluator). In Study 1, participants’ desire to be rated favorably by their partner was manipulated by assigning pairs to a Control (i.e., primed participants did not know they were being evaluated) or Evaluation context (i.e., primed participants knew they were being evaluated). Surprisingly, results showed no significant difference in the strength with which primed participants aligned their syntactic choices with their partners’ choices. In a follow-up study, we used a Directed Evaluation context (i.e., primed participants knew they were being evaluated and were explicitly instructed to make a positive impression). However, again, there was no evidence supporting the hypothesis that participants’ desire to impress their partner influences syntactic alignment. With respect to the influence of syntactic alignment on perceived likeability by the evaluator, a negative relationship was reported in Study 1: the more primed participants aligned their syntactic choices with their partner, the more that partner decreased their likeability rating after the experiment. However, this effect was not replicated in the Directed Evaluation context of Study 2. 
In other words, our results do not support the conclusion that speakers’ desire to be liked affects how much they align their syntactic choices with their partner, nor is there convincing evidence that there is a reliable relationship between syntactic alignment and perceived likeability.
Affiliation(s)
- Lotte Schoot
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Evelien Heyselaar
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Peter Hagoort
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- Katrien Segaert
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- School of Psychology, University of Birmingham, Birmingham, United Kingdom