1. Dwivedi VD, Selvanayagam J. An electrophysiological investigation of referential communication. Brain Lang 2024;254:105438. PMID: 38943944. DOI: 10.1016/j.bandl.2024.105438.
Abstract
A key aspect of linguistic communication involves semantic reference to objects. Presently, we investigate neural responses at objects when reference is disrupted, e.g., "The connoisseur tasted *that wine"… vs. "…*that roof…" Without any previous linguistic context or visual gesture, use of the demonstrative determiner "that" renders interpretation at the noun incoherent. This incoherence is not based on knowledge of how the world plausibly works but instead on grammatical rules of reference. Whereas Event-Related Potential (ERP) responses to sentences such as "The connoisseur tasted the wine …" vs. "the roof" would result in an N400 effect, it is unclear what to expect for the doubly incoherent "…*that roof…". Results revealed an N400 effect, as expected, preceded by a P200 component (instead of the predicted P600 effect). These independent ERP components at the doubly violated condition support the notion that semantic interpretation can be partitioned into grammatical vs. contextual constructs.
Affiliation(s)
- Veena D Dwivedi
- Department of Psychology/Centre for Neuroscience, Brock University, 1812 Sir Isaac Brock Way, St. Catharines, ON L2S 3A1, Canada.
- Janahan Selvanayagam
- Centre for Neuroscience, Brock University, 1812 Sir Isaac Brock Way, St. Catharines, ON L2S 3A1, Canada
2. Antoine S, Grisoni L, Tomasello R, Pulvermüller F. The prediction potential indexes the meaning and communicative function of upcoming utterances. Cortex 2024;177:346-362. PMID: 38917725. DOI: 10.1016/j.cortex.2024.05.011.
Abstract
Prediction has a fundamental role in language processing. However, predictions can be made at different levels, and it is not always clear whether speech sounds, morphemes, words, meanings, or communicative functions are anticipated during dialogues. Previous studies reported specific brain signatures of communicative pragmatic function, in particular enhanced brain responses immediately after encountering an utterance used to request an object from a partner, but relatively smaller ones when the same utterance was used for naming the object. The present experiment now investigates whether similar neuropragmatic signatures emerge in recipients before the onset of upcoming utterances carrying different predictable communicative functions. Trials started with a context question and object pictures displayed on the screen, raising the participant's expectation that words from a specific semantic category (food or tool) would subsequently be used to either name or request one of the objects. Already 600 msec before utterance onset, a larger prediction potential was observed when a request was anticipated relative to naming expectation. As this result is congruent with the neurophysiological difference previously observed right after the critical utterance, the anticipatory brain activity may index predictions about the social-communicative function of upcoming utterances. In addition, we also found that the predictable semantic category of the upcoming word was likewise reflected in the anticipatory brain potential. Thus, the neurophysiological characteristics of the prediction potential can capture different types of upcoming linguistic information, including semantic and pragmatic aspects of an upcoming utterance and communicative action.
Affiliation(s)
- Salomé Antoine
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Germany.
- Luigi Grisoni
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Germany; Cluster of Excellence 'Matters of Activity. Image Space Material', Humboldt Universität zu Berlin, Berlin, Germany
- Rosario Tomasello
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Germany; Cluster of Excellence 'Matters of Activity. Image Space Material', Humboldt Universität zu Berlin, Berlin, Germany
- Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Germany; Cluster of Excellence 'Matters of Activity. Image Space Material', Humboldt Universität zu Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt Universität zu Berlin, Berlin, Germany; Einstein Center for Neurosciences, Berlin, Germany
3. Petersen EB. Investigating conversational dynamics in triads: Effects of noise, hearing impairment, and hearing aids. Front Psychol 2024;15:1289637. PMID: 38680286. PMCID: PMC11048959. DOI: 10.3389/fpsyg.2024.1289637.
Abstract
Communication is an important part of everyday life and requires a rapid and coordinated interplay between interlocutors to ensure a successful conversation. Here, we investigate whether increased communication difficulty caused by additional background noise, hearing impairment, and not providing adequate hearing-aid (HA) processing affected the dynamics of a group conversation between one hearing-impaired (HI) and two normal-hearing (NH) interlocutors. Free conversations were recorded from 25 triads communicating at low (50 dBC SPL) or high (75 dBC SPL) levels of canteen noise. In conversations at low noise levels, the HI interlocutor was either unaided or aided. In conversations at high noise levels, the HI interlocutor either experienced omnidirectional or directional sound processing. Results showed that HI interlocutors generally spoke more and initiated their turn faster, but with more variability, than the NH interlocutors. Increasing the noise level resulted in generally higher speech levels, but more so for the NH than for the HI interlocutors. Higher background noise also affected the HI interlocutors' ability to speak in longer turns. When the HI interlocutors were unaided at low noise levels, both HI and NH interlocutors spoke louder, while receiving directional sound processing at high levels of noise only reduced the speech level of the HI interlocutor. In conclusion, noise, hearing impairment, and hearing-aid processing mainly affected speech levels, while the remaining measures of conversational dynamics (FTO median, FTO IQR, turn duration, and speaking time) were unaffected. Hence, although experiencing large changes in communication difficulty, the conversational dynamics of the free triadic conversations remain relatively stable.
4. Jain S, Vo VA, Wehbe L, Huth AG. Computational Language Modeling and the Promise of In Silico Experimentation. Neurobiol Lang 2024;5:80-106. PMID: 38645624. PMCID: PMC11025654. DOI: 10.1162/nol_a_00101.
Abstract
Language neuroscience currently relies on two major experimental paradigms: controlled experiments using carefully hand-designed stimuli, and natural stimulus experiments. These approaches have complementary advantages which allow them to address distinct aspects of the neurobiology of language, but each approach also comes with drawbacks. Here we discuss a third paradigm (in silico experimentation using deep learning-based encoding models) that has been enabled by recent advances in cognitive computational neuroscience. This paradigm promises to combine the interpretability of controlled experiments with the generalizability and broad scope of natural stimulus experiments. We show four examples of simulating language neuroscience experiments in silico and then discuss both the advantages and caveats of this approach.
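As an illustration of the approach this abstract describes (not code from the paper), the core of an encoding-model in silico experiment can be sketched in a few lines: fit a regularized linear map from stimulus features to responses on naturalistic data, then probe the fitted model with new, hand-designed stimuli. All feature, response, and penalty values below are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: "embeddings" of stimulus words (n_samples x n_features)
# and recorded "brain responses" (n_samples x n_voxels).
X = rng.standard_normal((200, 16))
true_w = rng.standard_normal((16, 4))
Y = X @ true_w + 0.1 * rng.standard_normal((200, 4))

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + aI)^-1 X'Y."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

W = fit_ridge(X, Y)

# "In silico experiment": probe the fitted model with new, hand-designed
# stimuli instead of collecting new brain data.
X_probe = rng.standard_normal((10, 16))
predicted_responses = X_probe @ W
print(predicted_responses.shape)  # (10, 4)
```

In practice the features would come from a deep language model and the responses from fMRI or ECoG recordings; the closed-form ridge solve here stands in for whatever regularized regression the modeler actually uses.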
Affiliation(s)
- Shailee Jain
- Department of Computer Science, University of Texas at Austin, Austin, TX, USA
- Vy A. Vo
- Brain-Inspired Computing Lab, Intel Labs, Hillsboro, OR, USA
- Leila Wehbe
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Alexander G. Huth
- Department of Computer Science, University of Texas at Austin, Austin, TX, USA
- Department of Neuroscience, University of Texas at Austin, Austin, TX, USA
5. Ter Bekke M, Drijvers L, Holler J. Hand Gestures Have Predictive Potential During Conversation: An Investigation of the Timing of Gestures in Relation to Speech. Cogn Sci 2024;48:e13407. PMID: 38279899. DOI: 10.1111/cogs.13407.
Abstract
During face-to-face conversation, transitions between speaker turns are incredibly fast. These fast turn exchanges seem to involve next speakers predicting upcoming semantic information, such that next-turn planning can begin before a current turn is complete. Given that face-to-face conversation also involves the use of communicative bodily signals, an important question is how bodily signals such as co-speech hand gestures play into these processes of prediction and fast responding. In this corpus study, we found that hand gestures that depict or refer to semantic information started before the corresponding information in speech, which held for both the onset of the gesture as a whole and the onset of the stroke (the most meaningful part of the gesture). This early timing potentially allows listeners to use the gestural information to predict the corresponding semantic information to be conveyed in speech. Moreover, we provided further evidence that questions with gestures got faster responses than questions without gestures. However, we found no evidence for the idea that how much a gesture precedes its lexical affiliate (i.e., its predictive potential) relates to how fast responses were given. The findings presented here highlight the importance of the temporal relation between speech and gesture and help to illuminate the potential mechanisms underpinning multimodal language processing during face-to-face conversation.
Affiliation(s)
- Marlijn Ter Bekke
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Max Planck Institute for Psycholinguistics
- Linda Drijvers
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Max Planck Institute for Psycholinguistics
- Judith Holler
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Max Planck Institute for Psycholinguistics
6. Nota N, Trujillo JP, Jacobs V, Holler J. Facilitating question identification through natural intensity eyebrow movements in virtual avatars. Sci Rep 2023;13:21295. PMID: 38042876. PMCID: PMC10693605. DOI: 10.1038/s41598-023-48586-4.
Abstract
In conversation, recognizing social actions (similar to 'speech acts') early is important to quickly understand the speaker's intended message and to provide a fast response. Fast turns are typical for fundamental social actions like questions, since a long gap can indicate a dispreferred response. In multimodal face-to-face interaction, visual signals may contribute to this fast dynamic. The face is an important source of visual signalling, and previous research found that prevalent facial signals such as eyebrow movements facilitate the rapid recognition of questions. We aimed to investigate whether early eyebrow movements with natural movement intensities facilitate question identification, and whether specific intensities are more helpful in detecting questions. Participants were instructed to view videos of avatars where the presence of eyebrow movements (eyebrow frown or raise vs. no eyebrow movement) was manipulated, and to indicate whether the utterance in the video was a question or statement. Results showed higher accuracies for questions with eyebrow frowns, and faster response times for questions with eyebrow frowns and eyebrow raises. No additional effect was observed for the specific movement intensity. This suggests that eyebrow movements that are representative of naturalistic multimodal behaviour facilitate question recognition.
Affiliation(s)
- Naomi Nota
- Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, The Netherlands.
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands.
- James P Trujillo
- Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Vere Jacobs
- Faculty of Arts, Radboud University, Nijmegen, The Netherlands
- Judith Holler
- Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
7. Cao N, Zhou L, Zhang S. The Effects of Social Status and Imposition on the Comprehension of Refusals in Chinese: An ERP Study. J Psycholinguist Res 2023;52:1989-2005. PMID: 37347389. DOI: 10.1007/s10936-023-09984-x.
Abstract
This study aims to examine how real-time processing of information about the social status of interlocutors (high vs. low) and the imposition of making a refusal by manipulating the indirectness of invitation forms (declining direct invitations vs. declining indirect invitations) affects the interpretation of refusals in Chinese. The event-related potential (ERP) results showed that high-status invitees who decline invitations from low-status inviters elicited weaker N400 effects followed by late mitigated negative effects, while high imposition refusals elicited stronger N400 effects followed by increased late negativities. The two factors of social status and imposition functioned independently during the comprehension of refusal utterances. These findings suggest that individuals take the social status of interlocutors and the imposition of making a refusal into consideration as an utterance unfolds, while face-threatening contexts create inferential difficulties for reinterpreting the pragmatic implications of an utterance.
Affiliation(s)
- Ningning Cao
- School of Foreign Languages, Northeast Normal University, Changchun, 130021, China
- Ling Zhou
- School of Foreign Languages, Northeast Normal University, Changchun, 130021, China
- Shaojie Zhang
- School of Foreign Languages, Northeast Normal University, Changchun, 130021, China
8. Nota N, Trujillo JP, Holler J. Conversational Eyebrow Frowns Facilitate Question Identification: An Online Study Using Virtual Avatars. Cogn Sci 2023;47:e13392. PMID: 38058215. DOI: 10.1111/cogs.13392.
Abstract
Conversation is a time-pressured environment. Recognizing a social action (the "speech act," such as a question requesting information) early is crucial in conversation to quickly understand the intended message and plan a timely response. Fast turns between interlocutors are especially relevant for responses to questions since a long gap may be meaningful by itself. Human language is multimodal, involving speech as well as visual signals from the body, including the face. But little is known about how conversational facial signals contribute to the communication of social actions. Some of the most prominent facial signals in conversation are eyebrow movements. Previous studies found links between eyebrow movements and questions, suggesting that these facial signals could contribute to the rapid recognition of questions. Therefore, we aimed to investigate whether early eyebrow movements (eyebrow frown or raise vs. no eyebrow movement) facilitate question identification. Participants were instructed to view videos of avatars where the presence of eyebrow movements accompanying questions was manipulated. Their task was to indicate whether the utterance was a question or a statement as accurately and quickly as possible. Data were collected using the online testing platform Gorilla. Results showed higher accuracies and faster response times for questions with eyebrow frowns, suggesting a facilitative role of eyebrow frowns for question identification. This means that facial signals can critically contribute to the communication of social actions in conversation by signaling social action-specific visual information and providing visual cues to speakers' intentions.
Affiliation(s)
- Naomi Nota
- Donders Institute for Brain, Cognition, and Behaviour, Nijmegen
- Max Planck Institute for Psycholinguistics, Nijmegen
- James P Trujillo
- Donders Institute for Brain, Cognition, and Behaviour, Nijmegen
- Max Planck Institute for Psycholinguistics, Nijmegen
- Judith Holler
- Donders Institute for Brain, Cognition, and Behaviour, Nijmegen
- Max Planck Institute for Psycholinguistics, Nijmegen
9. Linders GM, Louwerse MM. Surface and Contextual Linguistic Cues in Dialog Act Classification: A Cognitive Science View. Cogn Sci 2023;47:e13367. PMID: 37867372. DOI: 10.1111/cogs.13367.
Abstract
What role do linguistic cues on a surface and contextual level have in identifying the intention behind an utterance? Drawing on the wealth of studies and corpora from the computational task of dialog act classification, we studied this question from a cognitive science perspective. We first reviewed the role of linguistic cues in dialog act classification studies that evaluated model performance on three of the most commonly used English dialog act corpora. Findings show that frequency-based, machine learning, and deep learning methods all yield similar performance. Classification accuracies, moreover, generally do not explain which specific cues yield high performance. Using a cognitive science approach, in two analyses, we systematically investigated the role of cues in the surface structure of the utterance and cues of the surrounding context individually and combined. By comparing the explained variance, rather than the prediction accuracy, of these cues in a logistic regression model, we found (1) that while surface and contextual linguistic cues can complement each other, surface linguistic cues form the backbone in human dialog act identification, (2) that word frequency statistics are particularly important for the dialog act, and (3) that these trends are similar across corpora, despite differences in the type of dialog, corpus setup, and dialog act tagset. The importance of surface linguistic cues in dialog act classification sheds light on how both computers and humans take advantage of these cues in speech act recognition.
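To make the comparison described above concrete, here is a toy sketch (not the paper's analysis, corpora, or tagset) of comparing nested logistic regression models by explained variance, using McFadden's pseudo-R², rather than by prediction accuracy. The "surface" and "contextual" cues and their effect sizes are invented binary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: classify utterances as question (1) vs. statement (0) from a
# hypothetical surface cue (e.g., presence of a wh-word) and a hypothetical
# contextual cue (e.g., the previous utterance's dialog act).
n = 1000
surface = rng.integers(0, 2, n).astype(float)
context = rng.integers(0, 2, n).astype(float)
logit = -1.0 + 2.5 * surface + 0.8 * context
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

def fitted_log_likelihood(X, y, steps=2000, lr=0.1):
    """Fit logistic regression (intercept included) by gradient ascent
    and return the log-likelihood of the fitted model."""
    X = np.column_stack([np.ones(len(y)), X])
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    p = 1 / (1 + np.exp(-X @ w))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

ll_null = fitted_log_likelihood(np.zeros((n, 0)), y)          # intercept only
ll_surface = fitted_log_likelihood(surface[:, None], y)
ll_full = fitted_log_likelihood(np.column_stack([surface, context]), y)

# McFadden's pseudo-R^2: proportional reduction in log-likelihood
# relative to the intercept-only model.
pseudo_r2 = lambda ll: 1 - ll / ll_null
print(f"surface only: {pseudo_r2(ll_surface):.3f}")
print(f"surface + context: {pseudo_r2(ll_full):.3f}")
```

Comparing nested models this way asks how much each cue set reduces uncertainty about the dialog act, which is more informative than raw accuracy when cues overlap.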
Affiliation(s)
- Guido M Linders
- Department of Cognitive Science & Artificial Intelligence, Tilburg University
- Department of Comparative Language Science, University of Zurich
- Max M Louwerse
- Department of Cognitive Science & Artificial Intelligence, Tilburg University
10. Nota N, Trujillo JP, Holler J. Specific facial signals associate with categories of social actions conveyed through questions. PLoS One 2023;18:e0288104. PMID: 37467253. DOI: 10.1371/journal.pone.0288104.
Abstract
The early recognition of fundamental social actions, like questions, is crucial for understanding the speaker's intended message and planning a timely response in conversation. Questions themselves may express more than one social action category (e.g., an information request "What time is it?", an invitation "Will you come to my party?" or a criticism "Are you crazy?"). Although human language use occurs predominantly in a multimodal context, prior research on social actions has mainly focused on the verbal modality. This study breaks new ground by investigating how conversational facial signals may map onto the expression of different types of social actions conveyed through questions. The distribution, timing, and temporal organization of facial signals across social actions was analysed in a rich corpus of naturalistic, dyadic face-to-face Dutch conversations. These social actions were: Information Requests, Understanding Checks, Self-Directed questions, Stance or Sentiment questions, Other-Initiated Repairs, Active Participation questions, questions for Structuring, Initiating or Maintaining Conversation, and Plans and Actions questions. This is the first study to reveal differences in distribution and timing of facial signals across different types of social actions. The findings raise the possibility that facial signals may facilitate social action recognition during language processing in multimodal face-to-face interaction.
Affiliation(s)
- Naomi Nota
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- James P Trujillo
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Judith Holler
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
11. Zhang X, Pan X, Yang X, Yang Y. Conventionality determines the time course of indirect replies comprehension: An ERP study. Brain Lang 2023;239:105253. PMID: 37001318. DOI: 10.1016/j.bandl.2023.105253.
Abstract
Indirect language comprehension requires decoding both the literal meaning and the intended meaning of an utterance, in which pragmatic inference is involved. This study tests the role of conventionality in the time course of indirect reply processing by comparing conventional and non-conventional indirect replies with direct replies. We constructed discourses which consist of a context and a dialogue with one question (e.g., May I buy a necklace for you) and one reply (e.g., I really have too many). The reply utterance was segmented into three phrases and presented sequentially for EEG recording, with the subject as the first phrase (e.g., I), the adverbial as the second phrase (e.g., really), and the predicate as the third phrase (e.g., have too many). Our results showed that for conventional indirect replies, the second phrase elicited a larger anterior negativity, and the third phrase elicited a larger anterior N400, compared with those in direct replies. By contrast, for non-conventional indirect replies, only the third phrase elicited a larger late negativity than the direct replies. These findings suggest that conventionality determines the time course of the pragmatic inferences for the most relevant interpretation during indirect reply comprehension.
Affiliation(s)
- Xiuping Zhang
- School of Psychology, Beijing Language and Culture University, Beijing 100083, China
- Xiaoxi Pan
- School of Psychology, Beijing Language and Culture University, Beijing 100083, China
- Xiaohong Yang
- Department of Psychology, Renmin University of China, Beijing 100872, China; Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University, Xuzhou, China
- Yufang Yang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing 100101, China
12. Tomasello R. Linguistic signs in action: The neuropragmatics of speech acts. Brain Lang 2023;236:105203. PMID: 36470125. PMCID: PMC9856589. DOI: 10.1016/j.bandl.2022.105203.
Abstract
What makes human communication exceptional is the ability to grasp a speaker's intentions beyond what is said verbally. How the brain processes communicative functions is one of the central concerns of the neurobiology of language and pragmatics. Linguistic-pragmatic theories define these functions as speech acts, and various pragmatic traits characterise them at the levels of propositional content, action sequence structure, related commitments and social aspects. Here I discuss recent neurocognitive studies, which have shown that the use of identical linguistic signs in conveying different communicative functions elicits distinct and ultra-rapid neural responses. Interestingly, cortical areas show differential involvement underlying various pragmatic features related to theory-of-mind, emotion and action for specific speech acts expressed with the same utterances. Drawing on a neurocognitive model, I posit that understanding speech acts involves the expectation of typical partner follow-up actions and that this predictive knowledge is immediately reflected in mind and brain.
Affiliation(s)
- Rosario Tomasello
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, 14195 Berlin, Germany; Cluster of Excellence 'Matters of Activity. Image Space Material', Humboldt Universität zu Berlin, 10099 Berlin, Germany.
13. Corps RE. What do we know about the mechanisms of response planning in dialog? Psychol Learn Motiv 2023. DOI: 10.1016/bs.plm.2023.02.002.
14. Bendtz K, Ericsson S, Schneider J, Borg J, Bašnáková J, Uddén J. Individual Differences in Indirect Speech Act Processing Found Outside the Language Network. Neurobiol Lang 2022;3:287-317. PMID: 37215561. PMCID: PMC10158615. DOI: 10.1162/nol_a_00066.
Abstract
Face-to-face communication requires skills that go beyond core language abilities. In dialogue, we routinely make inferences beyond the literal meaning of utterances and distinguish between different speech acts based on, e.g., contextual cues. It is, however, not known whether such communicative skills potentially overlap with core language skills or other capacities, such as theory of mind (ToM). In this functional magnetic resonance imaging (fMRI) study we investigate these questions by capitalizing on individual variation in pragmatic skills in the general population. Based on behavioral data from 199 participants, we selected participants with higher vs. lower pragmatic skills for the fMRI study (N = 57). In the scanner, participants listened to dialogues including a direct or an indirect target utterance. The paradigm allowed participants at the whole group level to (passively) distinguish indirect from direct speech acts, as evidenced by a robust activity difference between these speech acts in an extended language network including ToM areas. Individual differences in pragmatic skills modulated activation in two additional regions outside the core language regions (one cluster in the left lateral parietal cortex and intraparietal sulcus and one in the precuneus). The behavioral results indicate segregation of pragmatic skill from core language and ToM. In conclusion, contextualized and multimodal communication requires a set of interrelated pragmatic processes that are neurocognitively segregated: (1) from core language and (2) partly from ToM.
Affiliation(s)
- Julia Borg
- Department of Psychology, Stockholm University, Sweden
- Jana Bašnáková
- Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands
- Institute of Experimental Psychology, Centre of Social and Psychological Sciences SAS, Slovakia
- Julia Uddén
- Department of Psychology, Stockholm University, Sweden
- Department of Linguistics, Stockholm University, Sweden
15. Wang M, Tokimoto S, Song G, Ueno T, Koizumi M, Kiyama S. Different Neural Responses for Unfinished Sentence as a Conventional Indirect Refusal Between Native and Non-native Speakers: An Event-Related Potential Study. Front Psychol 2022;13:806023. PMID: 35310221. PMCID: PMC8929272. DOI: 10.3389/fpsyg.2022.806023.
Abstract
Refusal is considered a face-threatening act (FTA), since it contradicts the inviter’s expectations. In the case of Japanese, native speakers (NS) are known to prefer to leave sentences unfinished for a conventional indirect refusal. Successful comprehension of this indirect refusal depends on whether the addressee is fully conventionalized to the preference for syntactic unfinishedness so that they can identify the true intention of the refusal. Then, non-native speakers (NNS) who are not fully accustomed to the convention may be confused by the indirect style. In the present study, we used event-related potentials (ERPs) of electroencephalography in an attempt to differentiate the neural substrates for perceiving unfinished sentences in a conventionalized indirect refusal as an FTA between NS and NNS, in terms of the unfinishedness and indirectness of the critical sentence. In addition, we examined the effects of individual differences in mentalization, or the theory of mind, which refers to the ability to infer the mental states of others. We found several different ERP effects for these refusals between NS and NNS. NNS induced stronger P600 effects for the unfinishedness of the refusal sentences, suggesting their perceived syntactic anomaly. This was not evoked in NS. NNS also revealed the effects of N400 and P300 for the indirectness of refusal sentences, which can be interpreted as their increased processing load for pragmatic processing in the inexperienced contextual flow. We further found that the NNS’s individual mentalizing ability correlates with the effect of N400 mentioned above, indicating that lower mentalizers evoke higher N400 for indirect refusal. NS, on the contrary, did not yield these effects reflecting the increased pragmatic processing load. Instead, they evoked earlier ERPs of early posterior negativity (EPN) and P200, both of which are known as indices of emotional processing, for finished sentences of refusal than for unfinished ones. 
We interpreted these effects as a dispreference among NS for finished sentences used to realize an FTA, given that unfinished sentences are considered more polite and more conventionalized in Japanese social encounters. Overall, these findings provide evidence that a syntactic anomaly inherent in a cultural convention, as well as individual mentalizing ability, plays an important role in understanding an indirect speech act of face-threatening refusal.
Collapse
Affiliation(s)
- Min Wang
- Department of Linguistics, Graduate School of Arts and Letters, Tohoku University, Sendai, Japan
| | - Shingo Tokimoto
- Department of English Language Studies, Mejiro University, Tokyo, Japan
| | - Ge Song
- Department of Linguistics, Graduate School of Arts and Letters, Tohoku University, Sendai, Japan
| | - Takashi Ueno
- Department of Social Welfare, Faculty of Comprehensive Welfare, Tohoku Fukushi University, Sendai, Japan
| | - Masatoshi Koizumi
- Department of Linguistics, Graduate School of Arts and Letters, Tohoku University, Sendai, Japan
| | - Sachiko Kiyama
- Department of Linguistics, Graduate School of Arts and Letters, Tohoku University, Sendai, Japan
| |
Collapse
|
16
|
Corps RE, Knudsen B, Meyer AS. Overrated gaps: Inter-speaker gaps provide limited information about the timing of turns in conversation. Cognition 2022; 223:105037. [PMID: 35123218 DOI: 10.1016/j.cognition.2022.105037] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Revised: 01/18/2022] [Accepted: 01/20/2022] [Indexed: 11/03/2022]
Abstract
Corpus analyses have shown that turn-taking in conversation is much faster than laboratory studies of speech planning would predict. To explain fast turn-taking, Levinson and Torreira (2015) proposed that speakers are highly proactive: They begin to plan a response to their interlocutor's turn as soon as they have understood its gist, and launch this planned response when the turn-end is imminent. Thus, fast turn-taking is possible because speakers use the time while their partner is talking to plan their own utterance. In the present study, we asked how much time upcoming speakers actually have to plan their utterances. Following earlier psycholinguistic work, we used transcripts of spoken conversations in Dutch, German, and English. These transcripts consisted of segments, which are continuous stretches of speech by one speaker. In the psycholinguistic and phonetic literature, such segments have often been used as proxies for turns. We found that in all three corpora, large proportions of the segments comprised only one or two words, which on our estimate does not give the next speaker enough time to fully plan a response. Further analyses showed that speakers indeed often did not respond to the immediately preceding segment of their partner, but continued an earlier segment of their own. More generally, our findings suggest that speech segments derived from transcribed corpora do not necessarily correspond to turns, and the gaps between speech segments therefore provide only limited information about the planning and timing of turns.
Collapse
Affiliation(s)
- Ruth E Corps
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands.
| | - Birgit Knudsen
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
| | | |
Collapse
|
17
|
Petersen EB, MacDonald EN, Josefine Munch Sørensen A. The Effects of Hearing-Aid Amplification and Noise on Conversational Dynamics Between Normal-Hearing and Hearing-Impaired Talkers. Trends Hear 2022; 26:23312165221103340. [PMID: 35862280 PMCID: PMC9310272 DOI: 10.1177/23312165221103340] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
There is a long-standing tradition of assessing hearing-aid benefits using lab-based speech intelligibility tests. Moving towards a more everyday-like scenario, the current study investigated the effects of hearing-aid amplification and noise on face-to-face communication between two conversational partners. Eleven pairs, each consisting of a younger normal-hearing (NH) and an older hearing-impaired (HI) participant, solved spot-the-difference tasks while their conversations were recorded. In a two-block randomized design, the tasks were solved in quiet or in noise, both with and without the HI participant receiving hearing-aid amplification with active occlusion cancellation. In the presence of 70 dB SPL babble noise, participants had fewer, slower, and less well-timed turn-starts, while speaking louder with longer inter-pausal units (IPUs, stretches of continuous speech surrounded by silence) and reducing their articulation rates. All these changes are indicative of increased communication effort. The timing of turn-starts by the HI participants exhibited more variability than that of their NH conversational partners. In the presence of background noise, the timing of turn-starts by the HI participants became even more variable, and their NH partners spoke louder. When the HI participants were provided with hearing-aid amplification, their timing of turn-starts became faster, they increased their articulation rate, and they produced shorter IPUs, all indicating reduced communication effort. In conclusion, measures of conversational dynamics showed that background noise increased communication effort, especially for the HI participants, and that providing hearing-aid amplification caused the HI participants to behave more like their NH conversational partners, especially in quiet situations.
Collapse
Affiliation(s)
| | - Ewen N MacDonald
- Hearing Systems Group, Dept. of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark; Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
| | - A Josefine Munch Sørensen
- Hearing Systems Group, Dept. of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
| |
Collapse
|
18
|
Tomasello R, Grisoni L, Boux I, Sammler D, Pulvermüller F. OUP accepted manuscript. Cereb Cortex 2022; 32:4885-4901. [PMID: 35136980 PMCID: PMC9626830 DOI: 10.1093/cercor/bhab522] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2021] [Revised: 11/16/2021] [Accepted: 12/17/2021] [Indexed: 11/20/2022] Open
Abstract
During conversations, speech prosody provides important clues about the speaker's communicative intentions. In many languages, a rising vocal pitch at the end of a sentence typically expresses a question function, whereas a falling pitch suggests a statement. Here, the neurophysiological bases of intonation and speech act understanding were investigated with high-density electroencephalography (EEG) to determine whether prosodic features are reflected at the neurophysiological level. Already approximately 100 ms after the sentence-final word differing in prosody, questions and statements expressed with the same sentences led to different neurophysiological activity recorded in the event-related potential. Interestingly, low-pass filtered sentences and acoustically matched nonvocal musical signals failed to show any neurophysiological dissociations, thus suggesting that the physical intonation alone cannot explain this modulation. Our results show rapid neurophysiological indexes of prosodic communicative information processing that emerge only when pragmatic and lexico-semantic information are fully expressed. The early enhancement of question-related activity compared with statements was due to sources in the articulatory-motor region, which may reflect the richer action knowledge immanent to questions, namely the expectation of the partner action of answering the question. The present findings demonstrate a neurophysiological correlate of prosodic communicative information processing, which enables humans to rapidly detect and understand speaker intentions in linguistic interactions.
Collapse
Affiliation(s)
- Rosario Tomasello
- Address correspondence to Rosario Tomasello, Brain Language Laboratory, Department of Philosophy and Humanities, WE4, Freie Universität Berlin, Habelschwerdter Allee 45, 14195 Berlin, Germany.
| | - Luigi Grisoni
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, 14195 Berlin, Germany
- Cluster of Excellence ‘Matters of Activity. Image Space Material’, Humboldt Universität zu Berlin, 10099 Berlin, Germany
| | - Isabella Boux
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, 14195 Berlin, Germany
- Berlin School of Mind and Brain, Humboldt Universität zu Berlin, 10117 Berlin, Germany
- Einstein Center for Neurosciences, 10117 Berlin, Germany
| | - Daniela Sammler
- Research Group ‘Neurocognition of Music and Language’, Max Planck Institute for Empirical Aesthetics, 60322 Frankfurt am Main, Germany
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany
| | - Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, 14195 Berlin, Germany
- Cluster of Excellence ‘Matters of Activity. Image Space Material’, Humboldt Universität zu Berlin, 10099 Berlin, Germany
- Berlin School of Mind and Brain, Humboldt Universität zu Berlin, 10117 Berlin, Germany
- Einstein Center for Neurosciences, 10117 Berlin, Germany
| |
Collapse
|
19
|
Bögels S, Torreira F. Turn-end Estimation in Conversational Turn-taking: The Roles of Context and Prosody. DISCOURSE PROCESSES 2021. [DOI: 10.1080/0163853x.2021.1986664] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Sara Bögels
- Department of Communication and Cognition, Tilburg University
- Language and Cognition Department, Max Planck Institute for Psycholinguistics
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University
| | - Francisco Torreira
- Language and Cognition Department, Max Planck Institute for Psycholinguistics
- Department of Linguistics, McGill University
| |
Collapse
|
20
|
Ji L. When politeness processing encounters failed syntactic/semantic processing. Acta Psychol (Amst) 2021; 219:103391. [PMID: 34412023 DOI: 10.1016/j.actpsy.2021.103391] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2020] [Revised: 08/01/2021] [Accepted: 08/08/2021] [Indexed: 11/15/2022] Open
Abstract
Previous studies have elucidated the neural mechanisms of syntactic/semantic processing and pragmatic processing. However, the exact mechanisms by which these two aspects of processing interact during language comprehension remain unknown. In this event-related brain potential study, we examined the interaction between politeness processing and local syntactic/semantic processing of a phrase. We used a full factorial design that crossed politeness consistency with local syntactic/semantic coherence. Politeness violations elicited a P200 effect in the 190-320 ms range and a centro-parietally distributed positivity in the 360-866 ms range, whereas pure local syntactic/semantic violations elicited a broadly distributed positivity in the 362-868 ms range. Crucially, we found that event-related potential responses elicited by combined politeness and syntactic/semantic violations resembled those elicited by syntactic/semantic violations alone. These results indicate that local syntactic/semantic processing has functional primacy over politeness processing. Furthermore, our results support the blocking hypothesis from a politeness processing perspective rather than the independent hypothesis.
Collapse
Affiliation(s)
- Liyan Ji
- School of Psychological and Cognitive Sciences, Peking University, Beijing 100048, China; School of Psychology, Fujian Normal University, Fuzhou 350117, China.
| |
Collapse
|
21
|
Nota N, Trujillo JP, Holler J. Facial Signals and Social Actions in Multimodal Face-to-Face Interaction. Brain Sci 2021; 11:1017. [PMID: 34439636 PMCID: PMC8392358 DOI: 10.3390/brainsci11081017] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Revised: 07/07/2021] [Accepted: 07/26/2021] [Indexed: 01/30/2023] Open
Abstract
In a conversation, recognising the speaker's social action (e.g., a request) early may help potential next speakers understand the intended message quickly and plan a timely response. Human language is multimodal, and several studies have demonstrated the contribution of the body to communication. However, comparatively few studies have investigated (non-emotional) conversational facial signals, and very little is known about how they contribute to the communication of social actions. Therefore, we investigated how facial signals map onto the expression of two fundamental social actions in conversation: asking questions and providing responses. We studied the distribution and timing of 12 facial signals across 6778 questions and 4553 responses, annotated holistically in a corpus of 34 dyadic face-to-face Dutch conversations. Moreover, we analysed facial signal clustering to find out whether there are specific combinations of facial signals within questions or responses. Results showed a high proportion of facial signals, with a qualitatively different distribution in questions versus responses. Additionally, clusters of facial signals were identified. Most facial signals occurred early in the utterance, and had earlier onsets in questions. Thus, facial signals may critically contribute to the communication of social actions in conversation by providing social action-specific visual information.
Collapse
Affiliation(s)
- Naomi Nota
- Donders Institute for Brain, Cognition, and Behaviour, 6525 AJ Nijmegen, The Netherlands; (J.P.T.); (J.H.)
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
| | - James P. Trujillo
- Donders Institute for Brain, Cognition, and Behaviour, 6525 AJ Nijmegen, The Netherlands; (J.P.T.); (J.H.)
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
| | - Judith Holler
- Donders Institute for Brain, Cognition, and Behaviour, 6525 AJ Nijmegen, The Netherlands; (J.P.T.); (J.H.)
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
| |
Collapse
|
22
|
Licea-Haquet GL, Reyes-Aguilar A, Alcauter S, Giordano M. The Neural Substrate of Speech Act Recognition. Neuroscience 2021; 471:102-114. [PMID: 34332015 DOI: 10.1016/j.neuroscience.2021.07.020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2021] [Revised: 07/20/2021] [Accepted: 07/21/2021] [Indexed: 11/30/2022]
Abstract
Pragmatic competence demands linguistic, but also communicative, social and cognitive competence. Successful use of language in social interaction requires mutual understanding of the speaker's intentions; without it, a conversation cannot proceed. The term speech act refers to what a speaker intends to accomplish when saying something. The purpose of this study was to contribute to the identification of the neural substrate of speech act recognition and to the characterization of the cognitive processes that may be involved. The recognition of speech acts resulted in greater activation of frontal regions, the precuneus and the posterior cingulate gyrus. Of all the cognitive and behavioral measures obtained, only the scores in mental flexibility predicted the change in blood oxygen level dependent (BOLD) signal in the precuneus. These results support the idea that speech act recognition requires the inference of intention and executive functions, including memory, and entails the activation of areas of social cognition that participate in several brain networks, i.e., the Intention Processing, Default Mode and Theory of Mind networks, as well as areas involved in planning and guiding behavior.
Collapse
Affiliation(s)
- G L Licea-Haquet
- Departamento de Neurobiología Conductual y Cognitiva, Instituto de Neurobiología UNAM Campus Juriquilla, Querétaro, Mexico
| | - A Reyes-Aguilar
- Laboratorio de Neurocognición, Facultad de Psicología, Universidad Nacional Autónoma de México, CDMX, Mexico
| | - S Alcauter
- Departamento de Neurobiología Conductual y Cognitiva, Instituto de Neurobiología UNAM Campus Juriquilla, Querétaro, Mexico
| | - M Giordano
- Departamento de Neurobiología Conductual y Cognitiva, Instituto de Neurobiología UNAM Campus Juriquilla, Querétaro, Mexico.
| |
Collapse
|
23
|
Trujillo JP, Holler J. The Kinematics of Social Action: Visual Signals Provide Cues for What Interlocutors Do in Conversation. Brain Sci 2021; 11:996. [PMID: 34439615 PMCID: PMC8393665 DOI: 10.3390/brainsci11080996] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Revised: 07/07/2021] [Accepted: 07/23/2021] [Indexed: 11/17/2022] Open
Abstract
During natural conversation, people must quickly understand the meaning of what the other speaker is saying. This concerns not just the semantic content of an utterance, but also the social action (i.e., what the utterance is doing: requesting information, offering, evaluating, checking mutual understanding, etc.) that the utterance is performing. The multimodal nature of human language raises the question of whether visual signals may contribute to the rapid processing of such social actions. However, while previous research has shown that how we move reveals the intentions underlying instrumental actions, we do not know whether the intentions underlying fine-grained social actions in conversation are also revealed in our bodily movements. Using a corpus of dyadic conversations combined with manual annotation and motion tracking, we analyzed the kinematics of the torso, head, and hands during the asking of questions. Manual annotation categorized these questions into six more fine-grained social action types (i.e., request for information, other-initiated repair, understanding check, stance or sentiment, self-directed, active participation). We demonstrate, for the first time, that the kinematics of the torso, head and hands differ between some of these social action categories, based on a 900 ms time window that captures movements starting slightly prior to or within 600 ms after utterance onset. These results provide novel insights into the extent to which our intentions shape the way that we move, and provide new avenues for understanding how this phenomenon may facilitate the fast communication of meaning in conversational interaction and social action.
Collapse
Affiliation(s)
- James P. Trujillo
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6525 GD Nijmegen, The Netherlands;
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands
| | - Judith Holler
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6525 GD Nijmegen, The Netherlands;
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands
| |
Collapse
|
24
|
Schulze C, Buttelmann D. Children understand communication intuitively, but indirect communication makes them think twice-Evidence from pupillometry and looking patterns. J Exp Child Psychol 2021; 206:105105. [PMID: 33636635 DOI: 10.1016/j.jecp.2021.105105] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2020] [Revised: 01/12/2021] [Accepted: 01/18/2021] [Indexed: 11/17/2022]
Abstract
Interpreting a speaker's communicative acts is a challenge children face constantly in everyday life. In doing so, they seem to understand direct communicative acts more easily than indirect communicative acts. The current study investigated which step in the processing of communicative acts might cause difficulties in understanding indirect communication. To assess the developmental trajectory of this phenomenon, we tested 3- and 5-year-old children (N = 105) using eye tracking and an object-choice task. The children watched videos that showed puppets during their everyday activities (e.g., pet care). For every activity, the puppets were asked which of two objects (e.g., rabbit or dog) they would rather have. The puppets responded either directly (e.g., "I want the rabbit") or indirectly (e.g., "I have a carrot"). Results showed that children chose the object intended by the puppets more often in the direct communication condition than in the indirect communication condition, and that 5-year-olds chose correctly more often than 3-year-olds. However, even though we found that children's pupil size increased while hearing the utterances, we found no effect of communication type before children had already decided on the correct object during object selection by looking at it. Only after this point, that is, only in children's further fixation patterns and reaction times, did differences for communication type occur. Thus, although children's object-choice performance suggests that indirect communication is harder to understand than direct communication, the cognitive demands during processing of both communication types seem similar. We discuss theoretical implications of these findings for developmental pragmatics in terms of a dual-process account of communication comprehension.
Collapse
Affiliation(s)
- Cornelia Schulze
- Department of Educational Psychology, Faculty of Education, University of Leipzig, D-04109 Leipzig, Germany; Leipzig Research Center for Early Child Development, University of Leipzig, D-04109 Leipzig, Germany.
| | - David Buttelmann
- Department of Developmental Psychology, Institute of Psychology, University of Bern, 3012 Bern, Switzerland
| |
Collapse
|
25
|
Jongman SR. The attentional demands of combining comprehension and production in conversation. PSYCHOLOGY OF LEARNING AND MOTIVATION 2021. [DOI: 10.1016/bs.plm.2021.02.003] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
26
|
Boux I, Tomasello R, Grisoni L, Pulvermüller F. Brain signatures predict communicative function of speech production in interaction. Cortex 2020; 135:127-145. [PMID: 33360757 DOI: 10.1016/j.cortex.2020.11.008] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2020] [Revised: 11/05/2020] [Accepted: 11/18/2020] [Indexed: 10/22/2022]
Abstract
People normally know what they want to communicate before they start speaking. However, brain indicators of communication are typically observed only after speech act onset, and it is unclear when any anticipatory brain activity prior to speaking might first emerge, along with the communicative intentions it possibly reflects. Here, we investigated brain activity prior to the production of different speech act types, requests and naming actions, performed by uttering single words embedded into language games with a partner, similar to natural communication. Starting ca. 600 msec before speech onset, an event-related potential maximal at fronto-central electrodes, which resembled the Readiness Potential, was larger when preparing requests compared to naming actions. Analysis of the cortical sources of this anticipatory brain potential suggests a relatively stronger involvement of fronto-central motor regions for requests, which may reflect the speaker's expectation of the partner actions typically following requests, e.g., the handing over of a requested object. Our results indicate that different neuronal circuits underlying the processing of different speech act types become active already before speaking. Results are discussed in light of previous work addressing the neural basis of speech act understanding and predictive brain indexes of language comprehension.
Collapse
Affiliation(s)
- Isabella Boux
- Brain Language Laboratory, Department of Philosophy and Humanities, WE4 Freie Universität Berlin, Berlin, Germany; Einstein Center for Neurosciences, Berlin, Germany; Berlin School of Mind and Brain, Humboldt Universität zu Berlin, Berlin, Germany.
| | - Rosario Tomasello
- Brain Language Laboratory, Department of Philosophy and Humanities, WE4 Freie Universität Berlin, Berlin, Germany; Cluster of Excellence 'Matters of Activity. Image Space Material', Humboldt Universität zu Berlin, Berlin, Germany.
| | - Luigi Grisoni
- Brain Language Laboratory, Department of Philosophy and Humanities, WE4 Freie Universität Berlin, Berlin, Germany
| | - Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy and Humanities, WE4 Freie Universität Berlin, Berlin, Germany; Einstein Center for Neurosciences, Berlin, Germany; Berlin School of Mind and Brain, Humboldt Universität zu Berlin, Berlin, Germany; Cluster of Excellence 'Matters of Activity. Image Space Material', Humboldt Universität zu Berlin, Berlin, Germany
| |
Collapse
|
27
|
Vatanen A, Endo T, Yokomori D. Cross-Linguistic Investigation of Projection in Overlapping Agreements to Assertions: Stance-Taking as a Resource for Projection. DISCOURSE PROCESSES 2020. [DOI: 10.1080/0163853x.2020.1801317] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Affiliation(s)
- Anna Vatanen
- Research Unit for Languages and Literature, Faculty of Humanities, University of Oulu, Oulu, Finland
| | - Tomoko Endo
- Department of Language and Information Sciences, The University of Tokyo, Tokyo Japan
| | - Daisuke Yokomori
- Department of Linguistic Environment, Faculty of Languages and Cultures, Kyushu University, Fukuoka, Japan
| |
Collapse
|
28
|
Vergis N, Jiang X, Pell MD. Neural responses to interpersonal requests: Effects of imposition and vocally-expressed stance. Brain Res 2020; 1740:146855. [DOI: 10.1016/j.brainres.2020.146855] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2019] [Revised: 04/02/2020] [Accepted: 04/23/2020] [Indexed: 02/07/2023]
|
29
|
Donahoo SA, Lai VT. The mental representation and social aspect of expressives. Cogn Emot 2020; 34:1423-1438. [PMID: 32419627 DOI: 10.1080/02699931.2020.1764912] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
Despite increased focus on emotional language, research is lacking on the most emotional language of all: swearing. We used event-related potentials (ERPs) to investigate whether swear words have content distinct from function words, and if so, whether this content is emotional or social in nature. Stimuli included swear (e.g. shit, damn), negative but non-swear (e.g. kill, sick), open-class neutral (e.g. wood, lend), and closed-class neutral words (e.g. while, whom). Behaviourally, swears were recognised more slowly than valence- and arousal-matched negative words, meaning that there is more to the expressive dimension than merely a heightened emotional state. In ERPs, both swears and negative words elicited a larger positivity (250-550 ms) than open-class neutral words. Later, swears elicited a larger late positivity (550-750 ms) than negative words. We associate the earlier positivity effect with attention due to negative valence, and the later positivity effect with pragmatics due to social tabooness. Our findings suggest a view in which expressives are not merely function words or emotional words. Rather, expressives are emotionally and socially significant. Swears are more than what is indicated by valence or arousal alone.
Collapse
Affiliation(s)
- Stanley A Donahoo
- Department of Linguistics, University of Arizona, Tucson, AZ, USA; Cognitive Science Program, University of Arizona, Tucson, AZ, USA
| | - Vicky Tzuyin Lai
- Department of Psychology, University of Arizona, Tucson, AZ, USA; Cognitive Science Program, University of Arizona, Tucson, AZ, USA
| |
Collapse
|
30
|
|
31
|
Neurophysiological evidence for rapid processing of verbal and gestural information in understanding communicative actions. Sci Rep 2019; 9:16285. [PMID: 31705052 PMCID: PMC6841672 DOI: 10.1038/s41598-019-52158-w] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2019] [Accepted: 10/12/2019] [Indexed: 11/08/2022] Open
Abstract
During everyday social interaction, gestures are a fundamental part of human communication. The communicative pragmatic role of hand gestures and their interaction with spoken language has been documented at the earliest stage of language development, in which two types of indexical gestures are most prominent: the pointing gesture for directing attention to objects and the give-me gesture for making requests. Here we study, in adult human participants, the neurophysiological signatures of gestural-linguistic acts of communicating the pragmatic intentions of naming and requesting by simultaneously presenting written words and gestures. Already at ~150 ms, brain responses diverged between naming and request actions expressed by word-gesture combination, whereas the same gestures presented in isolation elicited their earliest neurophysiological dissociations significantly later (at ~210 ms). There was an early enhancement of request-evoked brain activity as compared with naming, which was due to sources in the frontocentral cortex, consistent with access to action knowledge in request understanding. In addition, an enhanced N400-like response indicated late semantic integration of gesture-language interaction. The present study demonstrates that word-gesture combinations used to express communicative pragmatic intentions speed up the brain correlates of comprehension processes – compared with gesture-only understanding – thereby calling into question current serial linguistic models viewing pragmatic function decoding at the end of a language comprehension cascade. Instead, information about the social-interactive role of communicative acts is processed instantaneously.
Collapse
|
32
|
A preliminary investigation of dispositional affect, the P300, and sentence processing. Brain Res 2019; 1721:146309. [PMID: 31247204 DOI: 10.1016/j.brainres.2019.146309] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2018] [Revised: 05/09/2019] [Accepted: 06/23/2019] [Indexed: 11/22/2022]
Abstract
We examined whether dispositional affect modulated the relation between sentence processing and the P300 Event-Related Potential (ERP) component. We used sentence stimuli from our previous study, where sentences started with subject nouns that were either quantified, as in Every kid…, or not, as in The kid…, and continued with a direct object that was either singular, as in a tree, or plural, as in the trees. In this Stroop-like task, participants read sentences presented in 1- and 2-word chunks, and were asked to identify the number of words on the screen at the target word tree(s), which was always presented alone (and never sentence-final). We replicated previous findings of a P300 effect at the target tree(s); however, actual by-condition effects differed from previous work. Of interest, clear individual differences were apparent. Participants with relatively lower Positive Affect scores (as measured by the Positive and Negative Affect Schedule; PANAS) showed differential P300 responses to the control condition, Every/The kid climbed the tree. Thus, the present ERP findings demonstrate that dispositional affect modulated P300 effects. These findings suggest that, rather than relying on global heuristic cues for sentence meaning interpretation, these participants may be differentially sensitive to local (grammatical) cues that signify task relevance. We discuss our results in terms of theories of positive affect, where less positive individuals are differentially sensitive to local (grammatical) information.
33
Holler J, Levinson SC. Multimodal Language Processing in Human Communication. Trends Cogn Sci 2019; 23:639-652. [PMID: 31235320] [DOI: 10.1016/j.tics.2019.05.006] [Citation(s) in RCA: 106] [Impact Index Per Article: 21.2] [Received: 01/07/2019] [Revised: 05/17/2019] [Accepted: 05/21/2019] [Indexed: 11/25/2022]
Abstract
The natural ecology of human language is face-to-face interaction comprising the exchange of a plethora of multimodal signals. Trying to understand the psycholinguistic processing of language in its natural niche raises new issues, first and foremost the binding of multiple, temporally offset signals under tight time constraints posed by a turn-taking system. This might be expected to overload and slow our cognitive system, but the reverse is in fact the case. We propose cognitive mechanisms that may explain this phenomenon and call for a multimodal, situated psycholinguistic framework to unravel the full complexities of human language processing.
Affiliation(s)
- Judith Holler
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands.
- Stephen C Levinson
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Centre for Language Studies, Radboud University Nijmegen, Nijmegen, The Netherlands
34
Gambi C, Pickering MJ. Sensorimotor communication and language: Comment on "The body talks: Sensorimotor communication and its brain and kinematic signatures" by G. Pezzulo et al. Phys Life Rev 2019; 28:34-35. [PMID: 30738761] [DOI: 10.1016/j.plrev.2019.01.015] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Received: 01/17/2019] [Accepted: 01/28/2019] [Indexed: 11/30/2022]
Affiliation(s)
- Chiara Gambi
- School of Psychology, 70, Park Place, Cardiff University, CF10 3AT Cardiff, UK.
35
Gisladottir RS, Bögels S, Levinson SC. Oscillatory Brain Responses Reflect Anticipation during Comprehension of Speech Acts in Spoken Dialog. Front Hum Neurosci 2018; 12:34. [PMID: 29467635] [PMCID: PMC5808328] [DOI: 10.3389/fnhum.2018.00034] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Received: 09/10/2017] [Accepted: 01/22/2018] [Indexed: 11/16/2022] Open
Abstract
Everyday conversation requires listeners to quickly recognize verbal actions, so-called speech acts, from the underspecified linguistic code and prepare a relevant response within the tight time constraints of turn-taking. The goal of this study was to determine the time-course of speech act recognition by investigating oscillatory EEG activity during comprehension of spoken dialog. Participants listened to short, spoken dialogs with target utterances that delivered three distinct speech acts (Answers, Declinations, Pre-offers). The targets were identical across conditions at lexico-syntactic and phonetic/prosodic levels but differed in the pragmatic interpretation of the speech act performed. Speech act comprehension was associated with reduced power in the alpha/beta bands just prior to Declination speech acts, relative to Answers and Pre-offers. In addition, we observed reduced power in the theta band during the beginning of Declinations, relative to Answers. Based on the role of alpha and beta desynchronization in anticipatory processes, the results are taken to indicate that anticipation plays a role in speech act recognition. Anticipation of speech acts could be critical for efficient turn-taking, allowing interactants to quickly recognize speech acts and respond within the tight time frame characteristic of conversation. The results show that anticipatory processes can be triggered by the characteristics of the interaction, including the speech act type.
Affiliation(s)
- Sara Bögels
- Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Stephen C Levinson
- Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
36
Pulvermüller F. Neural reuse of action perception circuits for language, concepts and communication. Prog Neurobiol 2017; 160:1-44. [PMID: 28734837] [DOI: 10.1016/j.pneurobio.2017.07.001] [Citation(s) in RCA: 124] [Impact Index Per Article: 17.7] [Received: 01/20/2017] [Revised: 05/12/2017] [Accepted: 07/13/2017] [Indexed: 10/19/2022]
Abstract
Neurocognitive and neurolinguistic theories make explicit statements relating specialized cognitive and linguistic processes to specific brain loci. These linking hypotheses are in need of neurobiological justification and explanation. Recent mathematical models of human language mechanisms constrained by fundamental neuroscience principles and established knowledge about comparative neuroanatomy offer explanations for where, when and how language is processed in the human brain. In these models, network structure and connectivity along with action- and perception-induced correlation of neuronal activity co-determine neurocognitive mechanisms. Language learning leads to the formation of action perception circuits (APCs) with specific distributions across cortical areas. Cognitive and linguistic processes such as speech production, comprehension, verbal working memory and prediction are modelled by activity dynamics in these APCs, and combinatorial and communicative-interactive knowledge is organized in the dynamics within, and connections between, APCs. The network models and, in particular, the concept of distributionally specific circuits, can account for some previously poorly understood facts about the cortical 'hubs' for semantic processing and the motor system's role in language understanding and speech sound recognition. A review of experimental data evaluates predictions of the APC model and alternative theories, also providing detailed discussion of some seemingly contradictory findings. Throughout, recent disputes about the role of mirror neurons and grounded cognition in language and communication are assessed critically.
Affiliation(s)
- Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy & Humanities, WE4, Freie Universität Berlin, 14195 Berlin, Germany; Berlin School of Mind and Brain, Humboldt Universität zu Berlin, 10099 Berlin, Germany; Einstein Center for Neurosciences, Berlin 10117 Berlin, Germany.
37
Corps RE, Gambi C, Pickering MJ. Coordinating Utterances During Turn-Taking: The Role of Prediction, Response Preparation, and Articulation. Discourse Processes 2017. [DOI: 10.1080/0163853x.2017.1330031] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Indexed: 10/19/2022]
Affiliation(s)
- Ruth E. Corps
- Department of Psychology, University of Edinburgh, Edinburgh, UK
| | - Chiara Gambi
- Department of Psychology, University of Edinburgh, Edinburgh, UK
38
Barthel M, Sauppe S, Levinson SC, Meyer AS. The Timing of Utterance Planning in Task-Oriented Dialogue: Evidence from a Novel List-Completion Paradigm. Front Psychol 2016; 7:1858. [PMID: 27990127] [PMCID: PMC5131015] [DOI: 10.3389/fpsyg.2016.01858] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.8] [Received: 05/18/2016] [Accepted: 11/09/2016] [Indexed: 11/13/2022] Open
Abstract
In conversation, interlocutors rarely leave long gaps between turns, suggesting that next speakers begin to plan their turns while listening to the previous speaker. The present experiment used analyses of speech onset latencies and eye-movements in a task-oriented dialogue paradigm to investigate when speakers start planning their responses. German speakers heard a confederate describe sets of objects in utterances that either ended in a noun [e.g., Ich habe eine Tür und ein Fahrrad ("I have a door and a bicycle")] or a verb form [e.g., Ich habe eine Tür und ein Fahrrad besorgt ("I have gotten a door and a bicycle")], while the presence or absence of the final verb either was or was not predictable from the preceding sentence structure. In response, participants had to name any unnamed objects they could see in their own displays with utterances such as Ich habe ein Ei ("I have an egg"). The results show that speakers begin to plan their turns as soon as sufficient information is available to do so, irrespective of further incoming words.
Affiliation(s)
- Mathias Barthel
- Language and Cognition Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
| | - Sebastian Sauppe
- Language and Cognition Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Department of Comparative Linguistics, University of Zurich, Zurich, Switzerland
| | - Stephen C Levinson
- Language and Cognition Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
| | - Antje S Meyer
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands; Psychology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
39
De Jaegher H, Peräkylä A, Stevanovic M. The co-creation of meaningful action: bridging enaction and interactional sociology. Philos Trans R Soc Lond B Biol Sci 2016; 371:20150378. [PMID: 27069055] [PMCID: PMC4843616] [DOI: 10.1098/rstb.2015.0378] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.0] [Accepted: 02/13/2016] [Indexed: 11/17/2022] Open
Abstract
What makes possible the co-creation of meaningful action? In this paper, we go in search of an answer to this question by combining insights from interactional sociology and enaction. Both research schools investigate social interactions as such, and conceptualize their organization in terms of autonomy. We ask what it could mean for an interaction to be autonomous, and discuss the structures and processes that contribute to and are maintained in the so-called interaction order. We also discuss the role played by individual vulnerability as well as the vulnerability of social interaction processes in the co-creation of meaningful action. Finally, we outline some implications of this interdisciplinary fraternization for the empirical study of social understanding, in particular in social neuroscience and psychology, pointing out the need for studies based on dynamic systems approaches on origins and references of coordination, and experimental designs to help understand human co-presence.
Affiliation(s)
- Hanne De Jaegher
- Department of Logic and Philosophy of Science, IAS-Research Centre for Life, Mind, and Society, University of the Basque Country, San Sebastián, Spain; Department of Informatics, Centre for Computational Neuroscience and Robotics, and Centre for Research in Cognitive Science, University of Sussex, Brighton, UK
| | - Anssi Peräkylä
- Department of Social Research, Finnish Center of Excellence on Intersubjectivity in Interaction, University of Helsinki, Helsinki, Finland
| | - Melisa Stevanovic
- Department of Social Research, Finnish Center of Excellence on Intersubjectivity in Interaction, University of Helsinki, Helsinki, Finland
40
Bögels S, Kendrick KH, Levinson SC. Never Say No... How the Brain Interprets the Pregnant Pause in Conversation. PLoS One 2015; 10:e0145474. [PMID: 26699335] [PMCID: PMC4689543] [DOI: 10.1371/journal.pone.0145474] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.0] [Received: 09/24/2015] [Accepted: 12/06/2015] [Indexed: 11/18/2022] Open
Abstract
In conversation, negative responses to invitations, requests, offers, and the like are more likely to occur with a delay–conversation analysts talk of them as dispreferred. Here we examine the contrastive cognitive load ‘yes’ and ‘no’ responses make, either when relatively fast (300 ms after question offset) or delayed (1000 ms). Participants heard short dialogues contrasting in speed and valence of response while having their EEG recorded. We found that a fast ‘no’ evokes an N400-effect relative to a fast ‘yes’; however, this contrast disappeared in the delayed responses. 'No' responses, however, elicited a late frontal positivity both if they were fast and if they were delayed. We interpret these results as follows: a fast ‘no’ evoked an N400 because an immediate response is expected to be positive–this effect disappears as the response time lengthens because now in ordinary conversation the probability of a ‘no’ has increased. However, regardless of the latency of response, a ‘no’ response is associated with a late positivity, since a negative response is always dispreferred. Together these results show that negative responses to social actions exact a higher cognitive load, but especially when least expected, in immediate response.
Affiliation(s)
- Sara Bögels
- Language and Cognition Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
| | - Kobin H. Kendrick
- Language and Cognition Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
| | - Stephen C. Levinson
- Language and Cognition Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
41
Levinson SC. Turn-taking in Human Communication--Origins and Implications for Language Processing. Trends Cogn Sci 2015; 20:6-14. [PMID: 26651245] [DOI: 10.1016/j.tics.2015.10.010] [Citation(s) in RCA: 237] [Impact Index Per Article: 26.3] [Received: 08/04/2015] [Revised: 10/26/2015] [Accepted: 10/28/2015] [Indexed: 11/16/2022]
Abstract
Most language usage is interactive, involving rapid turn-taking. The turn-taking system has a number of striking properties: turns are short and responses are remarkably rapid, but turns are of varying length and often of very complex construction such that the underlying cognitive processing is highly compressed. Although neglected in cognitive science, the system has deep implications for language processing and acquisition that are only now becoming clear. Appearing earlier in ontogeny than linguistic competence, it is also found across all the major primate clades. This suggests a possible phylogenetic continuity, which may provide key insights into language evolution.
Affiliation(s)
- Stephen C Levinson
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, NL-6525 XD Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands.
42
Levinson SC, Torreira F. Timing in turn-taking and its implications for processing models of language. Front Psychol 2015; 6:731. [PMID: 26124727] [PMCID: PMC4464110] [DOI: 10.3389/fpsyg.2015.00731] [Citation(s) in RCA: 130] [Impact Index Per Article: 14.4] [Received: 01/28/2015] [Accepted: 05/16/2015] [Indexed: 12/03/2022] Open
Abstract
The core niche for language use is in verbal interaction, involving the rapid exchange of turns at talking. This paper reviews the extensive literature about this system, adding new statistical analyses of behavioral data where they have been missing, demonstrating that turn-taking has the systematic properties originally noted by Sacks et al. (1974; hereafter SSJ). This system poses some significant puzzles for current theories of language processing: the gaps between turns are short (of the order of 200 ms), but the latencies involved in language production are much longer (over 600 ms). This seems to imply that participants in conversation must predict (or 'project' as SSJ have it) the end of the current speaker's turn in order to prepare their response in advance. This in turn implies some overlap between production and comprehension despite their use of common processing resources. Collecting together what is known behaviorally and experimentally about the system, the space for systematic explanations of language processing for conversation can be significantly narrowed, and we sketch a first model of the mental processes involved for the participant preparing to speak next.
Affiliation(s)
- Stephen C. Levinson
- Language and Cognition Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud UniversityNijmegen, Netherlands
| | - Francisco Torreira
- Language and Cognition Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands