1
Ter Bekke M, Levinson SC, van Otterdijk L, Kühn M, Holler J. Visual bodily signals and conversational context benefit the anticipation of turn ends. Cognition 2024; 248:105806. PMID: 38749291. DOI: 10.1016/j.cognition.2024.105806.
Abstract
The typical pattern of alternating turns in conversation seems trivial at first sight. But a closer look quickly reveals the cognitive challenges involved, with much of it resulting from the fast-paced nature of conversation. One core ingredient to turn coordination is the anticipation of upcoming turn ends so as to be able to ready oneself for providing the next contribution. Across two experiments, we investigated two variables inherent to face-to-face conversation, the presence of visual bodily signals and preceding discourse context, in terms of their contribution to turn end anticipation. In a reaction time paradigm, participants anticipated conversational turn ends better when seeing the speaker and their visual bodily signals than when they did not, especially so for longer turns. Likewise, participants were better able to anticipate turn ends when they had access to the preceding discourse context than when they did not, and especially so for longer turns. Critically, the two variables did not interact, showing that visual bodily signals retain their influence even in the context of preceding discourse. In a pre-registered follow-up experiment, we manipulated the visibility of the speaker's head, eyes and upper body (i.e. torso + arms). Participants were better able to anticipate turn ends when the speaker's upper body was visible, suggesting a role for manual gestures in turn end anticipation. Together, these findings show that seeing the speaker during conversation may critically facilitate turn coordination in interaction.
Affiliation(s)
- Marlijn Ter Bekke
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Lina van Otterdijk
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands
- Michelle Kühn
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands
- Judith Holler
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands.
2
Kim J, Hazan V, Tuomainen O, Davis C. Partner-directed gaze and co-speech hand gestures: effects of age, hearing loss and noise. Front Psychol 2024; 15:1324667. PMID: 38882511. PMCID: PMC11178134. DOI: 10.3389/fpsyg.2024.1324667.
Abstract
Research on the adaptations talkers make to different communication conditions during interactive conversations has primarily focused on speech signals. We extended this type of investigation to two other important communicative signals, i.e., partner-directed gaze and iconic co-speech hand gestures, with the aim of determining whether the adaptations made by older adults differ from those made by younger adults across communication conditions. We recruited 57 pairs of participants, comprising 57 primary talkers and 57 secondary ones. Primary talkers consisted of three groups: 19 older adults with mild hearing loss (older adult-HL); 17 older adults with normal hearing (older adult-NH); and 21 younger adults. The DiapixUK "spot the difference" conversation-based task was used to elicit conversations in participant pairs. One easy (No Barrier: NB) and three difficult communication conditions were tested. The three difficult conditions consisted of two in which the primary talker could hear clearly but the secondary talker could not, due to multi-talker babble noise (BAB1) or a less familiar hearing loss simulation (HLS), and one in which both the primary and secondary talkers heard each other in babble noise (BAB2). For primary talkers, we measured the mean number of partner-directed gazes, the mean total gaze duration, and the mean number of co-speech hand gestures. We found a robust effect of communication condition that interacted with participant group. Effects of age were found for both gaze and gesture in BAB1: older adult-NH participants looked and gestured less than younger adults did when the secondary talker experienced babble noise. For hearing status, a difference in gaze between older adult-NH and older adult-HL was found in the BAB1 condition; for gesture, this difference was significant in all three difficult communication conditions (older adult-HL gazed and gestured more).
We propose that the age effect may be due to a decline in older adults' attention to cues signaling how well a conversation is progressing. To explain the hearing status effect, we suggest that older adults' attentional decline is offset by hearing loss, because these participants have learned to pay greater attention to visual cues for understanding speech.
Affiliation(s)
- Jeesun Kim
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
- Valerie Hazan
- Speech Hearing and Phonetic Sciences, University College London, London, United Kingdom
- Outi Tuomainen
- Department of Linguistics, University of Potsdam, Potsdam, Germany
- Chris Davis
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
3
Hansen TA, O’Leary RM, Svirsky MA, Wingfield A. Self-pacing ameliorates recall deficit when listening to vocoded discourse: a cochlear implant simulation. Front Psychol 2023; 14:1225752. PMID: 38054180. PMCID: PMC10694252. DOI: 10.3389/fpsyg.2023.1225752.
Abstract
Introduction: In spite of its apparent ease, comprehension of spoken discourse represents a complex linguistic and cognitive operation. The difficulty of such an operation can increase when the speech is degraded, as is the case with cochlear implant users. However, the additional challenges imposed by degraded speech may be mitigated to some extent by the linguistic context and pace of presentation.
Methods: An experiment is reported in which young adults with age-normal hearing recalled discourse passages heard with clear speech or with noise-band vocoding used to simulate the sound of speech produced by a cochlear implant. Passages were varied in inter-word predictability and presented either without interruption or in a self-pacing format that allowed the listener to control the rate at which the information was delivered.
Results: Discourse heard with clear speech was better recalled than discourse heard with vocoded speech, discourse with a higher average inter-word predictability was better recalled than discourse with a lower average inter-word predictability, and self-paced passages were recalled better than those heard without interruption. Of special interest was the semantic hierarchy effect: the tendency for listeners to show better recall for main ideas than for mid-level information or detail from a passage, taken as an index of listeners' ability to understand the meaning of a passage. The data revealed a significant effect of inter-word predictability, in that passages with lower predictability had an attenuated semantic hierarchy effect relative to higher-predictability passages.
Discussion: Results are discussed in terms of broadening cochlear implant outcome measures beyond current clinical measures that focus on single-word and sentence repetition.
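The noise-band vocoding mentioned above can be illustrated in a few lines: split the signal into a handful of frequency bands, keep only each band's slow amplitude envelope, and use those envelopes to modulate band-limited noise, discarding spectral fine structure. This is a generic sketch of the technique, not the authors' stimulus-generation code; the channel count, filter design, and band edges are illustrative choices.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=6, f_lo=100.0, f_hi=7000.0):
    """Noise-band vocode a speech signal: split into log-spaced bands,
    extract each band's amplitude envelope, and use it to modulate
    band-limited noise."""
    rng = np.random.default_rng(0)
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    out = np.zeros(len(signal), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))           # band amplitude envelope
        noise = rng.standard_normal(len(signal))
        out += envelope * sosfiltfilt(sos, noise)  # envelope-modulated noise band
    return out
```

Varying `n_channels` (e.g., 22 vs. 6) trades off how much spectral detail survives, which is how studies of this kind manipulate degradation severity.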
Affiliation(s)
- Thomas A. Hansen
- Department of Psychology, Brandeis University, Waltham, MA, United States
- Ryan M. O’Leary
- Department of Psychology, Brandeis University, Waltham, MA, United States
- Mario A. Svirsky
- Department of Otolaryngology, NYU Langone Medical Center, New York, NY, United States
- Arthur Wingfield
- Department of Psychology, Brandeis University, Waltham, MA, United States
4
Shiell MM, Høy-Christensen J, Skoglund MA, Keidser G, Zaar J, Rotger-Griful S. Multilevel Modeling of Gaze From Listeners With Hearing Loss Following a Realistic Conversation. J Speech Lang Hear Res 2023; 66:4575-4589. PMID: 37850878. DOI: 10.1044/2023_jslhr-22-00641.
Abstract
Purpose: There is a need for tools to study real-world communication abilities in people with hearing loss. We outline a potential method for this that analyzes gaze, and we use it to answer the question of when and how much listeners with hearing loss look toward a new talker in a conversation.
Method: Twenty-two older adults with hearing loss followed a prerecorded two-person audiovisual conversation in the presence of babble noise. We compared their eye-gaze direction to the conversation in two multilevel logistic regression (MLR) analyses. First, we split the conversation into events classified by the number of active talkers within a turn or a transition, and we tested whether these predicted the listener's gaze. Second, we mapped the odds that a listener gazed toward a new talker over time during a conversation transition.
Results: We found no evidence that our conversation events predicted changes in the listener's gaze, but the listener's gaze toward the new talker during a silence transition was predicted by time: the odds of looking at the new talker increased in an s-shaped curve from at least 0.4 s before to 1 s after the onset of the new talker's speech. A comparison of models with different random effects indicated that more variance was explained by differences between individual conversation events than by differences between individual listeners.
Conclusions: MLR modeling of eye gaze during talker transitions is a promising approach to study a listener's perception of realistic conversation. Our experience provides insight to guide future research with this method.
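The time-course analysis described above, modeling the odds of gazing at the new talker as a function of time around the transition, can be sketched with a plain single-level logistic regression on synthetic data. The study's actual model is multilevel (random effects for events and listeners), so this is only an illustration of how an s-shaped gaze probability curve is recovered from binary gaze samples; all numbers below are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data: time relative to the new talker's speech onset (s), and
# whether the listener's gaze is on the new talker (1) or elsewhere (0).
rng = np.random.default_rng(1)
t = rng.uniform(-1.0, 1.5, size=2000)
p_true = 1.0 / (1.0 + np.exp(-4.0 * (t - 0.2)))  # assumed s-shaped ground truth
gaze = rng.binomial(1, p_true)

# Fit gaze ~ time; the coefficient is the change in log-odds per second.
model = LogisticRegression().fit(t.reshape(-1, 1), gaze)

# Fitted probability of gazing at the new talker across the transition.
grid = np.linspace(-1.0, 1.5, 6).reshape(-1, 1)
p_hat = model.predict_proba(grid)[:, 1]
```

A multilevel version would add random intercepts/slopes per conversation event and per listener, which is what lets the paper compare how much variance each grouping explains.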
Affiliation(s)
- Martin A Skoglund
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Division of Automatic Control, Department of Electrical Engineering, The Institute of Technology, Linköping University, Sweden
- Gitte Keidser
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Linköping University, Sweden
- Johannes Zaar
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
5
Zhou J, Fredrickson BL. Listen to resonate: Better listening as a gateway to interpersonal positivity resonance through enhanced sensory connection and perceived safety. Curr Opin Psychol 2023; 53:101669. PMID: 37619451. DOI: 10.1016/j.copsyc.2023.101669.
Abstract
Although often experienced individually, emotions are at times co-experienced with others, collectively. One type of collective emotion, termed positivity resonance, refers to co-experienced positive affect accompanied by caring non-verbal behavioral synchrony and biological synchrony across persons. Growing evidence illustrates the contributions of positivity resonance to individual, relational, and community well-being. Two conditions theorized as conducive for the emergence of positivity resonance are real-time sensory connection and perceived safety. Here, we explore listening as an interpersonal process that can serve to enhance real-time sensory connection and perceived safety and thereby increase positivity resonance among conversation partners. Specifically, we present evidence that connects listening to direct gaze (i.e., real-time sensory connection) and psychological safety (i.e., perceived safety). We close by offering a framework to guide future research that can test whether and how conversational listening functions to create more moments of positivity resonance in interpersonal contexts.
Affiliation(s)
- Jieni Zhou
- Department of Psychology and Neuroscience, University of North Carolina, Chapel Hill, NC, USA.
- Barbara L Fredrickson
- Department of Psychology and Neuroscience, University of North Carolina, Chapel Hill, NC, USA
6
Cross MP, Acevedo AM, Hunter JF. A Critique of Automated Approaches to Code Facial Expressions: What Do Researchers Need to Know? Affect Sci 2023; 4:500-505. PMID: 37744972. PMCID: PMC10514002. DOI: 10.1007/s42761-023-00195-0.
Abstract
Facial expression recognition software is becoming more commonly used by affective scientists to measure facial expressions. Although the use of this software has exciting implications, there are persistent and concerning issues regarding the validity and reliability of these programs. In this paper, we highlight three of these issues: biases of the programs against certain skin colors and genders; the common inability of these programs to capture facial expressions made in non-idealized conditions (e.g., "in the wild"); and programs being forced to adopt the underlying assumptions of the specific theory of emotion on which each software is based. We then discuss three directions for the future of affective science in the area of automated facial coding. First, researchers need to be cognizant of exactly how and on which data sets the machine learning algorithms underlying these programs are being trained. In addition, there are several ethical considerations, such as privacy and data storage, surrounding the use of facial expression recognition programs. Finally, researchers should consider collecting additional emotion data, such as body language, and combine these data with facial expression data in order to achieve a more comprehensive picture of complex human emotions. Facial expression recognition programs are an excellent method of collecting facial expression data, but affective scientists should ensure that they recognize the limitations and ethical implications of these programs.
Affiliation(s)
- Marie P. Cross
- Department of Biobehavioral Health, Pennsylvania State University, University Park, PA, USA
- Amanda M. Acevedo
- Basic Biobehavioral and Psychological Sciences Branch, National Cancer Institute, Rockville, MD, USA
- John F. Hunter
- Department of Psychology, Chapman University, Orange, CA, USA
7
Hall SS, Britton TC. Differential Effects of a Behavioral Treatment Probe on Social Gaze Behavior in Fragile X Syndrome and Non-Syndromic Autism Spectrum Disorder. J Autism Dev Disord 2023. PMID: 37142899. DOI: 10.1007/s10803-023-05919-6.
Abstract
The purpose of this study was to examine potential differences in social learning between individuals with fragile X syndrome (FXS), the leading known inherited cause of intellectual disability, and individuals with non-syndromic autism spectrum disorder (ASD). Thirty school-aged males with FXS and 26 age- and symptom-matched males with non-syndromic ASD were administered a behavioral treatment probe designed to improve levels of social gaze during interactions with others. The treatment probe was administered by a trained behavior therapist over two days in our laboratory and included reinforcement of social gaze in two alternating training conditions: looking while listening and looking while speaking. Prior to each session, children in each group were taught progressive muscle relaxation and breathing techniques to counteract potential increased hyperarousal. Measures included the rate of learning in each group during treatment, in addition to levels of social gaze and heart rate obtained during a standardized social conversation task administered prior to and following the treatment probe. Results showed that learning rates obtained during administration of the treatment probe were significantly less steep and less variable for males with FXS than for males with non-syndromic ASD. Significant improvements in social gaze were also observed for males with FXS during the social conversation task. There was no effect of the treatment probe on heart rate in either group. These data reveal important differences in social learning between the two groups and have implications for early interventions in the two conditions.
Affiliation(s)
- Scott S Hall
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA, USA.
- Tobias C Britton
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA, USA
8
Howes C, Lavelle M. Quirky conversations: how people with a diagnosis of schizophrenia do dialogue differently. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210480. PMID: 36871591. PMCID: PMC9985960. DOI: 10.1098/rstb.2021.0480.
Abstract
People with a diagnosis of schizophrenia (PSz) have difficulty engaging in social interaction, but little research has focused on dialogues in which PSz interact with partners who are unaware of their diagnosis. Using quantitative and qualitative methods on a unique corpus of triadic dialogues from PSz's first social encounters, we show that turn-taking is disrupted in dialogues involving a PSz. Specifically, there are on average longer gaps between turns in groups which contain a PSz compared to those which do not, particularly when the speaker switch occurs from one control (C) participant to the other. Furthermore, the expected link between gesture and repair is not present in dialogues with a PSz, particularly for C participants interacting with a PSz. As well as offering some insights into how the presence of a PSz affects an interaction, our results also demonstrate the flexibility of our mechanisms for interaction. This article is part of a discussion meeting issue 'Face2face: advancing the science of social interaction'.
Affiliation(s)
- Christine Howes
- Department of Philosophy, Linguistics and Theory of Science, University of Gothenburg, 405 30 Gothenburg, Sweden
- Mary Lavelle
- School of Psychology, Queen's University Belfast, Belfast BT7 1NN, UK
9
Kendrick KH, Holler J, Levinson SC. Turn-taking in human face-to-face interaction is multimodal: gaze direction and manual gestures aid the coordination of turn transitions. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210473. PMID: 36871587. PMCID: PMC9985971. DOI: 10.1098/rstb.2021.0473.
Abstract
Human communicative interaction is characterized by rapid and precise turn-taking. This is achieved by an intricate system that has been elucidated in the field of conversation analysis, based largely on the study of the auditory signal. This model suggests that transitions occur at points of possible completion identified in terms of linguistic units. Despite this, considerable evidence exists that visible bodily actions including gaze and gestures also play a role. To reconcile disparate models and observations in the literature, we combine qualitative and quantitative methods to analyse turn-taking in a corpus of multimodal interaction using eye-trackers and multiple cameras. We show that transitions seem to be inhibited when a speaker averts their gaze at a point of possible turn completion, or when a speaker produces gestures which are beginning or unfinished at such points. We further show that while the direction of a speaker's gaze does not affect the speed of transitions, the production of manual gestures does: turns with gestures have faster transitions. Our findings suggest that the coordination of transitions involves not only linguistic resources but also visual gestural ones and that the transition-relevance places in turns are multimodal in nature. This article is part of a discussion meeting issue 'Face2face: advancing the science of social interaction'.
Affiliation(s)
- Kobin H. Kendrick
- Department of Language and Linguistic Science, University of York, York YO10 5DD, UK
- Judith Holler
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, Gelderland, The Netherlands
- Stephen C. Levinson
- Max Planck Institute for Psycholinguistics, Nijmegen, Gelderland, The Netherlands
10
When Attentional and Politeness Demands Clash: The Case of Mutual Gaze Avoidance and Chin Pointing in Quiahije Chatino. J Nonverbal Behav 2023. DOI: 10.1007/s10919-022-00423-4.
Abstract
Pointing with the chin is a practice attested worldwide: it is an effective and highly recognizable device for re-orienting the attention of the addressee. For the chin point to be observed, the addressee must attend carefully to the movements of the sender's head. This demand comes into conflict with the politeness norms of many cultures, since these often require conversationalists to avoid meeting the gaze of their interlocutor, and can require them to look away from their interlocutor's face and head. In this paper we explore how the chin point is successfully used in just such a culture, among the Chatino indigenous group of Oaxaca, Mexico. We analyze interactions between multiple dyads of Chatino speakers, examining how senders invite visual attention to the pointing gesture, and how addressees signal that attention, while both participants avoid stretches of mutual gaze. We find that in the Chatino context, the senior (or higher-status) party to the conversation is highly consistent in training their gaze away from their interlocutor. This allows their interlocutor to give visual attention to their face without the risk of meeting the gaze of a higher-status sender, and facilitates close attention to head movements including the chin point. Abstracts in Spanish and Quiahije Chatino are published as appendices.
11
O’Leary RM, Neukam J, Hansen TA, Kinney AJ, Capach N, Svirsky MA, Wingfield A. Strategic Pauses Relieve Listeners from the Effort of Listening to Fast Speech: Data-Limited and Resource-Limited Processes in Narrative Recall by Adult Users of Cochlear Implants. Trends Hear 2023; 27:23312165231203514. PMID: 37941344. PMCID: PMC10637151. DOI: 10.1177/23312165231203514.
Abstract
Speech that has been artificially accelerated through time compression produces a notable deficit in recall of the speech content. This is especially so for adults with cochlear implants (CI). At the perceptual level, this deficit may be due to the sharply degraded CI signal, combined with the reduced richness of compressed speech. At the cognitive level, the rapidity of time-compressed speech can deprive the listener of the ordinarily available processing time present when speech is delivered at a normal speech rate. Two experiments are reported. Experiment 1 was conducted with 27 normal-hearing young adults as a proof-of-concept demonstration that restoring lost processing time by inserting silent pauses at linguistically salient points within a time-compressed narrative ("time-restoration") returns recall accuracy to a level approximating that for a normal speech rate. Noise vocoder conditions with 10 and 6 channels reduced the effectiveness of time-restoration. Pupil dilation indicated that additional effort was expended by participants while attempting to process the time-compressed narratives, with the effortful demand on resources reduced with time restoration. In Experiment 2, 15 adult CI users tested with the same (unvocoded) materials showed a similar pattern of behavioral and pupillary responses, but with the notable exception that meaningful recovery of recall accuracy with time-restoration was limited to a subgroup of CI users identified by better working memory spans, and better word and sentence recognition scores. Results are discussed in terms of sensory-cognitive interactions in data-limited and resource-limited processes among adult users of cochlear implants.
Affiliation(s)
- Ryan M. O’Leary
- Department of Psychology, Brandeis University, Waltham, Massachusetts, USA
- Jonathan Neukam
- Department of Otolaryngology, NYU Langone Medical Center, New York, New York, USA
- Thomas A. Hansen
- Department of Psychology, Brandeis University, Waltham, Massachusetts, USA
- Nicole Capach
- Department of Otolaryngology, NYU Langone Medical Center, New York, New York, USA
- Mario A. Svirsky
- Department of Otolaryngology, NYU Langone Medical Center, New York, New York, USA
- Arthur Wingfield
- Department of Psychology, Brandeis University, Waltham, Massachusetts, USA
12
Li M, Guo F, Wang X, Chen J, Ham J. Effects of robot gaze and voice human-likeness on users’ subjective perception, visual attention, and cerebral activity in voice conversations. Comput Human Behav 2022. DOI: 10.1016/j.chb.2022.107645.
13
Palmer CJ, Clifford CWG. Spatial selectivity in adaptation to gaze direction. Proc Biol Sci 2022; 289:20221230. PMID: 35946160. PMCID: PMC9380130. DOI: 10.1098/rspb.2022.1230.
Abstract
A person's focus of attention is conveyed by the direction of their eyes and face, providing a simple visual cue fundamental to social interaction. A growing body of research examines the visual mechanisms that encode the direction of another person's gaze as we observe them. Here we investigate the spatial receptive field properties of these mechanisms, by testing the spatial selectivity of sensory adaptation to gaze direction. Human observers were adapted to faces with averted gaze presented in one visual hemifield, then tested in their perception of gaze direction for faces presented in the same or opposite hemifield. Adaptation caused strong, repulsive perceptual aftereffects, but only for faces presented in the same hemifield as the adapter. This occurred even though adapting and test stimuli were in the same external location across saccades. Hence, there was clear evidence for retinotopic adaptation and a relative lack of either spatiotopic or spatially invariant adaptation. These results indicate that adaptable representations of gaze direction in the human visual system have retinotopic spatial receptive fields. This strategy of coding others' direction of gaze with positional specificity relative to one's own eye position may facilitate key functions of gaze perception, such as socially cued shifts in visual attention.
Affiliation(s)
- Colin J. Palmer
- School of Psychology, UNSW Sydney, New South Wales 2052, Australia
14
Palmer CJ, Bracken SG, Otsuka Y, Clifford CWG. Is there a 'zone of eye contact' within the borders of the face? Cognition 2021; 220:104981. PMID: 34920299. DOI: 10.1016/j.cognition.2021.104981.
Abstract
Eye contact is a salient feature of everyday interactions, yet it is not obvious what the physical conditions are under which we feel that we have eye contact with another person. Here we measure the range of locations that gaze can fall on a person's face to elicit a sense of eye contact. Participants made judgements about eye contact while viewing rendered images of faces with finely-varying gaze direction at a close interpersonal distance (50 cm). The 'zone of eye contact' tends to peak between the two eyes and is often surprisingly narrower than the observer's actual eye region. Indeed, the zone tends to extend further across the face in height than in width. This shares an interesting parallel with the 'cyclopean eye' of visual perspective - our sense of looking out from a single point in space despite the physical separation of our two eyes. The distribution of eye-contact strength across the face can be modelled at the individual-subject level as a 2D Gaussian function. Perception of eye contact is more precise than the sense of having one's face looked at, which captures a wider range of gaze locations in both the horizontal and vertical dimensions, at least at the close viewing distance used in the present study. These features of eye-contact perception are very similar cross-culturally, tested here in Australian and Japanese university students. However, the shape and position of the zone of eye contact does vary depending on recent sensory experience: adaptation to faces with averted gaze causes a pronounced shift and widening of the zone across the face, and judgements about eye contact also show a positive serial dependence. Together, these results provide insight into the conditions under which eye contact is felt, with respect to face morphology, culture, and sensory context.
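The individual-subject model described above, eye-contact strength as a 2D Gaussian over gaze position on the face, can be sketched with an ordinary least-squares fit. The grid, units, and parameter values below are invented for illustration and are not the study's data; the wider vertical than horizontal spread is chosen to echo the reported shape of the zone.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, x0, y0, sx, sy):
    """Elliptical 2D Gaussian: eye-contact strength at gaze location (x, y)."""
    x, y = xy
    return amp * np.exp(-0.5 * (((x - x0) / sx) ** 2 + ((y - y0) / sy) ** 2))

# Synthetic "eye contact" judgements on a grid of gaze positions (cm on the face),
# peaking between the eyes (x0=0) and slightly above face center (y0=0.5),
# with a taller-than-wide zone (sy > sx).
x, y = np.meshgrid(np.linspace(-4, 4, 21), np.linspace(-4, 4, 21))
xy = (x.ravel(), y.ravel())
rng = np.random.default_rng(2)
data = gauss2d(xy, 1.0, 0.0, 0.5, 1.0, 1.8) + rng.normal(0, 0.02, x.size)

# Recover the zone's center and spread from the noisy judgements.
popt, _ = curve_fit(gauss2d, xy, data, p0=(1.0, 0.0, 0.0, 1.0, 1.0))
amp, x0, y0, sx, sy = popt
```

The fitted `(x0, y0)` locates the peak of the zone on the face and `(sx, sy)` its horizontal and vertical extent, which is the kind of summary the cross-cultural and adaptation comparisons rest on.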
Affiliation(s)
- Colin J Palmer
- School of Psychology, UNSW Sydney, New South Wales 2052, Australia.
- Sophia G Bracken
- School of Psychology, UNSW Sydney, New South Wales 2052, Australia
- Yumiko Otsuka
- Department of Humanities and Social Sciences, Ehime University, Matsuyama, Ehime, Japan; Faculty of Science and Engineering, Waseda University, Japan