1. Zadoorian S, Rosenblum LD. The Benefit of Bimodal Training in Voice Learning. Brain Sci 2023; 13:1260. PMID: 37759861; PMCID: PMC10526927; DOI: 10.3390/brainsci13091260.
Abstract
It is known that talkers can be recognized by listening to their specific vocal qualities, such as breathiness and fundamental frequency. However, talker identification can also occur by focusing on a talker's unique articulatory style, which is available both auditorily and visually and can be shared across modalities. Evidence shows that voices heard while seeing talkers' faces are later recognized better on their own than voices heard alone. The present study investigated whether this facilitation of voice learning through facial cues relies on talker-specific articulatory or nonarticulatory facial information. Participants were initially trained to learn the voices of ten talkers presented either on their own or together with (a) an articulating face, (b) a static face, or (c) an isolated articulating mouth. Participants were then tested on recognizing the voices on their own, regardless of their training modality. Consistent with previous research, voices learned with articulating faces were recognized better on their own than voices learned alone. However, isolated articulating mouths provided no such advantage, suggesting that the face-based benefit to voice learning depends on more than articulatory mouth movements alone.
2. Lavan N, Ramanik Bamaniya N, Muse M, Price RLM, Mareschal I. The effects of the presence of a face and direct eye gaze on voice identity learning. Br J Psychol 2023; 114:537-549. PMID: 36690438; PMCID: PMC10952776; DOI: 10.1111/bjop.12633.
Abstract
We rarely become familiar with the voice of another person in isolation but usually also have access to visual identity information, thus learning to recognize their voice and face in parallel. There are conflicting findings as to whether learning to recognize voices in audiovisual vs audio-only settings is advantageous or detrimental to learning. One prominent finding shows that the presence of a face overshadows the voice, hindering voice identity learning by capturing listeners' attention (Face Overshadowing Effect; FOE). In the current study, we tested the proposal that the effect of audiovisual training on voice identity learning is driven by attentional processes. Participants learned to recognize voices through either audio-only training (Audio-Only) or through three versions of audiovisual training, where a face was presented alongside the voices. During audiovisual training, the faces were either looking at the camera (Direct Gaze), were looking to the side (Averted Gaze) or had closed eyes (No Gaze). We found a graded effect of gaze on voice identity learning: Voice identity recognition was most accurate after audio-only training and least accurate after audiovisual training including direct gaze, constituting a FOE. While effect sizes were overall small, the magnitude of FOE was halved for the Averted and No Gaze conditions. With direct gaze being associated with increased attention capture compared to averted or no gaze, the current findings suggest that incidental attention capture at least partially underpins the FOE. We discuss these findings in light of visual dominance effects and the relative informativeness of faces vs voices for identity perception.
Affiliation(s)
- Nadine Lavan, Nisha Ramanik Bamaniya, Moha‐Maryam Muse, Raffaella Lucy Monica Price, Isabelle Mareschal: Department of Biological and Experimental Psychology, School of Biological and Behavioural Sciences, Queen Mary University of London, London, UK
3. Hernández Blasi C, Bjorklund DF, Agut S, Lozano Nomdedeu F, Martínez MÁ. Young children's attributes are better conveyed by voices than by faces. J Exp Child Psychol 2023; 228:105606. PMID: 36535204; DOI: 10.1016/j.jecp.2022.105606.
Abstract
The purpose of this study was to explore how young children's vocal and facial cues, when presented together, convey important information about children's attributes to adults. In particular, the study aimed to disentangle whether children's vocal or facial cues, if either, are more dominant when the two types of cues are displayed in a contradictory mode. To do this, we assigned 127 college students to one of three between-participants conditions. In the Voices-Only condition, participants listened to four pairs of synthesized voices simulating the voices of 4-5-year-old and 9-10-year-old children verbalizing a neutral-content sentence. Participants needed to indicate which voice was better associated with a series of 14 attributes organized into four trait dimensions (Positive Affect, Negative Affect, Intelligence, and Helplessness), potentially meaningful in young child-adult interactions. In the Consistent condition, the same four pairs of voices used in the Voices-Only condition were presented jointly with morphed photographs of children's faces of equivalent age. In the Inconsistent condition, the four pairs of voices and faces were paired in a contradictory manner (immature voices with mature faces vs. mature voices with immature faces). Results revealed that vocal cues were more effective than facial cues in conveying young children's attributes to adults and that women were more efficient (i.e., faster) than men in responding to children's cues. These results confirm and extend previous evidence on the relevance of children's vocal cues in signaling important information about children's attributes and needs during their first 6 years of life.
Affiliation(s)
- David F Bjorklund: Department of Psychology, Florida Atlantic University, Boca Raton, FL 33431, USA
- Sonia Agut: Departamento de Psicología, Universitat Jaume I, 12071 Castellón, Spain
4. Schreibelmayr S, Mara M. Robot Voices in Daily Life: Vocal Human-Likeness and Application Context as Determinants of User Acceptance. Front Psychol 2022; 13:787499. PMID: 35645911; PMCID: PMC9136288; DOI: 10.3389/fpsyg.2022.787499.
Abstract
The growing popularity of speech interfaces goes hand in hand with the creation of synthetic voices that sound ever more human. Previous research has been inconclusive about whether anthropomorphic design features of machines are more likely to be associated with positive user responses or, conversely, with uncanny experiences. To avoid detrimental effects of synthetic voice design, it is therefore crucial to explore what level of human realism human interactors prefer and whether their evaluations may vary across different domains of application. In a randomized laboratory experiment, 165 participants listened to one of five female-sounding robot voices, each with a different degree of human realism. We assessed how much participants anthropomorphized the voice (by subjective human-likeness ratings, a name-giving task and an imagination task), how pleasant and how eerie they found it, and to what extent they would accept its use in various domains. Additionally, participants completed Big Five personality measures and a tolerance of ambiguity scale. Our results indicate a positive relationship between human-likeness and user acceptance, with the most realistic sounding voice scoring highest in pleasantness and lowest in eeriness. Participants were also more likely to assign real human names to the voice (e.g., “Julia” instead of “T380”) if it sounded more realistic. In terms of application context, participants overall indicated lower acceptance of the use of speech interfaces in social domains (care, companionship) than in others (e.g., information & navigation), though the most human-like voice was rated significantly more acceptable in social applications than the remaining four. While most personality factors did not prove influential, openness to experience was found to moderate the relationship between voice type and user acceptance such that individuals with higher openness scores rated the most human-like voice even more positively. Study results are discussed in the light of the presented theory and in relation to open research questions in the field of synthetic voice design.
5. Hernández Blasi C, Bjorklund DF, Agut S, Lozano Nomdedeu F, Martínez MÁ. Voices as Cues to Children's Needs for Caregiving. Hum Nat 2022; 33:22-42. PMID: 34881403; PMCID: PMC8964562; DOI: 10.1007/s12110-021-09418-4.
Abstract
The aim of this study was to explore the role of voices as cues to adults of children’s needs for potential caregiving during early childhood. To this purpose, 74 college students listened to pairs of 5-year-old versus 10-year-old children verbalizing neutral-content sentences and indicated which voice was better associated with each of 14 traits, potentially meaningful in interactions between young children and adults. Results indicated that children with immature voices were perceived more positively and as being more helpless than children with mature voices. Children’s voices, regardless of the content of speech, seem to be a powerful source of information about children’s need for caregiving for parents and others during the first six years of life.
Affiliation(s)
- Sonia Agut: Departamento de Psicología, Universitat Jaume I, 12071 Castellón, Spain
6. Lavan N, Collins MRN, Miah JFM. Audiovisual identity perception from naturally-varying stimuli is driven by visual information. Br J Psychol 2021; 113:248-263. PMID: 34490897; DOI: 10.1111/bjop.12531.
Abstract
Identity perception often takes place in multimodal settings, where perceivers have access to both visual (face) and auditory (voice) information. Despite this, identity perception is usually studied in unimodal contexts, where face and voice identity perception are modelled independently from one another. In this study, we asked whether and how much auditory and visual information contribute to audiovisual identity perception from naturally-varying stimuli. In a between-subjects design, participants completed an identity sorting task with either dynamic video-only, audio-only or dynamic audiovisual stimuli. In this task, participants were asked to sort multiple, naturally-varying stimuli from three different people by perceived identity. We found that identity perception was more accurate for video-only and audiovisual stimuli compared with audio-only stimuli. Interestingly, there was no difference in accuracy between video-only and audiovisual stimuli. Auditory information nonetheless played a role alongside visual information, as audiovisual identity judgements per stimulus could be predicted from both auditory and visual identity judgements. Although the relationship was stronger for visual judgements, auditory information still uniquely explained a significant portion of the variance in audiovisual identity judgements. Our findings thus align with previous theoretical and empirical work proposing that, compared with faces, voices are an important but relatively less salient and weaker cue to identity perception. We expand on this work to show that, at least in the context of this study, having access to voices in addition to faces does not result in better identity perception accuracy.
Affiliation(s)
- Nadine Lavan, Madeleine Rose Niamh Collins, Jannatul Firdaus Monisha Miah: Department of Biological and Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, UK
7. Unimodal and cross-modal identity judgements using an audio-visual sorting task: Evidence for independent processing of faces and voices. Mem Cognit 2021; 50:216-231. PMID: 34254274; PMCID: PMC8763756; DOI: 10.3758/s13421-021-01198-7.
Abstract
Unimodal and cross-modal information provided by faces and voices contribute to identity percepts. To examine how these sources of information interact, we devised a novel audio-visual sorting task in which participants were required to group video-only and audio-only clips into two identities. In a series of three experiments, we show that unimodal face and voice sorting were more accurate than cross-modal sorting: While face sorting was consistently most accurate, followed by voice sorting, cross-modal sorting was at chance level or below. In Experiment 1, we compared performance in our novel audio-visual sorting task to a traditional identity matching task, showing that accuracy was overall moderately higher for unimodal and cross-modal sorting than for traditional identity matching. In Experiment 2, separating unimodal from cross-modal sorting led to small improvements in accuracy for unimodal sorting, but no change in cross-modal sorting performance. In Experiment 3, we explored the effect of minimal audio-visual training: Participants were shown a clip of the two identities in conversation prior to completing the sorting task. This led to small, nonsignificant improvements in accuracy for unimodal and cross-modal sorting. Our results indicate that unfamiliar face and voice perception operate relatively independently, with no evidence of mutual benefit, suggesting that extracting reliable cross-modal identity information is challenging.
8. Maguinness C, von Kriegstein K. Visual mechanisms for voice-identity recognition flexibly adjust to auditory noise level. Hum Brain Mapp 2021; 42:3963-3982. PMID: 34043249; PMCID: PMC8288083; DOI: 10.1002/hbm.25532.
Abstract
Recognising the identity of voices is a key ingredient of communication. Visual mechanisms support this ability: recognition is better for voices previously learned with their corresponding face (compared to a control condition). This so‐called ‘face‐benefit’ is supported by the fusiform face area (FFA), a region sensitive to facial form and identity. Behavioural findings indicate that the face‐benefit increases in noisy listening conditions. The neural mechanisms for this increase are unknown. Here, using functional magnetic resonance imaging, we examined responses in face‐sensitive regions while participants recognised the identity of auditory‐only speakers (previously learned by face) in high (SNR −4 dB) and low (SNR +4 dB) levels of auditory noise. We observed a face‐benefit in both noise levels, for most participants (16 of 21). In high‐noise, the recognition of face‐learned speakers engaged the right posterior superior temporal sulcus motion‐sensitive face area (pSTS‐mFA), a region implicated in the processing of dynamic facial cues. The face‐benefit in high‐noise also correlated positively with increased functional connectivity between this region and voice‐sensitive regions in the temporal lobe in the group of 16 participants with a behavioural face‐benefit. In low‐noise, the face‐benefit was robustly associated with increased responses in the FFA and to a lesser extent the right pSTS‐mFA. The findings highlight the remarkably adaptive nature of the visual network supporting voice‐identity recognition in auditory‐only listening conditions.
Affiliation(s)
- Corrina Maguinness, Katharina von Kriegstein: Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
9. Explaining face-voice matching decisions: The contribution of mouth movements, stimulus effects and response biases. Atten Percept Psychophys 2021; 83:2205-2216. PMID: 33797024; PMCID: PMC8213568; DOI: 10.3758/s13414-021-02290-5.
Abstract
Previous studies have shown that face-voice matching accuracy is more consistently above chance for dynamic (i.e. speaking) faces than for static faces. This suggests that dynamic information can play an important role in informing matching decisions. We initially asked whether this advantage for dynamic stimuli is due to shared information across modalities that is encoded in articulatory mouth movements. Participants completed a sequential face-voice matching task with (1) static images of faces, (2) dynamic videos of faces, (3) dynamic videos where only the mouth was visible, and (4) dynamic videos where the mouth was occluded, in a well-controlled stimulus set. Surprisingly, after accounting for random variation in the data due to design choices, accuracy for all four conditions was at chance. Crucially, however, exploratory analyses revealed that participants were not responding randomly, with different patterns of response biases being apparent for different conditions. Our findings suggest that face-voice identity matching may not be possible with above-chance accuracy but that analyses of response biases can shed light upon how people attempt face-voice matching. We discuss these findings with reference to the differential functional roles for faces and voices recently proposed for multimodal person perception.
10. Smith HMJ, Andrews S, Baguley TS, Colloff MF, Davis JP, White D, Rockey JC, Flowe HD. Performance of typical and superior face recognizers on a novel interactive face matching procedure. Br J Psychol 2021; 112:964-991. PMID: 33760225; DOI: 10.1111/bjop.12499.
Abstract
Unfamiliar simultaneous face matching is error prone. Reducing incorrect identification decisions would benefit forensic and security contexts. The absence of view-independent information in static images likely contributes to the difficulty of unfamiliar face matching. We tested whether a novel interactive viewing procedure, which provides the user with 3D structural information as they rotate a facial image to different orientations, would improve face matching accuracy. We tested 'typical' (Experiment 1) and 'superior' (Experiment 2) face recognizers, comparing their performance using high-quality (Experiment 3) and pixelated (Experiment 4) Facebook profile images. In each trial, participants responded whether two images featured the same person, with one of these images being either a static face, a video providing orientation information, or an interactive image. Taken together, the results show that fluid orientation information and interactivity prompt shifts in criterion and support matching performance. Because typical and superior face recognizers both benefited from the structural information provided by the novel viewing procedures, our results point to qualitatively similar reliance on pictorial encoding in these groups. This also suggests that interactive viewing tools can be valuable in assisting face matching in high-performing practitioner groups.
Affiliation(s)
- Sally Andrews, Thom S Baguley: Department of Psychology, Nottingham Trent University, UK
- Josh P Davis: School of Human Sciences, Institute of Lifecourse Development, University of Greenwich, London, UK
- David White: School of Psychology, UNSW Sydney, New South Wales, Australia
- James C Rockey: Department of Economics, University of Birmingham, Birmingham, UK
11. Daniele M, Fasoli F, Antonio R, Sulpizio S, Maass A. Gay Voice: Stable Marker of Sexual Orientation or Flexible Communication Device? Arch Sex Behav 2020; 49:2585-2600. PMID: 32617773; DOI: 10.1007/s10508-020-01771-2.
Abstract
Listeners rely on vocal features when guessing others' sexual orientation. What is less clear is whether speakers modulate their voice to emphasize or to conceal their sexual orientation. We hypothesized that gay individuals adapt their voices to the social context, either emphasizing or disguising their sexual orientation. In Study 1 (n = 20 speakers, n = 383 Italian listeners and n = 373 British listeners), using a simulated conversation paradigm, we found that gay speakers modulated their voices depending on the interlocutor, sounding more gay when speaking to a person with whom they have had an easy (vs. difficult or no) coming out. Although straight speakers were always clearly perceived as heterosexual, their voice perception also varied depending on the interlocutor. Study 2 (n = 14 speakers and n = 309 listeners), comparing the voices of young YouTubers before and after their public coming out, showed a voice modulation as a function of coming out. The voices of gay YouTubers sounded more gay after coming out, whereas those of age-matched straight control male speakers sounded increasingly heterosexual over time. Combining experimental and archival methods, this research suggests that gay speakers modulate their voices flexibly depending on their relation with the interlocutor and as a consequence of their public coming out.
Affiliation(s)
- Maddalena Daniele, Anne Maass: Dipartimento di Psicologia dello Sviluppo e della Socializzazione, University of Padova, Via Venezia 8, 35122 Padua, Italy
- Fabio Fasoli: School of Psychology, University of Surrey, Guildford, UK; Centro de Investigação e Intervenção Social, Instituto Universitário de Lisboa (ISCTE-IUL), Lisbon, Portugal
- Raquel Antonio: Centro de Investigação e Intervenção Social, Instituto Universitário de Lisboa (ISCTE-IUL), Lisbon, Portugal
- Simone Sulpizio: Department of Psychology, University of Milano-Bicocca, Milan, Italy
12. Face-voice space: Integrating visual and auditory cues in judgments of person distinctiveness. Atten Percept Psychophys 2020; 82:3710-3727. PMID: 32696231; DOI: 10.3758/s13414-020-02084-1.
Abstract
Faces and voices each convey multiple cues enabling us to tell people apart. Research on face and voice distinctiveness commonly utilizes multidimensional space to represent these complex, perceptual abilities. We extend this framework to examine how a combined face-voice space would relate to its constituent face and voice spaces. Participants rated videos of speakers for their dissimilarity in face only, voice only, and face-voice together conditions. Multidimensional scaling (MDS) and regression analyses showed that whereas face-voice space more closely resembled face space, indicating visual dominance, face-voice distinctiveness was best characterized by a multiplicative integration of face-only and voice-only distinctiveness, indicating that auditory and visual cues are used interactively in person-distinctiveness judgments. Further, the multiplicative integration could not be explained by the small correlation found between face-only and voice-only distinctiveness. As an exploratory analysis, we next identified auditory and visual features that correlated with the dimensions in the MDS solutions. Features pertaining to facial width, lip movement, spectral centroid, fundamental frequency, and loudness variation were identified as important features in face-voice space. We discuss the implications of our findings in terms of person perception, recognition, and face-voice matching abilities.
13. Huestegge SM. Matching Unfamiliar Voices to Static and Dynamic Faces: No Evidence for a Dynamic Face Advantage in a Simultaneous Presentation Paradigm. Front Psychol 2019; 10:1957. PMID: 31507500; PMCID: PMC6716535; DOI: 10.3389/fpsyg.2019.01957.
Abstract
Previous research has demonstrated that humans are able to match unfamiliar voices to corresponding faces and vice versa. It has been suggested that this matching ability might be based on common underlying factors that have a characteristic impact on both faces and voices. Some researchers have additionally assumed that dynamic facial information might be especially relevant to successfully match faces to voices. In the present study, static and dynamic face-voice matching ability was compared in a simultaneous presentation paradigm. Additionally, a procedure (matching additionally supported by incidental association learning) was implemented which allowed for reliably excluding participants that did not pay sufficient attention to the task. A comparison of performance between static and dynamic face-voice matching suggested a lack of substantial differences in matching ability, suggesting that dynamic (as opposed to mere static) facial information does not contribute meaningfully to face-voice matching performance. Importantly, this conclusion was not merely derived from the lack of a statistically significant group difference in matching performance (which could principally be explained by assuming low statistical power), but from a Bayesian analysis as well as from an analysis of the 95% confidence interval (CI) of the actual effect size. The extreme border of this CI suggested a maximally plausible dynamic face advantage of less than four percentage points, which was considered far too low to indicate any theoretically meaningful dynamic face advantage. Implications regarding the underlying mechanisms of face-voice matching are discussed.
Affiliation(s)
- Sujata M. Huestegge: Department of Special Education and Speech-Language Pathology, University of Würzburg, Würzburg, Germany; Institute of Voice and Performing Arts, University of Music and Performing Arts Munich, Munich, Germany
14. Mileva M, Tompkinson J, Watt D, Burton AM. The Role of Face and Voice Cues in Predicting the Outcome of Student Representative Elections. Pers Soc Psychol Bull 2019; 46:617-625. PMID: 31409219; DOI: 10.1177/0146167219867965.
Abstract
First impressions formed after seeing someone's face or hearing their voice can affect many social decisions, including voting in political elections. Despite the many studies investigating the independent contribution of face and voice cues to electoral success, their integration is still not well understood. Here, we examine a novel electoral context, student representative ballots, allowing us to test the generalizability of previous studies. We also examine the independent contributions of visual, auditory, and audiovisual information to social judgments of the candidates, and their relationship to election outcomes. Results showed that perceived trustworthiness was the only trait significantly related to election success. These findings contrast with previous reports on the importance of perceived competence using audio or visual cues only in the context of national political elections. The present study highlights the role of real-world context and emphasizes the importance of using ecologically valid stimulus presentation in understanding real-life social judgment.
15.
Abstract
Visual cues facilitate speech perception during face-to-face communication, particularly in noisy environments. These visual-driven enhancements arise from both automatic lip-reading behaviors and attentional tuning to auditory-visual signals. However, in crowded settings, such as a cocktail party, how do we accurately bind the correct voice to the correct face, enabling the benefit of visual cues on speech perception processes? Previous research has emphasized that spatial and temporal alignment of the auditory-visual signals determines which voice is integrated with which speaking face. Here, we present a novel illusion demonstrating that when multiple faces and voices are presented in the presence of ambiguous temporal and spatial information as to which pairs of auditory-visual signals should be integrated, our perceptual system relies on identity information extracted from each signal to determine pairings. Data from three experiments demonstrate that expectations about an individual’s voice (based on their identity) can change where individuals perceive that voice to arise from.
Affiliation(s)
- David Brang
- Department of Psychology, University of Michigan, Ann Arbor, MI, USA
|
16
|
Sorokowska A, Oleszkiewicz A. Body-odor based assessments of sex and personality - Non-significant differences between blind and sighted odor raters. Physiol Behav 2019; 210:112573. [PMID: 31248615] [DOI: 10.1016/j.physbeh.2019.112573]
Abstract
People exhibit different sensitivity to the signaling properties of body odors in the social context. Here, we aimed to investigate whether visual status modulates sensitivity to socially relevant cues carried by body odors and whether it affects psychophysical ratings of such smells. We compared the abilities of 19 early-blind, 9 late-blind, and 13 sighted people to accurately assess the sex, neuroticism, and dominance of odor donors based on body odor samples. We showed that early-blind, late-blind, and sighted people do not differ in the accuracy of sex and personality assessments based on body odor samples. Additionally, the three participating groups perceived the presented body odor samples as similarly intense, pleasant, and attractive. We discuss our findings in the context of interpersonal olfactory communication and olfactory compensation.
Affiliation(s)
- Agnieszka Sorokowska
- Smell and Taste Research Lab, Institute of Psychology, University of Wroclaw, pl. Dawida 1, 50-527 Wroclaw, Poland; Department of Psychotherapy and Psychosomatic Medicine, TU Dresden, Fetscherstr. 74, 01307 Dresden, Germany.
- Anna Oleszkiewicz
- Smell and Taste Research Lab, Institute of Psychology, University of Wroclaw, pl. Dawida 1, 50-527 Wroclaw, Poland; Smell & Taste Clinic, Department of Otorhinolaryngology, TU Dresden, Fetscherstr. 74, 01307 Dresden, Germany
|
17
|
Hou J, Ye Z. Sex Differences in Facial and Vocal Attractiveness Among College Students in China. Front Psychol 2019; 10:1166. [PMID: 31178792] [PMCID: PMC6538682] [DOI: 10.3389/fpsyg.2019.01166]
Abstract
This study aims to investigate sex differences in ratings of facial attractiveness (FA) and vocal attractiveness (VA). Participants (60 undergraduates in Study 1 and 111 undergraduates in Study 2) rated the attractiveness of computerized face images and voice recordings of men and women. In Study 1, face images and voice recordings were presented separately. Results indicated that men generally rated voice recordings of women as more attractive than those of men, whereas women did not show different attractiveness ratings for voices of men vs. women. In Study 2, face images and voice recordings were paired as multimodal stimuli and presented simultaneously. Results indicated that men rated multimodal stimuli of women as more attractive than those of men, whereas women did not differentiate multimodal stimuli of men vs. women. We found that, compared to VA, FA had a stronger influence on participants' overall evaluations. Finally, we tested the difference between "original multimodal stimuli" (OMS) and "non-original multimodal stimuli" (non-OMS) and found an "OMS-facilitating effect." Taken together, the findings indicated some sex differences in FA and VA, which could be used to interpret behaviors of sexual selection, human mate preferences, and the design and popularization of sex robots.
Affiliation(s)
- Zi Ye
- Department of Philosophy, Anhui University, Hefei, China
|
18
|
Pereira KJ, Varella MAC, Kleisner K, Pavlovič O, Valentova JV. Positive association between facial and vocal femininity/masculinity in women but not in men. Behav Processes 2019; 164:25-29. [PMID: 31002841] [DOI: 10.1016/j.beproc.2019.04.010]
Abstract
Multicomponent stimuli improve information reception. In women, perceived facial and vocal femininity-masculinity (FM) are concordant; however, mixed results are found for men. Some feminine and masculine traits are related to sex hormone action and can indicate reproductive qualities. However, most of the current research on human mate choice focuses on isolated indicators, especially visual assessment of faces. We therefore examined the cross-modal concordance hypothesis by testing correlations between perceptions of FM based on facial, vocal, and behavioral stimuli. Standardized facial pictures, vocal recordings, and dance videos of 38 men and 41 women, aged 18-35 years, were rated by 21 male and 43 female students, aged 18-35 years, on a 100-point scale (0 = very feminine; 100 = very masculine). All participants were Brazilian students from the University of Sao Paulo. In women, facial and vocal FM correlated positively, suggesting concordant information about mate quality. Such results were not found in men, indicating multiple messages, which agrees with women's multifaceted preference for male FM. In both sexes, FM of dance did not correlate with voices or faces, indicating different information and distinct processes of development. We thus partially supported the cross-modal concordance hypothesis.
Affiliation(s)
- Kamila Janaina Pereira
- Department of Experimental Psychology, Institute of Psychology, University of Sao Paulo, Sao Paulo, Brazil.
- Karel Kleisner
- Department of Philosophy and History of Science, Faculty of Science, Charles University, Prague, Czech Republic
- Ondřej Pavlovič
- Department of Philosophy and History of Science, Faculty of Science, Charles University, Prague, Czech Republic
|
19
|
Jesse A, Bartoli M. Learning to recognize unfamiliar talkers: Listeners rapidly form representations of facial dynamic signatures. Cognition 2018; 176:195-208. [DOI: 10.1016/j.cognition.2018.03.018]
|
20
|
Sorokowska A, Oleszkiewicz A, Sorokowski P. A Compensatory Effect on Mate Selection? Importance of Auditory, Olfactory, and Tactile Cues in Partner Choice among Blind and Sighted Individuals. Arch Sex Behav 2018; 47:597-603. [PMID: 29396613] [PMCID: PMC5834579] [DOI: 10.1007/s10508-018-1156-0]
Abstract
Human attractiveness is a potent social variable, and people assess their potential partners based on input from a range of sensory modalities. Among all sensory cues, visual signals are typically considered to be the most important and most salient source of information. However, it remains unclear how people without sight assess others. In the current study, we explored the relative importance of sensory modalities other than vision (smell, touch, and audition) in the assessment of same- and opposite-sex strangers. We specifically focused on possible sensory compensation in mate selection, defined as enhanced importance of modalities other than vision among blind individuals in their choice of potential partners. Data were obtained from a total of 119 participants, of whom 78 were blind people aged between 16 and 65 years (M = 42.4, SD = 12.6; 38 females) and a control sample of 41 sighted people aged between 20 and 64. As hypothesized, we observed a compensatory effect of blindness on auditory perception. Our data indicate that visual impairment increases the importance of audition in different types of social assessments for both sexes and in mate choice for blind men.
Affiliation(s)
- Agnieszka Sorokowska
- Smell and Taste Clinic, Department of Otorhinolaryngology, TU Dresden, Fetscherstr. 74, Haus 5, Keller, 01307, Dresden, Germany.
- Institute of Psychology, University of Wroclaw, Wrocław, Poland.
- Anna Oleszkiewicz
- Smell and Taste Clinic, Department of Otorhinolaryngology, TU Dresden, Fetscherstr. 74, Haus 5, Keller, 01307, Dresden, Germany
- Institute of Psychology, University of Wroclaw, Wrocław, Poland
|
21
|
Bovet J, Barkat-Defradas M, Durand V, Faurie C, Raymond M. Women's attractiveness is linked to expected age at menopause. J Evol Biol 2017; 31:229-238. [PMID: 29178517] [DOI: 10.1111/jeb.13214]
Abstract
A great number of studies have shown that features linked to immediate fertility explain a large part of the variance in female attractiveness. This is consistent with an evolutionary perspective, as men are expected to prefer females at the age at which fertility peaks (at least for short-term relationships) in order to increase their reproductive success. However, for long-term relationships, a high residual reproductive value (the expected future reproductive output, linked to age at menopause) becomes relevant as well. In that case, young age and late menopause are expected to be preferred by men. However, the extent to which facial features provide cues to the likely age at menopause has never been investigated. Here, we show that expected age at menopause is linked to facial attractiveness of young women. As age at menopause is heritable, we used the mother's age at menopause as a proxy for her daughter's expected age at menopause. We found that men judged faces of women with a later expected age at menopause as more attractive than those of women with an earlier expected age at menopause. This result holds when age, cues of immediate fertility, and facial ageing are controlled for. Additionally, we found that the expected age at menopause was not correlated with any of the other variables considered (including immediate fertility cues and facial ageing). Our results show the existence of a new correlate of women's facial attractiveness, expected age at menopause, which is independent of immediate fertility cues and facial ageing.
Affiliation(s)
- J Bovet
- Institute for Advanced Study in Toulouse, Toulouse, France
- M Barkat-Defradas
- Institut des sciences de l'évolution de Montpellier, CNRS, UMR 5554 - IRD - EPHE - Université de Montpellier, Montpellier, France
- V Durand
- Institut des sciences de l'évolution de Montpellier, CNRS, UMR 5554 - IRD - EPHE - Université de Montpellier, Montpellier, France
- C Faurie
- Institut des sciences de l'évolution de Montpellier, CNRS, UMR 5554 - IRD - EPHE - Université de Montpellier, Montpellier, France
- M Raymond
- Institut des sciences de l'évolution de Montpellier, CNRS, UMR 5554 - IRD - EPHE - Université de Montpellier, Montpellier, France
|
22
|
Smith KM, Olkhov YM, Puts DA, Apicella CL. Hadza Men With Lower Voice Pitch Have a Better Hunting Reputation. Evol Psychol 2017; 15:1474704917740466. [PMID: 29179581] [PMCID: PMC10481060] [DOI: 10.1177/1474704917740466]
Abstract
Previous research with hunter-gatherers has found that women perceive men with voices manipulated to be lower in pitch to be better hunters, and men perceive women with lower pitch to be better gatherers. Here, we test if actual voice pitch is associated with hunting and gathering reputations in men and women, respectively. We find that voice pitch does relate to foraging reputation in men, but not in women, with better hunters having a lower voice pitch. In addition, we find that the previously documented relationship between voice pitch and reproductive success no longer holds when controlling for hunting reputation, but hunting reputation remains a significant predictor of reproductive success when controlling for voice pitch. This raises the possibility that voice pitch is being selected for in hunter-gatherers because of the relationship between voice pitch and hunting reputation.
Affiliation(s)
- David A. Puts
- Pennsylvania State University, University Park, PA, USA
|
23
|
Fasoli F, Maass A, Sulpizio S. Stereotypical Disease Inferences From Gay/Lesbian Versus Heterosexual Voices. J Homosex 2017; 65:990-1014. [PMID: 28841093] [DOI: 10.1080/00918369.2017.1364945]
Abstract
Voice is a cue used to categorize speakers as members of social groups, including groups defined by sexual orientation. We investigate the consequences of such voice-based categorization, showing that people infer stereotype-congruent disease likelihood on the basis of vocal information alone, without explicit information about the speaker's sexual orientation. Study 1 and Study 2 reveal that participants attribute diseases to gay/lesbian and heterosexual men and women in line with stereotypes: gay speakers were more likely to be associated with gay and female diseases, and lesbian speakers with male diseases. These findings demonstrate that the likelihood of suffering from diseases is erroneously, but stereotypically, inferred from targets' vocal information.
Affiliation(s)
- Fabio Fasoli
- School of Psychology, University of Surrey, Guildford, UK
- Anne Maass
- Department of Developmental Psychology and Socialization, University of Padua, Padova, Italy
- Simone Sulpizio
- Faculty of Psychology, Vita-Salute San Raffaele University, Milan, Italy
|
24
|
Maguinness C, von Kriegstein K. Cross-modal processing of voices and faces in developmental prosopagnosia and developmental phonagnosia. Vis Cogn 2017. [DOI: 10.1080/13506285.2017.1313347]
Affiliation(s)
- Corrina Maguinness
- Max Planck Research Group Neural Mechanisms of Human Communication, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Katharina von Kriegstein
- Max Planck Research Group Neural Mechanisms of Human Communication, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Psychology, Humboldt University of Berlin, Berlin, Germany
|
25
|
Oleszkiewicz A, Pisanski K, Sorokowska A. Does blindness influence trust? A comparative study on social trust among blind and sighted adults. Pers Individ Dif 2017. [DOI: 10.1016/j.paid.2017.02.031]
|
26
|
Han C, Kandrik M, Hahn AC, Fisher CI, Feinberg DR, Holzleitner IJ, DeBruine LM, Jones BC. Interrelationships Among Men's Threat Potential, Facial Dominance, and Vocal Dominance. Evol Psychol 2017; 15:1474704917697332. [DOI: 10.1177/1474704917697332]
Abstract
The benefits of minimizing the costs of engaging in violent conflict are thought to have shaped adaptations for the rapid assessment of others’ capacity to inflict physical harm. Although studies have suggested that men’s faces and voices both contain information about their threat potential, one recent study suggested that men’s faces are a more valid cue of their threat potential than their voices are. Consequently, the current study investigated the interrelationships among a composite measure of men’s actual threat potential (derived from the measures of their upper-body strength, height, and weight) and composite measures of these men’s perceived facial and vocal threat potential (derived from dominance, strength, and weight ratings of their faces and voices, respectively). Although men’s perceived facial and vocal threat potential were positively correlated, men’s actual threat potential was related to their perceived facial, but not vocal, threat potential. These results present new evidence that men’s faces may be a more valid cue of these aspects of threat potential than their voices are.
Affiliation(s)
- Chengyang Han
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland, UK
- Michal Kandrik
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland, UK
- Amanda C. Hahn
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland, UK
- Department of Psychology, Humboldt State University, Arcata, CA, USA
- Claire I. Fisher
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland, UK
- David R. Feinberg
- Department of Psychology, Neuroscience, and Behaviour, McMaster University, Hamilton, Ontario, Canada
- Iris J. Holzleitner
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland, UK
- Lisa M. DeBruine
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland, UK
- Benedict C. Jones
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland, UK
|
27
|
Bülthoff I, Newell FN. Crossmodal priming of unfamiliar faces supports early interactions between voices and faces in person perception. Vis Cogn 2017. [DOI: 10.1080/13506285.2017.1290729]
Affiliation(s)
- Fiona N. Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin 2, Ireland
|
28
|
Stevenage SV, Hamlin I, Ford B. Distinctiveness helps when matching static faces and voices. J Cogn Psychol 2016. [DOI: 10.1080/20445911.2016.1272605]
Affiliation(s)
- Sarah V. Stevenage
- Department of Psychology, University of Southampton, Highfield, Southampton, UK
- Iain Hamlin
- Department of Psychology, University of Southampton, Highfield, Southampton, UK
- Becky Ford
- Department of Psychology, University of Southampton, Highfield, Southampton, UK
|
29
|
Valentova JV, Varella MAC, Havlíček J, Kleisner K. Positive association between vocal and facial attractiveness in women but not in men: A cross-cultural study. Behav Processes 2016; 135:95-100. [PMID: 27986472] [DOI: 10.1016/j.beproc.2016.12.005]
Abstract
Various species use multiple sensory modalities in their communication processes. In humans, female facial appearance and vocal display are correlated, and it has been suggested that they serve as redundant markers indicating the bearer's reproductive potential and/or residual fertility. In men, evidence for redundancy of facial and vocal attractiveness is ambiguous. We tested the redundancy/multiple signals hypothesis by correlating perceived facial and vocal attractiveness in men and women from two different populations, Brazil and the Czech Republic. We also investigated whether facial and vocal attractiveness are linked to facial morphology. Standardized facial pictures and vocal samples of 86 women (47 from Brazil) and 81 men (41 from Brazil), aged 18-35, were rated for attractiveness by opposite-sex raters. Facial and vocal attractiveness were found to positively correlate in women but not in men. We further applied geometric morphometrics and regressed facial shape coordinates on facial and vocal attractiveness ratings. In women, facial shape was linked to facial attractiveness, but there was no association between facial shape and vocal attractiveness. In men, none of these associations was significant. Having shown that women with more attractive faces also possess more attractive voices, we thus only partly supported the redundant signal hypothesis.
Affiliation(s)
- Jan Havlíček
- Department of Zoology, Faculty of Science, Charles University, Prague, Czechia
- Karel Kleisner
- Department of Philosophy and History of Science, Faculty of Science, Charles University, Prague, Czechia
|
30
|
Smith HMJ, Dunn AK, Baguley T, Stacey PC. The effect of inserting an inter-stimulus interval in face-voice matching tasks. Q J Exp Psychol (Hove) 2016; 71:424-434. [PMID: 27784196] [DOI: 10.1080/17470218.2016.1253758]
Abstract
Voices and static faces can be matched for identity above chance level. No previous face-voice matching experiments have included an inter-stimulus interval (ISI) exceeding 1 s. We tested whether accurate identity decisions rely on high-quality perceptual representations temporarily stored in sensory memory, and therefore whether the ability to make accurate matching decisions diminishes as the ISI increases. In each trial, participants had to decide whether an unfamiliar face and voice belonged to the same person. The face and voice stimuli were presented simultaneously in Experiment 1, with a 5-s ISI in Experiment 2, and with a 10-s ISI in Experiment 3. The results, analysed using multilevel modelling, revealed that static face-voice matching was significantly above chance level only when the stimuli were presented simultaneously (Experiment 1). The overall bias to respond "same identity" weakened as the interval increased, suggesting that this bias is explained by temporal contiguity. Taken together, the findings highlight that face-voice matching performance relies on comparing fast-decaying, high-quality perceptual representations. The results are discussed in terms of social functioning.
Affiliation(s)
- Andrew K Dunn
- Psychology Division, Nottingham Trent University, Nottingham, UK
- Thom Baguley
- Psychology Division, Nottingham Trent University, Nottingham, UK
- Paula C Stacey
- Psychology Division, Nottingham Trent University, Nottingham, UK
|