1
Cao Z, Wang Y, Wu L, Xie Y, Shi Z, Zhong Y, Wang Y. Reexamining the Kuleshov effect: Behavioral and neural evidence from authentic film experiments. PLoS One 2024;19:e0308295. [PMID: 39102395] [PMCID: PMC11299807] [DOI: 10.1371/journal.pone.0308295]
Abstract
Film cognition explores the influence of cinematic elements, such as editing and film color, on viewers' perception. The Kuleshov effect, a famous example of how editing shapes viewers' emotional perception, was initially proposed to support montage theory through the Kuleshov experiment. This effect, since recognized as a manifestation of point-of-view (POV) editing practices, posits that the emotional interpretation of a neutral facial expression is influenced by the accompanying emotional scene in a face-scene-face sequence. However, concerns persist regarding the validity of previous studies, which often employed inauthentic film materials such as static images, leaving unanswered the question of whether the effect exists in authentic films. This study addresses these concerns by utilizing authentic films in two experiments. In Experiment 1, multiple film clips were shot under the guidance of a professional film director and seamlessly integrated into authentic film sequences. Fifty-nine participants viewed these face-scene-face film sequences and rated the valence and emotional intensity of the neutral faces. The accompanying fearful or happy scenes significantly influenced the interpretation of emotion on the neutral faces, eliciting perceptions of negative or positive emotion, respectively. These results affirm the existence of the Kuleshov effect in authentic films. In Experiment 2, 31 participants rated the valence and arousal of neutral faces while undergoing functional magnetic resonance imaging (fMRI). The behavioral results confirm the Kuleshov effect in the MRI scanner, while the neural data identify correlates that support its existence at the neural level: the cuneus, precuneus, hippocampus, parahippocampal gyrus, posterior cingulate gyrus, orbitofrontal cortex, fusiform gyrus, and insula. These findings also underscore the contextual framing inherent in the Kuleshov effect.
Overall, the study integrates film theory and cognitive neuroscience experiments, providing robust evidence for the Kuleshov effect through both subjective ratings and objective neuroimaging measurements. The research also contributes to a deeper understanding of the impact of film editing on viewers' emotional perception from the perspectives of contemporary POV editing practice and neurocinematics, advancing knowledge of film cognition.
Affiliation(s)
- Zhengcao Cao: School of Arts and Communication, Beijing Normal University, Beijing, China; State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Yashu Wang: School of Arts and Communication, Beijing Normal University, Beijing, China
- Liangyu Wu: School of Arts and Communication, Beijing Normal University, Beijing, China
- Yapei Xie: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Zhichen Shi: School of Arts and Communication, Beijing Normal University, Beijing, China
- Yiren Zhong: School of Arts and Communication, Beijing Normal University, Beijing, China
- Yiwen Wang: School of Arts and Communication, Beijing Normal University, Beijing, China
2
Goel S, Jara-Ettinger J, Ong DC, Gendron M. Face and context integration in emotion inference is limited and variable across categories and individuals. Nat Commun 2024;15:2443. [PMID: 38499519] [PMCID: PMC10948792] [DOI: 10.1038/s41467-024-46670-5]
Abstract
The ability to make nuanced inferences about other people's emotional states is central to social functioning. While emotion inferences can be sensitive to both facial movements and the situational context in which they occur, relatively little is understood about when these two sources of information are integrated across emotion categories and individuals. In a series of studies, we use one archival and five empirical datasets to demonstrate that although people may integrate the two cues, emotion inferences are just as well (and sometimes better) captured by knowledge of the situation alone, while isolated facial cues are insufficient. Further, people integrate facial cues more for categories for which they most frequently encounter facial expressions in everyday life (e.g., happiness). People are also moderately stable over time in their reliance on situational cues and their integration of cues, and those who reliably utilize situational cues more also have better situated emotion knowledge. These findings underscore the importance of studying variability in reliance on and integration of cues.
Affiliation(s)
- Srishti Goel: Department of Psychology, Yale University, 100 College St, New Haven, CT, USA
- Julian Jara-Ettinger: Department of Psychology, Yale University, 100 College St, New Haven, CT, USA; Wu Tsai Institute, Yale University, 100 College St, New Haven, CT, USA
- Desmond C Ong: Department of Psychology, The University of Texas at Austin, 108 E Dean Keeton St, Austin, TX, USA
- Maria Gendron: Department of Psychology, Yale University, 100 College St, New Haven, CT, USA
3
Qiao-Tasserit E, Corradi-Dell'Acqua C, Vuilleumier P. Influence of transient emotional episodes on affective and cognitive theory of mind. Soc Cogn Affect Neurosci 2024;19:nsae016. [PMID: 38442706] [PMCID: PMC10914405] [DOI: 10.1093/scan/nsae016]
Abstract
Our emotions may influence how we interact with others. Previous studies have shown an important role of emotion induction in generating empathic reactions towards others' affect. However, it remains unclear whether (and to what extent) our own emotions can influence the ability to infer other people's mental states, a process associated with Theory of Mind (ToM) and implicated in the representation of both cognitive (e.g., beliefs and intentions) and affective conditions. We engaged 59 participants in two emotion-induction experiments where they saw joyful, neutral and fearful clips. Subsequently, they were asked to infer other individuals' joy, fear (affective ToM) or beliefs (cognitive ToM) from verbal scenarios. Using functional magnetic resonance imaging, we found that brain activity in the superior temporal gyrus, precuneus and sensorimotor cortices was modulated by the preceding emotional induction, with a lower response when the to-be-inferred emotion was incongruent with the one induced in the observer (affective ToM). In contrast, we found no effect of emotion induction on the appraisal of people's beliefs (cognitive ToM). These findings are consistent with embodied accounts of affective ToM, whereby our own emotions alter the engagement of key brain regions for social cognition, depending on the compatibility between one's own and others' affect.
Affiliation(s)
- Emilie Qiao-Tasserit: Laboratory of Behavioural Neurology and Imaging of Cognition, Department of Neuroscience, University Medical Center, University of Geneva, Geneva CH-1206, Switzerland; Geneva Neuroscience Center, University of Geneva, Geneva CH-1206, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva CH-1209, Switzerland
- Corrado Corradi-Dell'Acqua: Geneva Neuroscience Center, University of Geneva, Geneva CH-1206, Switzerland; Theory of Pain Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences (FPSE), University of Geneva, Geneva CH-1211, Switzerland; Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto IT-38068, Italy
- Patrik Vuilleumier: Laboratory of Behavioural Neurology and Imaging of Cognition, Department of Neuroscience, University Medical Center, University of Geneva, Geneva CH-1206, Switzerland; Geneva Neuroscience Center, University of Geneva, Geneva CH-1206, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva CH-1209, Switzerland
4
Drew A, Soto-Faraco S. Perceptual oddities: assessing the relationship between film editing and prediction processes. Philos Trans R Soc Lond B Biol Sci 2024;379:20220426. [PMID: 38104604] [PMCID: PMC10725757] [DOI: 10.1098/rstb.2022.0426]
Abstract
During film viewing, humans parse sequences of individual shots into larger narrative structures, often weaving transitions at edit points into an apparently seamless and continuous flow. Editing helps filmmakers manipulate visual transitions to induce feelings of fluency/disfluency, tension/relief, curiosity, expectation and several emotional responses. We propose that the perceptual dynamics induced by film editing can be captured by a predictive processing (PP) framework. We hypothesise that visual discontinuities at edit points produce discrepancies between anticipated and actual sensory input, leading to prediction error. Further, we propose that the magnitude of prediction error depends on the predictability of each shot within the narrative flow, and lay out an account based on conflict monitoring. We test this hypothesis in two empirical studies measuring electroencephalography (EEG) during passive viewing of film excerpts, as well as behavioural responses during an active edit detection task. We report neural and behavioural modulations at editing boundaries across three levels of narrative depth, with greater modulations for edits spanning less predictable, deeper narrative transitions. Overall, our contribution lays the groundwork for understanding film editing from a PP perspective. This article is part of the theme issue 'Art, aesthetics and predictive processing: theoretical and empirical perspectives'.
Affiliation(s)
- Alice Drew: Multisensory Research Group, Centre for Brain and Cognition, Universitat Pompeu Fabra, Carrer de Ramon Trias Fargas, 25-27, 08005 Barcelona, Spain
- Salvador Soto-Faraco: Multisensory Research Group, Centre for Brain and Cognition, Universitat Pompeu Fabra, Carrer de Ramon Trias Fargas, 25-27, 08005 Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), 08010 Barcelona, Spain
5
Li Z, Lu H, Liu D, Yu ANC, Gendron M. Emotional event perception is related to lexical complexity and emotion knowledge. Commun Psychol 2023;1:45. [PMID: 39242918] [PMCID: PMC11332234] [DOI: 10.1038/s44271-023-00039-4]
Abstract
Inferring emotion is a critical skill that supports social functioning. Emotion inferences are typically studied in simplistic paradigms by asking people to categorize isolated and static cues like frowning faces. Yet emotions are complex events that unfold over time. Here, across three samples (Study 1 N = 222; Study 2 N = 261; Study 3 N = 101), we present the Emotion Segmentation Paradigm to examine inferences about complex emotional events, extending cognitive paradigms of event perception. Participants were asked to indicate when the emotions of target individuals changed within continuous streams of activity in narrative film (Study 1) and documentary clips (Study 2, preregistered, and the Study 3 test-retest sample). The Emotion Segmentation Paradigm revealed robust and reliable individual differences across multiple metrics. We also tested the constructionist prediction that emotion labels constrain emotion inference, a prediction traditionally studied by introducing emotion labels, and demonstrate that individual differences in active emotion vocabulary (i.e., readily accessible emotion words) correlate with emotion segmentation performance.
Affiliation(s)
- Zhimeng Li: Department of Psychology, Yale University, New Haven, CT, USA
- Hanxiao Lu: Department of Psychology, New York University, New York, NY, USA
- Di Liu: Department of Psychology, Johns Hopkins University, Baltimore, MD, USA
- Alessandra N C Yu: Nash Family Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Maria Gendron: Department of Psychology, Yale University, New Haven, CT, USA
6
Vaioli G, Bastoni I, Villa V, Mendolicchio L, Castelnuovo G, Mauro A, Scarpina F. "I cannot see your fear!" Altered recognition of fearful facial expressions in anorexia nervosa. Front Psychol 2023;14:1280719. [PMID: 38125860] [PMCID: PMC10732310] [DOI: 10.3389/fpsyg.2023.1280719]
Abstract
Background: The evidence about facial emotion recognition in anorexia nervosa, as well as the role of alexithymic traits in this emotional ability, is conflicting and heterogeneous. Objective: We assessed the capability to recognize facial expressions of two primary emotions, fear and anger, in the context of anorexia nervosa. Methods: Women affected by anorexia nervosa were compared with healthy-weight women on a well-established implicit facial emotion recognition task. Both reaction time and accuracy were computed, and individual levels of alexithymia were assessed through a standard self-report questionnaire. Results: Participants with anorexia nervosa showed significantly lower performance in terms of reaction time and accuracy when the emotion of fear, but not anger, was the target. Notably, this alteration was linked to the levels of alexithymia reported in the self-report questionnaire. Conclusion: In anorexia nervosa, difficulties in processing fearful (but not angry) facial expressions may be observed, linked to higher levels of alexithymic traits. We suggest future research in which emotional processing is investigated taking into account the role of the bodily dimensions of emotional awareness.
Affiliation(s)
- Giulia Vaioli: I.R.C.C.S. Istituto Auxologico Italiano, U.O. di Neurologia e Neuroriabilitazione, Ospedale San Giuseppe, Piancavallo, Italy
- Ilaria Bastoni: I.R.C.C.S. Istituto Auxologico Italiano, Laboratorio di Psicologia, Ospedale San Giuseppe, Piancavallo, Italy
- Valentina Villa: I.R.C.C.S. Istituto Auxologico Italiano, Laboratorio di Psicologia, Ospedale San Giuseppe, Piancavallo, Italy
- Leonardo Mendolicchio: I.R.C.C.S. Istituto Auxologico Italiano, U.O. dei Disturbi del Comportamento Alimentare, Ospedale San Giuseppe, Piancavallo, Italy
- Gianluca Castelnuovo: I.R.C.C.S. Istituto Auxologico Italiano, Laboratorio di Psicologia, Ospedale San Giuseppe, Piancavallo, Italy; Psychology Department, Università Cattolica del Sacro Cuore, Milan, Italy
- Alessandro Mauro: I.R.C.C.S. Istituto Auxologico Italiano, U.O. di Neurologia e Neuroriabilitazione, Ospedale San Giuseppe, Piancavallo, Italy; "Rita Levi Montalcini" Department of Neurosciences, University of Turin, Turin, Italy
- Federica Scarpina: I.R.C.C.S. Istituto Auxologico Italiano, U.O. di Neurologia e Neuroriabilitazione, Ospedale San Giuseppe, Piancavallo, Italy; "Rita Levi Montalcini" Department of Neurosciences, University of Turin, Turin, Italy
7
Kappas A, Gratch J. These Aren't The Droids You Are Looking for: Promises and Challenges for the Intersection of Affective Science and Robotics/AI. Affect Sci 2023;4:580-585. [PMID: 37744970] [PMCID: PMC10514249] [DOI: 10.1007/s42761-023-00211-3]
Abstract
AI research focused on interactions with humans, particularly in the form of robots or virtual agents, has expanded in the last two decades to include concepts related to affective processes. Affective computing is an emerging field that deals with issues such as how diagnosing the affective states of users can be used to improve such interactions, also with a view to demonstrating affective behavior towards the user. This type of research is often based on two beliefs: (1) artificial emotional intelligence will improve human-computer interaction (or, more specifically, human-robot interaction), and (2) we understand the role of affective behavior in human interaction sufficiently well to tell artificial systems what to do. However, within affective science the focus of research is often to test a particular assumption, such as "smiles affect liking." Such a focus does not provide the information necessary to synthesize affective behavior in long, dynamic, real-time interactions. In consequence, theories do not play a large role in the development of artificial affective systems by engineers; instead, self-learning systems develop their behavior from large corpora of recorded interactions. The status quo is characterized by measurement issues, theoretical lacunae regarding the prevalence and functions of affective behavior in interaction, and underpowered studies that cannot provide a solid empirical foundation for further theoretical development. This contribution highlights some of these challenges and points towards next steps to create a rapprochement between engineers and affective scientists, with a view to improving both theory and applications.
Affiliation(s)
- Arvid Kappas: Constructor University, Campus Ring 1, 28759 Bremen, Germany
- Jonathan Gratch: Institute for Creative Technologies, University of Southern California, Los Angeles, CA, USA
8
Sanz-Aznar J, Bruni LE, Soto-Faraco S. Cinematographic continuity edits across shot scales and camera angles: an ERP analysis. Front Neurosci 2023;17:1173704. [PMID: 37521689] [PMCID: PMC10375706] [DOI: 10.3389/fnins.2023.1173704]
Abstract
Film editing has attracted great theoretical and practical interest since the beginnings of cinematography. In recent times, the neural correlates of visual transitions at edit cuts have been the focus of attention in neurocinematics. Many event-related potential (ERP) studies have reported the consequences of cuts involving narrative discontinuities and violations of standard montage rules. However, less is known about edits that are meant to induce continuity. Here, we addressed the neural correlates of continuity editing involving scale and angle variations across the cut within the same scene, two of the most popular devices used for continuity editing. We recorded the electroencephalographic signal from 20 viewers as they watched four different cinematographic excerpts, extracting ERPs at edit points. First, we reproduced the general time course and scalp distribution of the typical ERPs to filmic cuts found in prior studies. Second, we found significant ERP modulations triggered by scale changes (scaling out, scaling in, or maintaining the same scale). Edits involving an increase in scale (scaling out) amplified the ERP deflection, while reductions in scale (scaling in) decreased it, compared to edits that kept scale constant across the cut. These modulations coincide with the time window of the N300 and N400 components, whose amplitude has previously been associated with the likelihood of consciously detecting the edit. Third, we did not detect similar modulations as a function of angle variations across the cut. Based on these findings, we suggest that cuts involving a reduction in scale are more likely to go unnoticed than ones that scale out. This relationship between scale change and edit visibility is documented in film editing manuals: to achieve fluidity in a scene, the edit is designed to progress from the widest shots to the most closed ones.
Affiliation(s)
- Javier Sanz-Aznar: Section of Communication, Department of Hispanic Studies, Literary Theory and Communication, University of Barcelona, Barcelona, Spain
- Luis Emilio Bruni: Augmented Cognition Lab, Section for Media Technology, Department of Architecture, Design and Media Technology, The Technical Faculty of IT and Design, Aalborg University, Copenhagen, Denmark
- Salvador Soto-Faraco: Multisensory Research Group, The Center for Brain and Cognition, Pompeu Fabra University, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
9
Irwantoro K, Nimsha Nilakshi Lennon N, Mareschal I, Miflah Hussain Ismail A. Contextualising facial expressions: The effect of temporal context and individual differences on classification. Q J Exp Psychol (Hove) 2023;76:450-459. [PMID: 35360991] [PMCID: PMC9896254] [DOI: 10.1177/17470218221094296]
Abstract
The influence of context on facial expression classification is most often investigated using simple cues in static faces portraying basic expressions at a fixed emotional intensity. We examined (1) whether a perceptually rich, dynamic audiovisual context, presented in the form of movie clips (to achieve closer resemblance to real life), affected the subsequent classification of dynamic basic (happy) and non-basic (sarcastic) facial expressions and (2) whether people's susceptibility to contextual cues was related to their ability to classify facial expressions viewed in isolation. Participants classified facial expressions, gradually progressing from neutral to happy/sarcastic in increasing intensity, that followed movie clips. Classification was more accurate and faster when the preceding context predicted the upcoming expression than when it did not. Speeded classifications suggested that predictive contexts reduced the emotional intensity required for accurate classification. More importantly, we show for the first time that participants' accuracy in classifying expressions without an informative context correlated with the magnitude of the contextual effects they experienced: poor classifiers of isolated expressions were more susceptible to a predictive context. Our findings support the emerging view that contextual cues and individual differences must be considered when explaining the mechanisms underlying facial expression classification.
Affiliation(s)
- Kinenoita Irwantoro: School of Psychology, University of Nottingham Malaysia, 43500 Semenyih, Selangor, Malaysia
- Isabelle Mareschal: School of Biological and Behavioural Sciences, Queen Mary University of London, London, UK
10
Hashimoto Y, Nakata H. Performance-environment mutual flow model using big data on baseball pitchers. Front Sports Act Living 2022;4:967088. [DOI: 10.3389/fspor.2022.967088]
Abstract
Introduction: The study investigated baseball pitching performance in terms of release speed, spin rate, and the 3D coordinates of the release point, depending on the ball and strike counts. Methods: We used open data provided on the official website of Major League Baseball (MLB), covering 580 pitchers who pitched in the MLB between 2015 and 2019. Results: A higher ball count corresponds to a slower release speed and decreased spin rate, and a higher strike count corresponds to a faster release speed and increased spin rate. For a higher ball count, the pitcher's release point tended to be lower and more forward, while for a higher strike count, the release point tended to shift to the left from the right-handed pitcher's point of view. This pattern was most pronounced for 4-seam pitches, which accounted for the largest number of pitches, and the same tendency was confirmed for other pitch types such as the sinker, slider, cut ball, and curve. Discussion: Our findings suggest that the ball and strike count is associated with the pitcher's release speed, spin rate, and release-point coordinates. Viewed the other way around, since pitching performance shapes the ball and strike count, the count in turn is associated with the next pitch's performance. On this basis, we propose a "performance-environment mutual flow model," indicating that a player's performance changes according to the game situation, and the game situation consequently changes the player's next performance.
11
Gourlay C, Collin P, D'Auteuil C, Jacques M, Caron PO, Scherzer PB. Age differences in social-cognitive abilities across the stages of adulthood and path model investigation of adult social cognition. Aging Neuropsychol Cogn 2022;29:1033-1067. [PMID: 34355998] [DOI: 10.1080/13825585.2021.1962789]
Abstract
Accumulating evidence points toward an association between older age and performance decrements in social cognition (SC). We explored age-related variations in four components of SC: emotion recognition, theory of mind, social judgment, and blame attributions. A total of 120 adults, divided into three adulthood stages (18-34 years, 35-59 years, 60-85 years), completed a battery of SC tasks. Between- and within-group differences in SC were investigated, and path analyses were used to identify relationships among the components. Emotion recognition and theory of mind showed differences beginning either in midlife or after. Blame attributions and social judgment did not show significant between-group differences; however, social judgment varied significantly within groups. Path models revealed a relationship between emotion recognition and theory of mind. The findings highlight age-related differences in some components and a link between two of them. Strategies promoting social functioning in aging might help to maintain or improve these abilities over time.
Affiliation(s)
- Catherine Gourlay: Département de psychologie, Université du Québec à Montréal, Montréal, Québec, Canada
- Pascal Collin: Département de psychologie, Université du Québec à Montréal, Montréal, Québec, Canada
- Camille D'Auteuil: Département de psychologie, Université du Québec à Montréal, Montréal, Québec, Canada
- Marie Jacques: Département de psychologie, Université du Québec à Montréal, Montréal, Québec, Canada
- Peter B Scherzer: Département de psychologie, Université du Québec à Montréal, Montréal, Québec, Canada
12
Abstract
Emotional AI is an emerging technology used to make probabilistic predictions about the emotional states of people from data sources such as facial (micro-)movements, body language, vocal tone, or the choice of words. The performance of such systems is heavily debated, as are the underlying scientific methods that serve as the basis for many of these technologies. In this article I engage with this new technology and with the debates and literature that surround it. Working at the intersection of criminology, policing, surveillance, and the study of emotional AI, this paper explores and offers a framework for understanding the various issues that these technologies present, particularly to liberal democracies. I argue that these technologies should not be deployed within public spaces because there is only a very weak evidence base for their effectiveness in a policing and security context; even more importantly, they represent a major intrusion into people's private lives and a worrying extension of policing power, given the possibility that intentions and attitudes may be inferred. Further, the danger in using such invasive surveillance for policing and crime prevention in urban spaces is that it potentially leads to a highly regulated and control-oriented society. I argue that emotion recognition has severe impacts on the right to the city, not only by undertaking surveillance of existing situations but also by making inferences and probabilistic predictions about future events as well as emotions and intentions.
13
Duriez P, Guy-Rubin A, Kaya Lefèvre H, Gorwood P. Morphing analysis of facial emotion recognition in anorexia nervosa: association with physical activity. Eat Weight Disord 2022;27:1053-1061. [PMID: 34213746] [DOI: 10.1007/s40519-021-01254-w]
Abstract
Purpose: Anorexia nervosa (AN) has been linked to emotion-processing inefficiencies, social cognition difficulties, and emotion dysregulation, but data on facial emotion recognition (FER) are heterogeneous and inconclusive. This study aims to explore FER in patients with AN using a dynamic and ecological evaluation, and its relationship with physical activity (PA), an important aspect of AN that could impact emotional processing. Methods: Sixty-six participants (33 patients with AN and 33 healthy controls) performed a morphed facial emotion recognition task, and 49 of them wore an accelerometer for seven days to assess PA. Axis-I disorders and depressive symptoms were assessed. Results: No difference was found in the time taken to recognize facial emotions. However, patients with AN correctly recognized emotions more frequently than controls. This was specific to disgust, with a similar tendency for sadness. Among patients, higher depressive scores were associated with faster and more accurate recognition of disgust, while a higher level of PA was associated with decreased accuracy in recognizing sadness. Conclusion: Patients with AN are capable of recognizing facial emotions as accurately as controls but may have a higher sensitivity to negative emotions, especially disgust and sadness. PA has the opposite effect and could thus be considered an emotional regulation strategy against negative affect. Level of evidence: II, controlled trial without randomization.
Affiliation(s)
- Philibert Duriez
- GHU Paris Psychiatrie et Neurosciences, Hôpital Sainte-Anne, CMME, 1 rue Cabanis, 75014 Paris, France; Institute of Psychiatry and Neuroscience of Paris (IPNP), Université de Paris, INSERM U1266, 75014 Paris, France
- Aurore Guy-Rubin
- GHU Paris Psychiatrie et Neurosciences, Hôpital Sainte-Anne, CMME, 1 rue Cabanis, 75014 Paris, France
- Héline Kaya Lefèvre
- GHU Paris Psychiatrie et Neurosciences, Hôpital Sainte-Anne, CMME, 1 rue Cabanis, 75014 Paris, France; Université de Paris, LPPS, 92100 Boulogne-Billancourt, France
- Philip Gorwood
- GHU Paris Psychiatrie et Neurosciences, Hôpital Sainte-Anne, CMME, 1 rue Cabanis, 75014 Paris, France; Institute of Psychiatry and Neuroscience of Paris (IPNP), Université de Paris, INSERM U1266, 75014 Paris, France
14
Nudelman MF, Portugal LCL, Mocaiber I, David IA, Rodolpho BS, Pereira MG, de Oliveira L. Long-Term Influence of Incidental Emotions on the Emotional Judgment of Neutral Faces. Front Psychol 2022; 12:772916. [PMID: 35069355 PMCID: PMC8773088 DOI: 10.3389/fpsyg.2021.772916] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2021] [Accepted: 11/22/2021] [Indexed: 11/29/2022] Open
Abstract
Background: Evidence indicates that the processing of facial stimuli may be influenced by incidental factors, and these influences are particularly powerful when facial expressions are ambiguous, such as neutral faces. However, little research has investigated whether emotional contextual information presented in a preceding, unrelated experiment can carry over to another experiment and modulate neutral face processing. Objective: The present study aims to investigate whether an emotional text presented in a first experiment could generate negative emotion toward neutral faces in a second, unrelated experiment. Methods: Ninety-nine students (all women) were randomly assigned to read and evaluate either a negative text (negative context) or a neutral text (neutral context) in the first experiment. In the subsequent second experiment, the participants performed two tasks: (1) an attentional task in which neutral faces were presented as distractors and (2) a task involving the emotional judgment of neutral faces. Results: The results show that, compared to the neutral context, in the negative context the participants rated more faces as negative. No significant result was found in the attentional task. Conclusion: Our study demonstrates that incidental emotional information available in a previous experiment can increase participants' propensity to interpret neutral faces as more negative when emotional information is directly evaluated. The present study therefore adds important evidence to the literature suggesting that our judgments and emotions are modulated by previous information in an incidental, barely perceived way, similar to what occurs in everyday life.
Affiliation(s)
- Marta F Nudelman
- Laboratory of Neurophysiology of Behaviour, Department of Physiology and Pharmacology, Biomedical Institute, Universidade Federal Fluminense, Niterói, Brazil
- Liana C L Portugal
- Laboratory of Neurophysiology of Behaviour, Department of Physiology and Pharmacology, Biomedical Institute, Universidade Federal Fluminense, Niterói, Brazil; Department of Physiological Sciences, Biomedical Center, Roberto Alcantara Gomes Biology Institute, Universidade do Estado do Rio de Janeiro, Rio de Janeiro, Brazil
- Izabela Mocaiber
- Laboratory of Cognitive Psychophysiology, Department of Natural Sciences, Institute of Humanities and Health, Universidade Federal Fluminense, Rio das Ostras, Brazil
- Isabel A David
- Laboratory of Neurophysiology of Behaviour, Department of Physiology and Pharmacology, Biomedical Institute, Universidade Federal Fluminense, Niterói, Brazil
- Beatriz S Rodolpho
- Laboratory of Neurophysiology of Behaviour, Department of Physiology and Pharmacology, Biomedical Institute, Universidade Federal Fluminense, Niterói, Brazil
- Mirtes G Pereira
- Laboratory of Neurophysiology of Behaviour, Department of Physiology and Pharmacology, Biomedical Institute, Universidade Federal Fluminense, Niterói, Brazil
- Leticia de Oliveira
- Laboratory of Neurophysiology of Behaviour, Department of Physiology and Pharmacology, Biomedical Institute, Universidade Federal Fluminense, Niterói, Brazil
15
Wang Y. To Be Expressive or Not: The Role of Teachers’ Emotions in Students’ Learning. Front Psychol 2022; 12:737310. [PMID: 35111095 PMCID: PMC8802995 DOI: 10.3389/fpsyg.2021.737310] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Accepted: 11/05/2021] [Indexed: 11/13/2022] Open
Abstract
Understanding the role of teachers' facial expressions in students' learning can help improve online teaching. This study therefore explored the effects of a teacher's facial expressions on students' learning by analyzing three groups of video lectures. Participants were 78 students enrolled in three groups: one with an enhanced-expression teacher, one with a conventional-expression teacher, and one with the teacher's audio only. ANOVA was used to explore whether video lectures given by the enhanced-expression teacher were better than those given by the conventional-expression teacher or presented as audio only for facilitating students' learning, and what role the teacher's emotions play in students' perceived social presence, arousal level, cognitive load, and learning. The results showed that the video lecture by the enhanced-expression teacher was better than those with the conventional-expression teacher and with audio only for facilitating students' social presence, arousal level, and long-term learning. Interestingly, it was found that the teacher's emotions could relieve students' cognitive load. These results explain the inconsistency of existing studies by probing the mechanism of teachers' emotions in students' learning, and they provide teachers with practical guidance for video lecture design.
Affiliation(s)
- Yang Wang
- Correspondence: Yang Wang; ORCID: 0000-0002-4125-9436
16
Amerineni R, Gupta RS, Gupta L. CINET: A Brain-Inspired Deep Learning Context-Integrating Neural Network Model for Resolving Ambiguous Stimuli. Brain Sci 2020; 10:E64. [PMID: 31991649 PMCID: PMC7071366 DOI: 10.3390/brainsci10020064] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2019] [Revised: 01/17/2020] [Accepted: 01/22/2020] [Indexed: 11/17/2022] Open
Abstract
The brain uses contextual information to uniquely resolve the interpretation of ambiguous stimuli. This paper introduces a deep learning neural network classification model that emulates this ability by integrating weighted bidirectional context into the classification process. The model, referred to as the CINET, is implemented using a convolutional neural network (CNN), which is shown to be ideal for combining target and context stimuli and for extracting coupled target-context features. The CINET parameters can be manipulated to simulate congruent and incongruent context environments and to manipulate target-context stimuli relationships. The formulation of the CINET is quite general; consequently, it is restricted neither to stimuli in any particular sensory modality nor to the dimensionality of the stimuli. A broad range of experiments is designed to demonstrate the effectiveness of the CINET in resolving ambiguous visual stimuli and in improving the classification of non-ambiguous visual stimuli in various contextual environments. The fact that performance improves through the inclusion of context can be exploited to design robust brain-inspired machine learning algorithms. It is interesting to note that the CINET is a classification model inspired by a combination of the brain's ability to integrate contextual information and the CNN, which is itself inspired by the hierarchical processing of information in the visual cortex.
Affiliation(s)
- Rajesh Amerineni
- Department of Electrical & Computer Engineering, Southern Illinois University, Carbondale, IL 62901, USA
- Resh S. Gupta
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN 37232, USA
- Lalit Gupta
- Department of Electrical & Computer Engineering, Southern Illinois University, Carbondale, IL 62901, USA
17
Kaltwasser L, Rost N, Ardizzi M, Calbi M, Settembrino L, Fingerhut J, Pauen M, Gallese V. Sharing the filmic experience - The physiology of socio-emotional processes in the cinema. PLoS One 2019; 14:e0223259. [PMID: 31626656 PMCID: PMC6799930 DOI: 10.1371/journal.pone.0223259] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2019] [Accepted: 09/17/2019] [Indexed: 11/18/2022] Open
Abstract
As we identify with characters on screen, we simulate their emotions and thoughts. This is accompanied by physiological changes such as galvanic skin response (GSR), an indicator of emotional arousal, and respiratory sinus arrhythmia (RSA), which reflects vagal activity. We investigated whether the presence of a cinema audience affects these psychophysiological processes. The study was conducted in a real cinema in Berlin. Participants came twice to watch previously rated emotional film scenes eliciting amusement, anger, tenderness or fear: once alone and once in a group. We tested whether vagal modulation in response to the mere presence of others influences explicit (reported) and implicit markers (RSA, heart rate (HR) and GSR) of emotional processes as a function of solitary or collective enjoyment of movie scenes. On the physiological level, we found a mediating effect of vagal flexibility in response to the mere presence of others. Individuals showing a high baseline difference (alone vs. social) prior to the presentation of the film maintained higher RSA in the alone condition compared to the social condition; the opposite pattern emerged for individuals with a low baseline difference. Emotional arousal as reflected in GSR was significantly more pronounced during scenes eliciting anger, independent of the social condition. On the behavioural level, we found evidence for emotion-specific effects on reported empathy, emotional intensity and Theory of Mind. Furthermore, people who decreased their RSA in response to others' company were those who felt more empathically engaged with the characters. Our data speak in favour of a specific role of vagal regulation in response to the mere presence of others in explicit empathic engagement with characters during shared filmic experience.
Affiliation(s)
- Laura Kaltwasser
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Nicolas Rost
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Martina Ardizzi
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Parma, Italy
- Marta Calbi
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Parma, Italy
- Luca Settembrino
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Joerg Fingerhut
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Michael Pauen
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany; Institut für Philosophie, Humboldt-Universität zu Berlin, Berlin, Germany
- Vittorio Gallese
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Parma, Italy; Department of Art History, Columbia University, Italian Academy for Advanced Studies, New York, NY, United States of America
18
Barrett LF, Adolphs R, Marsella S, Martinez A, Pollak SD. Emotional Expressions Reconsidered: Challenges to Inferring Emotion From Human Facial Movements. Psychol Sci Public Interest 2019; 20:1-68. [PMID: 31313636 PMCID: PMC6640856 DOI: 10.1177/1529100619832930] [Citation(s) in RCA: 384] [Impact Index Per Article: 76.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
It is commonly assumed that a person's emotional state can be readily inferred from his or her facial movements, typically called emotional expressions or facial expressions. This assumption influences legal judgments, policy decisions, national security protocols, and educational practices; guides the diagnosis and treatment of psychiatric illness, as well as the development of commercial applications; and pervades everyday social interactions as well as research in other scientific fields such as artificial intelligence, neuroscience, and computer vision. In this article, we survey examples of this widespread assumption, which we refer to as the common view, and we then examine the scientific evidence that tests this view, focusing on the six most popular emotion categories used by consumers of emotion research: anger, disgust, fear, happiness, sadness, and surprise. The available scientific evidence suggests that people do sometimes smile when happy, frown when sad, scowl when angry, and so on, as proposed by the common view, more than what would be expected by chance. Yet how people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation. Furthermore, similar configurations of facial movements variably express instances of more than one emotion category. In fact, a given configuration of facial movements, such as a scowl, often communicates something other than an emotional state. Scientists agree that facial movements convey a range of information and are important for social communication, emotional or otherwise. But our review suggests an urgent need for research that examines how people actually move their faces to express emotions and other social information in the variety of contexts that make up everyday life, as well as careful study of the mechanisms by which people perceive instances of emotion in one another. We make specific research recommendations that will yield a more valid picture of how people move their faces to express emotions and how they infer emotional meaning from facial movements in situations of everyday life. This research is crucial to provide consumers of emotion research with the translational information they require.
Affiliation(s)
- Lisa Feldman Barrett
- Northeastern University, Department of Psychology, Boston, MA; Massachusetts General Hospital, Department of Psychiatry and the Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA; Harvard Medical School, Department of Psychiatry, Boston, MA
- Ralph Adolphs
- California Institute of Technology, Departments of Psychology, Neuroscience, and Biology, Pasadena, CA
- Stacy Marsella
- Northeastern University, Department of Psychology, Boston, MA; Northeastern University, College of Computer and Information Science, Boston, MA; University of Glasgow, Glasgow, Scotland
- Aleix Martinez
- The Ohio State University, Department of Electrical and Computer Engineering, and Center for Cognitive and Brain Sciences, Columbus, OH
- Seth D. Pollak
- University of Wisconsin-Madison, Department of Psychology, Madison, WI
19
Calbi M, Siri F, Heimann K, Barratt D, Gallese V, Kolesnikov A, Umiltà MA. How context influences the interpretation of facial expressions: a source localization high-density EEG study on the "Kuleshov effect". Sci Rep 2019; 9:2107. [PMID: 30765713 PMCID: PMC6376122 DOI: 10.1038/s41598-018-37786-y] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2018] [Accepted: 12/12/2018] [Indexed: 11/24/2022] Open
Abstract
Few studies have explored the specificities of contextual modulations of the processing of facial expressions at a neuronal level. This study fills this gap by employing an original paradigm, based on a version of the filmic “Kuleshov effect”. High-density EEG was recorded while participants watched film sequences consisting of three shots: the close-up of a target person’s neutral face (Face_1), the scene that the target person was looking at (happy, fearful, or neutral), and another close-up of the same target person’s neutral face (Face_2). The participants’ task was to rate both valence and arousal, and subsequently to categorize the target person’s emotional state. The results show that, despite a significant behavioural ‘context’ effect, the electrophysiological indexes still indicate that the face is evaluated as neutral. Specifically, Face_2 elicited a high-amplitude N170 when preceded by neutral contexts, and a high-amplitude Late Positive Potential (LPP) when preceded by emotional contexts, thus showing sensitivity to the evaluative congruence (N170) and incongruence (LPP) between context and Face_2. The LPP activity was mainly underpinned by brain regions involved in the processing of facial expressions and emotion recognition. Our results shed new light on the temporal and neural correlates of context sensitivity in the interpretation of facial expressions.
Affiliation(s)
- Marta Calbi
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Parma, Italy
- Francesca Siri
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Parma, Italy
- Katrin Heimann
- Interacting Minds Center, University of Aarhus, Aarhus, Denmark
- Daniel Barratt
- Department of Management, Society and Communication, Copenhagen Business School, Copenhagen, Denmark
- Vittorio Gallese
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Parma, Italy; Institute of Philosophy, School of Advanced Study, University of London, London, UK
- Anna Kolesnikov
- Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Parma, Italy
20
Ho PK, Woods A, Newell FN. Temporal shifts in eye gaze and facial expressions independently contribute to the perceived attractiveness of unfamiliar faces. VISUAL COGNITION 2019. [DOI: 10.1080/13506285.2018.1564807] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Affiliation(s)
- Pik Ki Ho
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- Fiona N. Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
|
21
|
Cecchi AS. Cognitive penetration of early vision in face perception. Conscious Cogn 2018; 63:254-266. [PMID: 29909046 DOI: 10.1016/j.concog.2018.06.005] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2017] [Revised: 05/01/2018] [Accepted: 06/05/2018] [Indexed: 11/16/2022]
Abstract
Cognitive and affective penetration of perception refers to the influence that higher mental states such as beliefs and emotions have on perceptual systems. Psychological and neuroscientific studies appear to show that these states modulate the visual system at the visuomotor, attentional, and late levels of processing. However, empirical evidence that similar effects occur in early stages of visual processing is scarce. In this paper, I argue that psychological evidence seems to be neither sufficient nor necessary to argue for or against the cognitive penetration of perception in either late or early vision; for that, we need recourse to brain imaging techniques. I therefore introduce a neuroscientific study and argue that it provides well-grounded evidence for the cognitive penetration of early vision in face perception. I also examine and reject alternative explanations of my conclusion.
Affiliation(s)
- Ariel S Cecchi
- Department of Experimental Psychology, University College London, United Kingdom; Centre for Philosophy of Natural and Social Science, London School of Economics and Political Science, United Kingdom
22
Cutting JE, Armstrong KL. Cryptic Emotions and the Emergence of a Metatheory of Mind in Popular Filmmaking. Cogn Sci 2018; 42:1317-1344. [PMID: 29356041 PMCID: PMC6001644 DOI: 10.1111/cogs.12586] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2017] [Revised: 11/28/2017] [Accepted: 12/01/2017] [Indexed: 12/02/2022]
Abstract
Hollywood movies can be deeply engaging and easy to understand. To succeed in this manner, feature‐length movies employ many editing techniques with strong psychological underpinnings. We explore the origins and development of one of these, the reaction shot. This shot typically shows a single, unspeaking character with modest facial expression in response to an event or to the behavior or speech of another character. In a sample of movies from 1940 to 2010, we show that the prevalence of one type of these shots—which we call the cryptic reaction shot—has grown dramatically. These shots are designed to enhance viewers’ emotional involvement with characters. They depict a facial gesture that reflects a slightly negative and slightly aroused emotional state. Their use at the end of conversations, and typically at the end of scenes, helps to leave viewers in a state of speculation about what the character is thinking and what her thoughts may mean for the ongoing narrative.