1
Frisoni M, Di Ghionno M, Guidotti R, Tosoni A, Sestieri C. Reconstructive nature of temporal memory for movie scenes. Cognition 2020;208:104557. [PMID: 33373938] [DOI: 10.1016/j.cognition.2020.104557]
Abstract
Remembering when events took place is a key component of episodic memory. Using a sensitive behavioral measure, the present study investigates whether spontaneous event segmentation and script-based prior knowledge affect memory for the time of movie scenes. In three experiments, different groups of participants were asked to indicate when short video clips extracted from a previously encoded movie occurred on a horizontal timeline that represented the video duration. When participants encoded the entire movie, they were more precise at judging the temporal occurrence of clips extracted from the beginning and the end of the film compared to its middle part, but also at judging clips that were closer to event boundaries. Removing the final part of the movie from the encoding session resulted in a systematic bias in memory for time. Specifically, participants increasingly underestimated the time of occurrence of the video clips as a function of their proximity to the missing part of the movie. An additional experiment indicated that such an underestimation effect generalizes to different audio-visual material and does not necessarily reflect poor temporal memory. By showing that memories are moved in time to make room for missing information, the present study demonstrates that narrative time can be adapted to fit a standard template regardless of what has been effectively encoded, in line with reconstructive theories of memory.
Affiliation(s)
- Matteo Frisoni
- Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio of Chieti-Pescara, Via dei Vestini 31, Chieti 66100, Italy.
- Monica Di Ghionno
- Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio of Chieti-Pescara, Via dei Vestini 31, Chieti 66100, Italy.
- Roberto Guidotti
- Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio of Chieti-Pescara, Via dei Vestini 31, Chieti 66100, Italy.
- Annalisa Tosoni
- Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio of Chieti-Pescara, Via dei Vestini 31, Chieti 66100, Italy.
- Carlo Sestieri
- Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio of Chieti-Pescara, Via dei Vestini 31, Chieti 66100, Italy.
3
Ack Baraly KT, Muyingo L, Beaudoin C, Karami S, Langevin M, Davidson PSR. Database of Emotional Videos from Ottawa (DEVO). Collabra: Psychology 2020. [DOI: 10.1525/collabra.180]
Abstract
We present a collection of emotional video clips that can be used in ways similar to static images (e.g., the International Affective Picture System, IAPS; Lang, Bradley, & Cuthbert, 2008). The Database of Emotional Videos from Ottawa (DEVO) includes 291 brief video clips (mean duration = 5.42 s; SD = 2.89 s; range = 3–15 s) extracted from obscure sources to reduce their familiarity and to avoid influencing participants’ emotional responses. In Study 1, ratings of valence and arousal (measured with the Self Assessment Manikins from IAPS) and impact (Croucher, Calder, Ramponi, Barnard, & Murphy, 2011) were collected from 154 participants (82 women; mean age = 19.88 years; SD = 2.83 years), in a between-subjects design to avoid potential halo effects across the three ratings (Saal, Downey, & Lahey, 1980). Ratings collected online in a new set of 124 students with a within-subjects design (Study 2) were significantly correlated with the original sample’s. The clips were unfamiliar, having been seen previously by fewer than 2% of participants on average. The ratings consistently revealed the expected U-shaped relationships between valence and arousal/impact, and a strong positive correlation between arousal and impact. Hierarchical cluster analysis of the Study 1 ratings suggested seven groups of clips varying in valence, arousal, and impact, although the Study 2 ratings suggested five groups of clips. These clips should prove useful for a wide range of research on emotion and behaviour.
Affiliation(s)
- Kylee T. Ack Baraly
- School of Psychology, University of Ottawa, CA
- University of Grenoble Alpes, CNRS, LPNC, Grenoble, FR
- University of Savoie Mont Blanc, LPNC, Chambéry, FR
4
Abstract
Incongruence between the narrated (encoded) order and the actual chronological order of events is ubiquitous across narratives of many kinds and information modalities. The iconicity assumption in text comprehension proposes that readers will by default assume the chronological order to match the narrated order. However, it is not clear whether this iconicity assumption directly biases the inferred chronology of events and memory of their narrated order. In the current study, using non-linearly narrated video narratives as encoding materials, we dissociated the narrated order and the underlying chronological order of events. In Experiment 1, we found that participants' judgments of the chronological order of events were biased by the narrated order, but not vice versa. In Experiment 2, when the chronological positions of events were provided during encoding, participants' judgments of the chronological order were not biased by the narrated order; rather, their memory of the narrated order of events was biased by the chronological order. Interpreting this bias under a descriptive Bayesian framework, we offer a new perspective on the role of the iconicity assumption as a prior belief, apart from prior knowledge about event sequences, in event understanding as well as memory.
Affiliation(s)
- Xinming Xu
- Shanghai Key Laboratory of Brain Functional Genomics, Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai, People's Republic of China
- Sze Chai Kwok
- Shanghai Key Laboratory of Brain Functional Genomics, Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai, People's Republic of China
- Shanghai Key Laboratory of Magnetic Resonance, East China Normal University, Shanghai, People's Republic of China
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, People's Republic of China
5
Tang H, Singer J, Ison MJ, Pivazyan G, Romaine M, Frias R, Meller E, Boulin A, Carroll J, Perron V, Dowcett S, Arellano M, Kreiman G. Predicting episodic memory formation for movie events. Sci Rep 2016;6:30175. [PMID: 27686330] [PMCID: PMC5043190] [DOI: 10.1038/srep30175]
Abstract
Episodic memories are long lasting and full of detail, yet imperfect and malleable. We quantitatively evaluated recollection of short audiovisual segments from movies as a proxy to real-life memory formation in 161 subjects at 15 minutes up to a year after encoding. Memories were reproducible within and across individuals, showed the typical decay with time elapsed between encoding and testing, were fallible yet accurate, and were insensitive to low-level stimulus manipulations but sensitive to high-level stimulus properties. Remarkably, memorability was also high for single movie frames, even one year post-encoding. To evaluate what determines the efficacy of long-term memory formation, we developed an extensive set of content annotations that included actions, emotional valence, visual cues and auditory cues. These annotations enabled us to document the content properties that showed a stronger correlation with recognition memory and to build a machine-learning computational model that accounted for episodic memory formation in single events for group averages and individual subjects with an accuracy of up to 80%. These results provide initial steps towards the development of a quantitative computational theory capable of explaining the subjective filtering steps that lead to how humans learn and consolidate memories.
Affiliation(s)
- Hanlin Tang
- Children’s Hospital, Harvard Medical School, Boston, MA 02115, USA
- Program in Biophysics, Harvard University, Cambridge, MA 02138, USA
- Jed Singer
- Children’s Hospital, Harvard Medical School, Boston, MA 02115, USA
- Matias J. Ison
- Children’s Hospital, Harvard Medical School, Boston, MA 02115, USA
- School of Psychology, University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom
- Rosa Frias
- Children’s Hospital, Harvard Medical School, Boston, MA 02115, USA
- Elizabeth Meller
- Children’s Hospital, Harvard Medical School, Boston, MA 02115, USA
- Victoria Perron
- Children’s Hospital, Harvard Medical School, Boston, MA 02115, USA
- Marlise Arellano
- Children’s Hospital, Harvard Medical School, Boston, MA 02115, USA
- Gabriel Kreiman
- Children’s Hospital, Harvard Medical School, Boston, MA 02115, USA
- Program in Biophysics, Harvard University, Cambridge, MA 02138, USA