1. Duarte SE, Yonelinas AP, Ghetti S, Geng JJ. Multisensory processing impacts memory for objects and their sources. Mem Cognit 2024. PMID: 38831161. DOI: 10.3758/s13421-024-01592-x.
Abstract
Multisensory object processing improves recognition memory for individual objects, but its impact on memory for neighboring visual objects and scene context remains largely unknown. It is therefore unclear how multisensory processing impacts episodic memory for information outside of the object itself. We conducted three experiments to test the prediction that the presence of audiovisual objects at encoding would improve memory for nearby visual objects, and improve memory for the environmental context in which they occurred. In Experiments 1a and 1b, participants viewed audiovisual-visual object pairs or visual-visual object pairs with a control sound during encoding and were subsequently tested on their memory for each object individually. In Experiment 2, objects were paired with semantically congruent or meaningless control sounds and appeared within four different scene environments. Memory for the environment was tested. Results from Experiments 1a and 1b showed that encoding a congruent audiovisual object did not significantly benefit memory for neighboring visual objects, but Experiment 2 showed that encoding a congruent audiovisual object did improve memory for the environments in which those objects were encoded. These findings suggest that multisensory processing can influence memory beyond the objects themselves and that it has a unique role in episodic memory formation. This is particularly important for understanding how memories and associations are formed in real-world situations, in which objects and their surroundings are often multimodal.
Affiliation(s)
- Shea E Duarte: Department of Psychology, University of California, Davis, CA 95616, USA; Center for Mind and Brain, University of California, Davis, CA 95618, USA
- Andrew P Yonelinas: Department of Psychology, University of California, Davis, CA 95616, USA; Center for Neuroscience, University of California, Davis, CA 95618, USA
- Simona Ghetti: Department of Psychology, University of California, Davis, CA 95616, USA; Center for Mind and Brain, University of California, Davis, CA 95618, USA
- Joy J Geng: Department of Psychology, University of California, Davis, CA 95616, USA; Center for Mind and Brain, University of California, Davis, CA 95618, USA
2. Matthews CM, Ritchie KL, Laurence S, Mondloch CJ. Multiple images captured from a single encounter do not promote face learning. Perception 2024; 53:299-316. PMID: 38454616. PMCID: PMC11088208. DOI: 10.1177/03010066241234034.
Abstract
Viewing multiple images of a newly encountered face improves recognition of that identity in new instances. Studies examining face learning have presented high-variability (HV) images that incorporate changes that occur from moment-to-moment (e.g., head orientation and expression) and over time (e.g., lighting, hairstyle, and health). We examined whether low-variability (LV) images (i.e., images that incorporate only moment-to-moment changes) also promote generalisation of learning such that novel instances are recognised. Participants viewed a single image, six LV images, or six HV images of a target identity before being asked to recognise novel images of that identity in a face matching task (training stimuli remained visible) or a memory task (training stimuli were removed). In Experiment 1 (n = 71), participants indicated which image(s) in 8-image arrays belonged to the target identity. In Experiment 2 (n = 73), participants indicated whether sequentially presented images belonged to the target identity. Relative to the single-image condition, sensitivity to identity improved and response biases were less conservative in the HV condition; we found no evidence of generalisation of learning in the LV condition regardless of testing protocol. Our findings suggest that day-to-day variability in appearance plays an essential role in acquiring expertise with a novel face.
3. Tiba A, Drugaș M, Sârbu I, Trip S, Bora C, Miclăuș D, Voss L, Sanislav I, Ciurescu D. T-RAC: Study protocol of a randomised clinical trial for assessing the acceptability and preliminary efficacy of adding an exergame-augmented dynamic imagery intervention to the behavioural activation treatment of depression. PLoS One 2023; 18:e0288910. PMID: 37523359. PMCID: PMC10389719. DOI: 10.1371/journal.pone.0288910.
Abstract
BACKGROUND: Improving existing effective treatments for depression is a promising way to optimise the effects of psychological interventions. Here we examine the effects of adding a rehabilitation-style imagery intervention, based on exergames and dynamic simulations, to a short behavioural activation treatment for depression. We investigate the acceptability and efficacy of an exergame-augmented dynamic imagery intervention added to behavioural activation treatment, and the associated mechanisms of change.
METHODS AND ANALYSES: In a two-arm pilot randomised controlled trial, the acceptability and preliminary efficacy of an exergame-augmented dynamic imagery intervention added to behavioural activation treatment for depressed individuals will be assessed. Participants (aged 18-65) meeting criteria for depression are recruited through media and local announcements. A total of 110 participants will be randomly allocated either to the behavioural activation plus imagery group or to the standard behavioural activation group. The primary outcome is depressive symptom severity (Beck Depression Inventory-II); secondary outcomes are anhedonia, apathy, and behavioural activation and avoidance. Outcomes are assessed at baseline, mid-treatment, post-treatment, and 3-month follow-up. Moderation and mediation analyses will be explored. An intention-to-treat approach with additional per-protocol analysis will be used for data analysis.
Affiliation(s)
- Alexandru Tiba: Department of Psychology, University of Oradea, Oradea, Romania
- Marius Drugaș: Department of Psychology, University of Oradea, Oradea, Romania
- Ioana Sârbu: Department of Psychology, University of Oradea, Oradea, Romania
- Simona Trip: Department of Psychology, University of Oradea, Oradea, Romania
- Carmen Bora: Department of Psychology, University of Oradea, Oradea, Romania
- Daiana Miclăuș: Department of Psychology, University of Oradea, Oradea, Romania
- Laura Voss: The Hull York Medical School, University of York, York, United Kingdom
- Ioana Sanislav: Department of Psychology, University of Oradea, Oradea, Romania
- Daniel Ciurescu: Faculty of Medicine, Transilvania University, Brașov, Romania
4. Long-term memory representations for audio-visual scenes. Mem Cognit 2023; 51:349-370. PMID: 36100821. PMCID: PMC9950240. DOI: 10.3758/s13421-022-01355-6.
Abstract
In this study, we investigated the nature of long-term memory representations for naturalistic audio-visual scenes. Whereas previous research has shown that audio-visual scenes are recognized more accurately than their unimodal counterparts, it remains unclear whether this benefit stems from audio-visually integrated long-term memory representations or a summation of independent retrieval cues. We tested two predictions for audio-visually integrated memory representations. First, we used a modeling approach to test whether recognition performance for audio-visual scenes is more accurate than would be expected from independent retrieval cues. This analysis shows that audio-visual integration is not necessary to explain the benefit of audio-visual scenes relative to purely auditory or purely visual scenes. Second, we report a series of experiments investigating the occurrence of study-test congruency effects for unimodal and audio-visual scenes. Most importantly, visually encoded information was immune to additional auditory information presented during testing, whereas auditory encoded information was susceptible to additional visual information presented during testing. This renders a true integration of visual and auditory information in long-term memory representations unlikely. In sum, our results instead provide evidence for visual dominance in long-term memory. Whereas associative auditory information is capable of enhancing memory performance, the long-term memory representations appear to be primarily visual.
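The abstract does not spell out the modeling approach; as an illustration only, one standard way to formalise an "independent retrieval cues" benchmark is probability summation, in which an audio-visual scene is recognised if either unimodal cue alone would succeed. The function name and hit rates below are hypothetical, not the authors' code or data:

```python
# Hypothetical sketch of an independent-cues benchmark (probability summation).
def independent_cues_prediction(p_auditory: float, p_visual: float) -> float:
    """Predicted audio-visual hit rate if the two retrieval cues act independently:
    the scene is missed only when both cues fail."""
    return 1.0 - (1.0 - p_auditory) * (1.0 - p_visual)

# Illustrative (made-up) unimodal hit rates:
p_a, p_v = 0.60, 0.70
p_av_predicted = independent_cues_prediction(p_a, p_v)  # 0.88
# If observed audio-visual performance does not exceed this prediction,
# an integrated (super-additive) memory representation need not be assumed.
```

This is the sense in which an audio-visual advantage over unimodal scenes can arise without any true integration in memory.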
5
|
Daprati E, Balestrucci P, Nico D. Do graspable objects always leave a motor signature? A study on memory traces. Exp Brain Res 2022; 240:3193-3206. [PMID: 36271939 PMCID: PMC9678995 DOI: 10.1007/s00221-022-06487-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Accepted: 10/14/2022] [Indexed: 12/30/2022]
Abstract
Several studies have reported reciprocal interactions between the type of motor activity physically performed on objects and the conceptual knowledge retained of them. Whether covert motor activity has a similar effect is less clear. Certainly, objects are strong triggers for actions, and motor components can make the associated concepts more memorable. However, the addition of an action-related memory trace may not always be automatic and could rather depend on 'how' objects are encountered. To test this hypothesis, we compared memory for objects that passive observers experienced as verbal labels (the word describing them), visual images (color photographs), and actions (pantomimes of object use). We predicted that the more direct the involvement of action-related representations, the more effective would be the addition of a motor code to the experience, and the more accurate the recall. Results showed that memory for objects presented as words, i.e., a format that might only indirectly prime the sensorimotor system, was generally less accurate than memory for objects presented as photographs or pantomimes, which are more likely to directly elicit motor simulation processes. In addition, free recall of objects experienced as pantomimes was more accurate when these items afforded actions performed towards one's body than actions directed away from it. We propose that covert motor activity can contribute to object memory, but the beneficial addition of a motor code to the experience is not necessarily automatic. An advantage is more likely to emerge when the observer is induced to take a first-person stance during encoding, as may happen for objects affording actions directed towards the body, which carry more relevance for the actor.
Affiliation(s)
- Elena Daprati: Dipartimento di Medicina dei Sistemi, Università di Roma Tor Vergata, Via Montpellier 1, 00133 Rome, Italy
- Priscilla Balestrucci: Faculty for Computer Science, Engineering, and Psychology, Applied Cognitive Psychology, Ulm University, 89081 Ulm, Germany
- Daniele Nico: Dipartimento di Psicologia, Università di Roma La Sapienza, 00185 Rome, Italy
6. Marian V, Hayakawa S, Schroeder SR. Cross-Modal Interaction Between Auditory and Visual Input Impacts Memory Retrieval. Front Neurosci 2021; 15:661477. PMID: 34381328. PMCID: PMC8350348. DOI: 10.3389/fnins.2021.661477.
Abstract
How we perceive and learn about our environment is influenced by our prior experiences and existing representations of the world. Top-down cognitive processes, such as attention and expectations, can alter how we process sensory stimuli, both within a modality (e.g., effects of auditory experience on auditory perception) and across modalities (e.g., effects of visual feedback on sound localization). Here, we demonstrate that experience with different types of auditory input (spoken words vs. environmental sounds) modulates how humans remember concurrently presented visual objects. Participants viewed a series of line drawings (e.g., a picture of a cat) displayed in one of four quadrants while listening to a word or sound that was congruent (e.g., the word "cat" or the sound of a cat meowing), incongruent (e.g., the word "motorcycle" or the sound of a motorcycle engine), or neutral (e.g., a meaningless pseudoword or a tonal beep) relative to the picture. Following the encoding phase, participants were presented with the original drawings plus new drawings and asked to indicate whether each one was "old" or "new." If a drawing was designated as "old," participants then reported where it had been displayed. We find that words and sounds both elicit more accurate memory for what objects were previously seen, but only congruent environmental sounds enhance memory for where objects were positioned, despite the fact that the auditory stimuli were not meaningful spatial cues to the objects' locations on the screen. Given that during real-world listening conditions environmental sounds, but not words, reliably originate from the location of their referents, listening to sounds may attune the visual dorsal pathway to facilitate attention and memory for objects' locations. We propose that audio-visual associations in the environment and in our previous experience jointly contribute to visual memory, strengthening it through exposure to auditory input.
Affiliation(s)
- Viorica Marian: Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
- Sayuri Hayakawa: Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
- Scott R. Schroeder: Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States; Department of Speech-Language-Hearing Sciences, Hofstra University, Hempstead, NY, United States
7. Coutanche MN, Koch GE, Paulus JP. Influences on memory for naturalistic visual episodes: sleep, familiarity, and traits differentially affect forms of recall. Learn Mem 2020; 27:284-291. PMID: 32540918. PMCID: PMC7301751. DOI: 10.1101/lm.051300.119.
Abstract
The memories we form are composed of information that we extract from multifaceted episodes. Static stimuli and paired associations have proven invaluable for understanding memory, but real-life events feature spatial and temporal dimensions that help form new retrieval paths. We ask how the ability to recall the semantic, temporal, and spatial aspects (the "what, when, and where") of naturalistic episodes is affected by three influences (prior familiarity, postencoding sleep, and individual differences) by testing their effect on three forms of recall: cued recall, free recall, and the extent to which recalled details are recombined in response to a novel prompt. Naturalistic videos of events featuring rare animals were presented to 115 participants, randomly assigned to a 12- or 24-h delay with sleep and/or wakefulness. Participants' immediate and delayed recall was tested and coded for its spatial, temporal, and semantic content. We find that prior familiarity with items featured in events improved cued recall, but not free recall, particularly for temporal and spatial details. In contrast, postencoding sleep, relative to wakefulness, improved free recall, but not cued recall, of all forms of content. Finally, individuals with higher trait scores on the Survey of Autobiographical Memory spontaneously incorporated more spatial details during free recall, and more event details (at a trend level) in a novel recombination recall task. These findings show that prior familiarity, postencoding sleep, and memory traits can each enhance a different form of recall. More broadly, this work highlights that recall responds heterogeneously to different influences on memory.
Affiliation(s)
- Marc N Coutanche: Department of Psychology, University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA; Learning Research and Development Center, University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA; Brain Institute, University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA
- Griffin E Koch: Department of Psychology, University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA; Learning Research and Development Center, University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA
- John P Paulus: Department of Psychology, University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA; Learning Research and Development Center, University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA
8. Ack Baraly KT, Muyingo L, Beaudoin C, Karami S, Langevin M, Davidson PSR. Database of Emotional Videos from Ottawa (DEVO). Collabra: Psychology 2020. DOI: 10.1525/collabra.180.
Abstract
We present a collection of emotional video clips that can be used in ways similar to static images (e.g., the International Affective Picture System, IAPS; Lang, Bradley, & Cuthbert, 2008). The Database of Emotional Videos from Ottawa (DEVO) includes 291 brief video clips (mean duration = 5.42 s; SD = 2.89 s; range = 3–15 s) extracted from obscure sources to reduce their familiarity and to avoid influencing participants’ emotional responses. In Study 1, ratings of valence and arousal (measured with the Self Assessment Manikins from IAPS) and impact (Croucher, Calder, Ramponi, Barnard, & Murphy, 2011) were collected from 154 participants (82 women; mean age = 19.88 years; SD = 2.83 years), in a between-subjects design to avoid potential halo effects across the three ratings (Saal, Downey, & Lahey, 1980). Ratings collected online in a new set of 124 students with a within-subjects design (Study 2) were significantly correlated with the original sample’s. The clips were unfamiliar, having been seen previously by fewer than 2% of participants on average. The ratings consistently revealed the expected U-shaped relationships between valence and arousal/impact, and a strong positive correlation between arousal and impact. Hierarchical cluster analysis of the Study 1 ratings suggested seven groups of clips varying in valence, arousal, and impact, although the Study 2 ratings suggested five groups of clips. These clips should prove useful for a wide range of research on emotion and behaviour.
Affiliation(s)
- Kylee T. Ack Baraly: School of Psychology, University of Ottawa, Canada; Université Grenoble Alpes, CNRS, LPNC, Grenoble, France; University of Savoie Mont Blanc, LPNC, Chambéry, France
9. Gender Imbalance in Instructional Dynamic Versus Static Visualizations: a Meta-analysis. Educ Psychol Rev 2019. DOI: 10.1007/s10648-019-09469-1.
10. Ginet M, Dodier O, Bardin B, Désert M, Greffeuille C, Verkampt F. Perspective Effects on Recall in a Testimony Paradigm. J Gen Psychol 2018; 145:313-341. PMID: 30325715. DOI: 10.1080/00221309.2018.1494126.
Abstract
The present two studies examined the influence of perspective instructions given during encoding and retrieval on recall of a visual event. Participants viewed slides or a film depicting a day in the life of a man. Before viewing the to-be-remembered event, they were instructed to adopt the perspective of an alcoholic vs. an unemployed man vs. no perspective (Experiment 1), or of an unemployed man vs. no perspective (Experiment 2). Participants in the first study were interviewed twice, with the second recall preceded either by a change-perspective instruction or by no specific instruction. In the second study, participants were interviewed using either a cognitive interview (CI) or a CI without the change-perspective instruction. Results showed that adopting a perspective during encoding impaired recall performance, and no significant benefit of the change-perspective instruction was found. The theoretical and practical implications of these results are discussed.
11.
Abstract
Recognition memory was investigated for individual frames extracted from temporally continuous, visually rich film segments of 5-15 min. Participants viewed a short clip from a film in either a coherent or a jumbled order, followed by a recognition test of studied frames. Foils came either from an earlier or a later part of the film (Experiment 1) or from deleted segments selected from random cuts of varying duration (0.5 to 30 s) within the film itself (Experiment 2). When the foils came from an earlier or later part of the film (Experiment 1), recognition was excellent, with the hit rate far exceeding the false-alarm rate (.78 vs. .18). In Experiment 2, recognition was far worse, with the hit rate (.76) exceeding the false-alarm rate only for foils drawn from the longest cuts (15 and 30 s) and matching the false-alarm rate for the 5 s segments. When the foils were drawn from the briefest cuts (0.5 and 1.0 s), the false-alarm rate exceeded the hit rate. Unexpectedly, jumbling had no effect on recognition in either experiment. These results are consistent with the view that memory for complex, temporally extended visual events is excellent, with its integrity unperturbed by disruption of the global structure of the visual stream. Disruption of memory was observed only when foils were drawn from embedded segments shorter than 5 s, an outcome consistent with the view that memory at these shortest durations is consolidated with expectations drawn from the preceding stream.
Affiliation(s)
- Ryan Ferguson: Department of Psychology, Arizona State University, Tempe, AZ, USA
- Donald Homa: Department of Psychology, Arizona State University, Tempe, AZ, USA
- Derek Ellis: Department of Psychology, Arizona State University, Tempe, AZ, USA
12. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes. Mem Cognit 2017; 44:390-402. PMID: 26620810. DOI: 10.3758/s13421-015-0575-6.
Abstract
Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.
13. Candan A, Cutting JE, DeLong JE. RSVP at the movies: dynamic images are remembered better than static images when resources are limited. Vis Cogn 2016. DOI: 10.1080/13506285.2016.1159636.
14. Retinotopy and attention to the face and house images in the human visual cortex. Exp Brain Res 2016; 234:1623-1635. PMID: 26838358. DOI: 10.1007/s00221-016-4562-3.
Abstract
Attentional modulation of neural activity in human visual areas has been well demonstrated. However, the retinotopic activity driven by face and house images, and by attention to those images, remains unknown. In the present study, we used images of faces and houses to estimate the retinotopic activity driven by the images together with attention to them, by attention to the images alone, and by the images alone. Overall, face and house images produced similar retinotopic activity in visual areas, which was observed only in the attention + stimulus and attention conditions, not in the stimulus-only condition. The fusiform face area (FFA) responded to faces presented on the horizontal meridian, whereas the parahippocampal place area (PPA) rarely responded to houses at any visual field location. We further analyzed the amplitudes of the neural responses to the target wedge. In V1, V2, V3, V3A, lateral occipital area 1 (LO-1), and hV4, responses to the attended target wedge were significantly greater than those to the unattended target wedge; in LO-2, ventral occipital areas 1 and 2 (VO-1 and VO-2), FFA, and PPA, the differences were not significant. We propose that the latter areas likely have large fields of attentional modulation for face and house images and respond to both the target wedge and the background stimuli. In addition, the absence of retinotopic activity in the stimulus-only condition may imply no perceived difference between the target wedge and the background stimuli.
15. Matthews WJ, Meck WH. Time perception: the bad news and the good. Wiley Interdiscip Rev Cogn Sci 2014; 5:429-446. PMID: 25210578. PMCID: PMC4142010. DOI: 10.1002/wcs.1298.
Abstract
Time perception is fundamental and heavily researched, but the field faces a number of obstacles to theoretical progress. In this advanced review, we focus on three pieces of 'bad news' for time perception research: temporal perception is highly labile across changes in experimental context and task; there are pronounced individual differences not just in overall performance but in the use of different timing strategies and the effect of key variables; and laboratory studies typically bear little relation to timing in the 'real world'. We describe recent examples of these issues and in each case offer some 'good news' by showing how new research is addressing these challenges to provide rich insights into the neural and information-processing bases of timing and time perception.
Collapse
Affiliation(s)
- Warren H Meck: Department of Psychology and Neuroscience, Duke University, Durham, NC, USA
16. Bradley C, Pearson J. The sensory components of high-capacity iconic memory and visual working memory. Front Psychol 2012; 3:355. PMID: 23055993. PMCID: PMC3457081. DOI: 10.3389/fpsyg.2012.00355.
Abstract
Early visual memory can be split into two primary components: a high-capacity, short-lived iconic memory followed by a limited-capacity visual working memory that can last many seconds. Whereas a large number of studies have investigated visual working memory for low-level sensory features, much research on iconic memory has used more “high-level” alphanumeric stimuli such as letters or numbers. These two forms of memory are typically examined separately, despite an intrinsic overlap in their characteristics. Here, we used a purely sensory paradigm to examine visual short-term memory for 10 homogeneous items of three different visual features (color, orientation and motion) across a range of durations from 0 to 6 s. We found that the amount of information stored in iconic memory is smaller for motion than for color or orientation. Performance declined exponentially with longer storage durations and reached chance levels after ∼2 s. Further experiments showed that performance for the 10 items at 1 s was contingent on unperturbed attentional resources. In addition, for orientation stimuli, performance was contingent on the location of stimuli in the visual field, especially for short cue delays. Overall, our results suggest a smooth transition between an automatic, high-capacity, feature-specific sensory-iconic memory, and an effortful “lower-capacity” visual working memory.
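As an illustration only (the abstract reports the qualitative time course, not a fitted model; the function and all parameter values below are hypothetical), the described pattern of performance falling exponentially toward chance by roughly 2 s can be sketched as a simple decay model:

```python
import math

# Hypothetical sketch: exponential decay of recognition accuracy from an
# initial iconic-memory level toward the chance level of the task.
def accuracy(t: float, p0: float = 0.9, chance: float = 0.5, tau: float = 0.6) -> float:
    """Illustrative proportion correct after storing for t seconds,
    with made-up initial accuracy p0, chance level, and time constant tau."""
    return chance + (p0 - chance) * math.exp(-t / tau)

# With these illustrative parameters, performance is within ~2% of chance
# by t = 2 s, mirroring the time course described in the abstract.
```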
Collapse
Affiliation(s)
- Claire Bradley: School of Psychology, The University of New South Wales, Sydney, NSW, Australia; École Normale Supérieure de Cachan, Cachan, France
17. Representation of dynamic spatial configurations in visual short-term memory. Atten Percept Psychophys 2011; 74:397-415. PMID: 22090188. DOI: 10.3758/s13414-011-0242-3.
18. Butcher N, Lander K, Fang H, Costen N. The effect of motion at encoding and retrieval for same- and other-race face recognition. Br J Psychol 2011; 102:931-942. DOI: 10.1111/j.2044-8295.2011.02060.x.
19
Matthews WJ. Stimulus repetition and the perception of time: the effects of prior exposure on temporal discrimination, judgment, and production. PLoS One 2011;6:e19815. [PMID: 21573020] [PMCID: PMC3090413] [DOI: 10.1371/journal.pone.0019815]
Abstract
It has been suggested that repeated stimuli have shorter subjective duration than novel items, perhaps because of a reduction in the neural response to repeated presentations of the same object. Five experiments investigated the effects of repetition on time perception and found further evidence that immediate repetition reduces apparent duration, consistent with the idea that subjective duration is partly based on neural coding efficiency. In addition, the experiments found (a) no effect of repetition on the precision of temporal discrimination, (b) that the effects of repetition disappeared when there was a modest lag between presentations, (c) that, across participants, the size of the repetition effect correlated with temporal discrimination, and (d) that the effects of repetition suggested by a temporal production task were the opposite of those suggested by temporal judgments. The theoretical and practical implications of these results are discussed.
Affiliation(s)
- William J Matthews
- Department of Psychology, University of Essex, Colchester, United Kingdom.
20
Walk this way: Approaching bodies can influence the processing of faces. Cognition 2011;118:17-31. [DOI: 10.1016/j.cognition.2010.09.004]
22
Brown TA, Munger MP. Representational momentum, spatial layout, and viewpoint dependency. Visual Cognition 2010. [DOI: 10.1080/13506280903336535]
23
Buratto LG, Matthews WJ, Lamberts K. When are moving images remembered better? Study–test congruence and the dynamic superiority effect. Q J Exp Psychol (Hove) 2009;62:1896-903. [DOI: 10.1080/17470210902883263]
Abstract
It has previously been shown that moving images are remembered better than static ones. In two experiments, we investigated the basis for this dynamic superiority effect. Participants studied scenes presented as a single static image, a sequence of still images, or a moving video clip, and 3 days later completed a recognition test in which familiar and novel scenes were presented in all three formats. We found a marked congruency effect: For a given study format, accuracy was highest when test items were shown in the same format. Neither the dynamic superiority effect nor the study–test congruency effect was affected by encoding (Experiment 1) or retrieval (Experiment 2) manipulations, suggesting that these effects are relatively impervious to strategic control. The results demonstrate that the spatio-temporal properties of complex, realistic scenes are preserved in long-term memory.
24
Psychophysics and the judgment of price: Judging complex objects on a non-physical dimension elicits sequential effects like those in perceptual tasks. Judgment and Decision Making 2009. [DOI: 10.1017/s1930297500000711]
Abstract
When participants in psychophysical experiments are asked to estimate or identify stimuli which differ on a single physical dimension, their judgments are influenced by the local experimental context — the item presented and judgment made on the previous trial. It has been suggested that similar sequential effects occur in more naturalistic, real-world judgments. In three experiments we asked participants to judge the prices of a sequence of items. In Experiment 1, judgments were biased towards the previous response (assimilation) but away from the true value of the previous item (contrast), a pattern which matches that found in psychophysical research. In Experiments 2A and 2B, we manipulated the provision of feedback and the expertise of the participants, and found that feedback reduced the effect of the previous judgment and shifted the effect of the previous item's true price from contrast to assimilation. Finally, in all three experiments we found that judgments were biased towards the centre of the range, a phenomenon known as the "regression effect" in psychophysics. These results suggest that the most recently presented item is a point of reference for the current judgment. The findings inform our understanding of the judgment process, constrain the explanations for local context effects put forward by psychophysicists, and carry practical importance for real-world situations in which contextual bias may degrade the accuracy of judgments.