1. Martarelli CS, Chiquet S, Ertl M. Keeping track of reality: embedding visual memory in natural behaviour. Memory 2023;31:1295-1305. PMID: 37727126; DOI: 10.1080/09658211.2023.2260148.
Abstract
Since immersive virtual reality (IVR) emerged as a research method in the 1980s, the focus has been on the similarities between IVR and actual reality. In this vein, it has been suggested that IVR methodology might fill the gap between laboratory studies and real life. IVR allows for high internal validity (i.e., a high degree of experimental control and experimental replicability), as well as high external validity by letting participants engage with the environment in an almost natural manner. Despite internal validity being crucial to experimental designs, external validity also matters in terms of the generalizability of results. In this paper, we first highlight and summarise the similarities and differences between IVR, desktop situations (both non-immersive VR and computer experiments), and reality. In the second step, we propose that IVR is a promising tool for visual memory research in terms of investigating the representation of visual information embedded in natural behaviour. We encourage researchers to carry out experiments on both two-dimensional computer screens and in immersive virtual environments to investigate visual memory and validate and replicate the findings. IVR is valuable because of its potential to improve theoretical understanding and increase the psychological relevance of the findings.
Affiliation(s)
- Sandra Chiquet, Faculty of Psychology, UniDistance Suisse, Brig, Switzerland
- Matthias Ertl, Department of Psychology, University of Bern, Bern, Switzerland
2. Chiquet S, Martarelli CS, Mast FW. Imagery-related eye movements in 3D space depend on individual differences in visual object imagery. Sci Rep 2022;12:14136. PMID: 35986076; PMCID: PMC9391428; DOI: 10.1038/s41598-022-18080-4.
Abstract
During recall of visual information people tend to move their eyes even though there is nothing to see. Previous studies indicated that such eye movements are related to the spatial location of previously seen items on 2D screens, but they also showed that eye movement behavior varies significantly across individuals. The reason for these differences remains unclear. In the present study we used immersive virtual reality to investigate how individual tendencies to process and represent visual information contribute to eye fixation patterns in visual imagery of previously inspected objects in three-dimensional (3D) space. We show that participants also look back to relevant locations when they are free to move in 3D space. Furthermore, we found that looking back to relevant locations depends on individual differences in visual object imagery abilities. We suggest that object visualizers rely less on spatial information because they tend to process and represent the visual information in terms of color and shape rather than in terms of spatial layout. This finding indicates that eye movements during imagery are subject to individual strategies, and the immersive setting in 3D space made individual differences more likely to unfold.
3. Reinstating location improves mnemonic access but not fidelity of visual mental representations. Cortex 2022;156:39-53. DOI: 10.1016/j.cortex.2022.08.003.
4. A consensus-based elastic matching algorithm for mapping recall fixations onto encoding fixations in the looking-at-nothing paradigm. Behav Res Methods 2021;53:2049-2068. PMID: 33754324; PMCID: PMC8516795; DOI: 10.3758/s13428-020-01513-1.
Abstract
We present an algorithmic method for aligning recall fixations with encoding fixations, to be used in looking-at-nothing paradigms that record recall eye movements during silence, or that aim to speed up the analysis of recall data recorded during speech. The algorithm utilizes a novel consensus-based elastic matching procedure to estimate which encoding fixations correspond to later recall fixations. This is not a scanpath comparison method, as fixation sequence order is ignored and only position configurations are used. The algorithm has three internal parameters and is reasonably stable over a wide range of parameter values. We then evaluate the performance of our algorithm by investigating whether the recalled objects identified by the algorithm correspond with independent assessments of which objects in the image are marked as subjectively important. Our results show that the mapped recall fixations align well with important regions of the images. This result is exemplified in four groups of use cases: the roles of low-level visual features, faces, signs and text, and people of different sizes in recall of encoded scenes. The plots from these examples corroborate the finding that the algorithm aligns recall fixations with the most likely important regions in the images. The examples also illustrate how the algorithm can differentiate between image objects that were fixated during silent recall vs. those that were not visually attended, even though they were fixated during encoding.
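The abstract above describes mapping each recall fixation onto encoding fixations using position configurations only, with sequence order ignored. As a rough illustration of that idea (a minimal nearest-neighbour baseline, not the paper's consensus-based elastic matching, which is more elaborate and has tunable parameters), one might sketch the mapping step as:

```python
import math

def match_recall_to_encoding(recall_fix, encoding_fix):
    """Map each recall fixation to its nearest encoding fixation.

    recall_fix, encoding_fix: lists of (x, y) gaze positions in pixels.
    Returns one encoding-fixation index per recall fixation.
    Sequence order plays no role; only positions are compared.
    """
    mapping = []
    for rx, ry in recall_fix:
        # Euclidean distance from this recall fixation to every encoding fixation
        dists = [math.hypot(rx - ex, ry - ey) for ex, ey in encoding_fix]
        mapping.append(dists.index(min(dists)))
    return mapping

# Hypothetical gaze data for illustration
encoding = [(100, 100), (400, 120), (250, 300)]
recall = [(110, 95), (240, 310), (395, 130)]
print(match_recall_to_encoding(recall, encoding))  # → [0, 2, 1]
```

Unlike this sketch, a consensus-based approach would compare whole position configurations and reject mappings that individual nearest-neighbour matches disagree on, which is what makes the published method robust to drift between encoding and recall.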
5. Martarelli CS, Mast FW. Pictorial low-level features in mental images: evidence from eye fixations. Psychol Res 2021;86:350-363. PMID: 33751199; DOI: 10.1007/s00426-021-01497-3.
Abstract
It is known that eye movements during object imagery reflect areas visited during encoding. But will eye movements also reflect pictorial low-level features of imagined stimuli? In this paper, three experiments are reported in which we investigate whether low-level properties of mental images elicit specific eye movements. Based on the conceptualization of mental images as depictive representations, we expected low-level visual features to influence eye fixations during mental imagery, in the absence of a visual input. In a first experiment, twenty-five participants performed a visual imagery task with high vs. low spatial frequency and high vs. low contrast gratings. We found that both during visual perception and during mental imagery, first fixations were more often allocated to the low spatial frequency-high contrast grating, thus showing that eye fixations were influenced not only by the physical properties of visual stimuli but also by their imagined counterparts. In a second experiment, twenty-two participants imagined high contrast and low contrast stimuli that they had not encoded before. Again, participants allocated more fixations to the high contrast mental images than to the low contrast mental images. In a third experiment, we ruled out task difficulty as a confounding variable. Our results reveal that low-level visual features are represented in the mind's eye and thus contribute to the characterization of mental images in terms of how much perceptual information is re-instantiated during mental imagery.
Affiliation(s)
- Fred W Mast, Department of Psychology, University of Bern, Bern, Switzerland
6. Umar H, Mast FW, Cacchione T, Martarelli CS. The prioritization of visuo-spatial associations during mental imagery. Cogn Process 2021;22:227-237. PMID: 33404898; DOI: 10.1007/s10339-020-01010-5.
Abstract
While previous research has shown that during mental imagery participants look back to areas visited during encoding, it is unclear what happens when information presented during encoding is incongruent. To investigate this research question, we presented 30 participants with incongruent audio-visual associations (e.g. the image of a car paired with the sound of a cat) and later asked them to create a congruent mental representation based on the auditory cue (e.g. to create a mental representation of a cat while hearing the sound of a cat). The results revealed that participants spent more time in the areas where they previously saw the object, and that incongruent audio-visual information during encoding did not appear to interfere with the generation and maintenance of mental images. This finding suggests that eye movements can be flexibly employed during mental imagery depending on the demands of the task.
Affiliation(s)
- Hafidah Umar, Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian, Kelantan, Malaysia; Brain and Behaviour Cluster, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian, Kelantan, Malaysia; Department of Psychology, University of Bern, Bern, Switzerland
- Fred W Mast, Department of Psychology, University of Bern, Bern, Switzerland
- Trix Cacchione, Department of Developmental Psychology, School of Education, University of Applied Sciences and Arts Northwestern Switzerland, Windisch, Switzerland
7. Kinjo H, Fooken J, Spering M. Do eye movements enhance visual memory retrieval? Vision Res 2020;176:80-90. PMID: 32827879; DOI: 10.1016/j.visres.2020.07.013.
Abstract
When remembering an object at a given location, participants tend to return their gaze to that location even after the object has disappeared, a phenomenon known as Looking-at-Nothing (LAN). However, it is unclear whether LAN is associated with better memory performance. Previous studies reporting beneficial effects of LAN have often not systematically manipulated or assessed eye movements. We asked 20 participants to remember the location and identity of eight objects arranged in a circle, shown for 5 s. Participants were prompted to judge whether a location statement (e.g., "Star Right") was correct or incorrect, or referred to a previously unseen object. During memory retrieval, participants either fixated in the screen center or were free to move their eyes. Results reveal no difference in memory accuracy and response time between free viewing and fixation, while a LAN effect was found for saccades during free viewing but not for microsaccades during fixation. Memory performance was better in those free-viewing trials in which participants made a saccade to the critical location, and it scaled with saccade accuracy. These results indicate that saccade kinematics might be related to both memory performance and memory retrieval processes, but the strength of their link may differ between individuals and task demands.
Affiliation(s)
- Hikari Kinjo, Faculty of Psychology, Meiji Gakuin University, Tokyo, Japan; Dept Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, BC, Canada
- Jolande Fooken, Dept Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, BC, Canada; Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, BC, Canada
- Miriam Spering, Dept Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, BC, Canada; Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, BC, Canada; Center for Brain Health, University of British Columbia, Vancouver, BC, Canada
8. Rosner A, von Helversen B. Memory shapes judgments: Tracing how memory biases judgments by inducing the retrieval of exemplars. Cognition 2019;190:165-169. PMID: 31100546; DOI: 10.1016/j.cognition.2019.05.004.
Abstract
When making judgments (e.g., about the quality of job candidates) decision makers should ignore salient, but unrepresentative information (e.g., the person's name). However, research suggests that salient information influences judgments, possibly because memories of past encounters with similar information are integrated into the judgment. We studied eye movements to trace the link between the retrieval of past instances and their influence on judgments. Participants were more likely to look at screen locations where exemplars matching items on a name attribute had appeared, suggesting the retrieval of exemplars. Eye movements to exemplar locations predicted judgments, explaining why names influenced judgments. The results provide insights into how exemplars are integrated into the judgment process when assessing memory retrieval online.
Affiliation(s)
- Agnes Rosner, Department of Psychology, University of Zurich, Switzerland
9. Sulutvedt U, Mannix TK, Laeng B. Gaze and the Eye Pupil Adjust to Imagined Size and Distance. Cogn Sci 2018;42:3159-3176. DOI: 10.1111/cogs.12684.
10. Less imageable words lead to more looks to blank locations during memory retrieval. Psychol Res 2018;84:667-684. PMID: 30173279; PMCID: PMC7109172; DOI: 10.1007/s00426-018-1084-6.
Abstract
People revisit spatial locations of visually encoded information when they are asked to retrieve that information, even when the visual image is no longer present. Such "looking at nothing" during retrieval is likely modulated by memory load (i.e., the mental effort to maintain and reconstruct information) and the strength of mental representations. We investigated whether words that are more difficult to remember also lead to more looks to relevant, blank locations. Participants were presented with four nouns on a two-by-two grid. A number of lexico-semantic variables were controlled to form high-difficulty and low-difficulty noun sets. Results reveal more frequent looks to blank locations during retrieval of high-difficulty nouns compared to low-difficulty ones. Mixed-effects modelling demonstrates that imagery-related semantic factors (imageability and concreteness) predict looking at nothing during retrieval. The results provide the first direct evidence that looking at nothing is modulated by word difficulty and, in particular, word imageability. Overall, the research provides substantial support for the integrated memory account for linguistic stimuli and for looking at nothing as a form of mental imagery.
11. Covert shifts of attention can account for the functional role of "eye movements to nothing". Mem Cognit 2017;46:230-243. DOI: 10.3758/s13421-017-0760-x.
12. Roldan SM. Object Recognition in Mental Representations: Directions for Exploring Diagnostic Features through Visual Mental Imagery. Front Psychol 2017;8:833. PMID: 28588538; PMCID: PMC5441390; DOI: 10.3389/fpsyg.2017.00833.
Abstract
One of the fundamental goals of object recognition research is to understand how a cognitive representation produced from the output of filtered and transformed sensory information facilitates efficient viewer behavior. Given that mental imagery strongly resembles perceptual processes in both cortical regions and subjective visual qualities, it is reasonable to question whether mental imagery facilitates cognition in a manner similar to that of perceptual viewing: via the detection and recognition of distinguishing features. Categorizing the feature content of mental imagery holds potential as a reverse pathway by which to identify the components of a visual stimulus which are most critical for the creation and retrieval of a visual representation. This review will examine the likelihood that the information represented in visual mental imagery reflects distinctive object features thought to facilitate efficient object categorization and recognition during perceptual viewing. If it is the case that these representational features resemble their sensory counterparts in both spatial and semantic qualities, they may well be accessible through mental imagery as evaluated through current investigative techniques. In this review, methods applied to mental imagery research and their findings are reviewed and evaluated for their efficiency in accessing internal representations, and implications for identifying diagnostic features are discussed. An argument is made for the benefits of combining mental imagery assessment methods with diagnostic feature research to advance the understanding of visual perceptive processes, with suggestions for avenues of future investigation.
Affiliation(s)
- Stephanie M. Roldan, Virginia Tech Visual Neuroscience Laboratory, Psychology Department, Virginia Polytechnic Institute and State University, Blacksburg, VA, United States
13. Time in the eye of the beholder: Gaze position reveals spatial-temporal associations during encoding and memory retrieval of future and past. Mem Cognit 2016;45:40-48. DOI: 10.3758/s13421-016-0639-2.