1
Abstract
Navigation - determining how to get from where you are to somewhere else - has obvious importance for the survival of motile animals. A new neuroimaging study has revealed that, in the human brain, the occipital place area detects the number of possible paths in a vista.
2
The roles of order, distance, and interstitial items in temporal visual statistical learning. Atten Percept Psychophys 2018; 80:1409-1419. PMID: 29956264. DOI: 10.3758/s13414-018-1556-1.
Abstract
Humans are adept at learning regularities in a visual environment, even without explicit cues to structure and in the absence of instruction - this has been termed "visual statistical learning" (VSL). The nature of the representations resulting from VSL is still poorly understood. In five experiments, we examined the specificity of temporal VSL representations. In Experiments 1A, 1B, and 2, we compared recognition rates of triplets and all embedded pairs to chance. Robust learning of all structures was evident, and even pairs of non-adjacent items in a sequentially presented triplet (AC extracted from a triplet composed of ABC) were recognized at above-chance levels. In Experiment 3, we asked whether people could recognize rearranged pairs, to examine the flexibility of learned representations. Recognition of all possible orders of target triplets and pairs was significantly higher than chance, and there were no differences between canonical orderings and their corresponding randomized orderings, suggesting that learners did not depend on the originally experienced stimulus orderings to recognize co-occurrence. Experiment 4 demonstrated the essential role of interstitial items in VSL representations. By comparing the learning of quadruplet sets (e.g., ABCD) with that of triplet sets (e.g., ABC), we found that learning of AC and BD in ABCD (quadruplet) sets was better than learning of AC in ABC (triplet) sets. This pattern of results may reflect the critical role of interstitial items in statistical learning. In short, our work supports the idea of a generalized representation in VSL and provides evidence about how this representation is structured.
3
Brockmole JR, Henderson JM. Recognition and attention guidance during contextual cueing in real-world scenes: evidence from eye movements. Q J Exp Psychol (Hove) 2006; 59:1177-1187. PMID: 16769618. DOI: 10.1080/17470210600665996.
Abstract
When confronted with a previously encountered scene, what information is used to guide search to a known target? We contrasted the role of a scene's basic-level category membership with its specific arrangement of visual properties. Observers were repeatedly shown photographs of scenes that contained consistently but arbitrarily located targets, allowing target positions to be associated with scene content. Learned scenes were then unexpectedly mirror reversed, spatially translating visual features as well as the target across the display while preserving the scene's identity and concept. Mirror reversals produced a cost as the eyes initially moved toward the position in the display in which the target had previously appeared. The cost was not complete, however; when initial search failed, the eyes were quickly directed to the target's new position. These results suggest that in real-world scenes, shifts of attention are initially based on scene identity, and subsequent shifts are guided by more detailed information regarding scene and object layout.
4
Affiliation(s)
- Beatrix Emo
- Chair of Cognitive Science, ETH Zürich, Zürich, Switzerland
5
Abstract
Boundary extension is a common false-memory error in which people confidently remember seeing a wider-angle view of a scene than was actually viewed. Previous research found boundary extension to be scene-specific and did not examine the phenomenon in non-scene stimuli. The present research explored boundary extension in cropped face images. Participants completed either a short-term or a long-term condition of the task. During encoding, they viewed photographs of faces cropped at either the forehead or the chin, and subsequently performed face recognition through forced-choice selection; the recognition options represented different degrees of boundary extension and boundary restriction errors. Eye-tracking and performance data were collected. The results demonstrated boundary extension in both memory conditions. Furthermore, whereas previous literature reported asymmetry in the amount of expansion on different sides of an image, the present work provides evidence of asymmetry in boundary extension: in the short-term condition, boundary extension errors were more pronounced for forehead than for chin face areas. Finally, this research examined the relationships among measures of boundary extension, imagery, and emotion. The results suggest that individual differences in emotional ability and in object, but not spatial, imagery could be associated with boundary extension in face processing.
Affiliation(s)
- Olesya Blazhenkova
- Faculty of Arts and Social Sciences, Sabancı University, Istanbul, Turkey
6
Abstract
Recognition memory was investigated for individual frames extracted from temporally continuous, visually rich film segments of 5–15 min. Participants viewed a short clip from a film in either a coherent or a jumbled order, followed by a recognition test of studied frames. Foils came either from an earlier or a later part of the film (Experiment 1) or from deleted segments selected from random cuts of varying duration (0.5 to 30 s) within the film itself (Experiment 2). When the foils came from an earlier or later part of the film (Experiment 1), recognition was excellent, with the hit rate far exceeding the false-alarm rate (.78 vs. .18). In Experiment 2, recognition was far worse: the hit rate (.76) exceeded the false-alarm rate only for foils drawn from the longest cuts (15 and 30 s) and matched the false-alarm rate for the 5 s segments. When the foils were drawn from the briefest cuts (0.5 and 1.0 s), the false-alarm rate exceeded the hit rate. Unexpectedly, jumbling had no effect on recognition in either experiment. These results are consistent with the view that memory for complex, temporally extended visual events is excellent, with its integrity unperturbed by disruption of the global structure of the visual stream. Disruption of memory was observed only when foils were drawn from embedded segments shorter than 5 s, an outcome consistent with the view that memory at these shortest durations is consolidated with expectations drawn from the preceding stream.
Affiliation(s)
- Ryan Ferguson
- Department of Psychology, Arizona State University, Tempe, AZ, USA
- Donald Homa
- Department of Psychology, Arizona State University, Tempe, AZ, USA
- Derek Ellis
- Department of Psychology, Arizona State University, Tempe, AZ, USA
7
Visual statistical learning of temporal structures at different hierarchical levels. Atten Percept Psychophys 2016; 78:1308-1323. PMID: 27068052. DOI: 10.3758/s13414-016-1104-9.
Abstract
Visual environments are complex. To process the complex information they provide, the visual system adopts strategies to reduce this complexity. One strategy, called visual statistical learning (VSL), is to extract statistical regularities from the environment. Another is to use the hierarchical structure of a scene (e.g., the co-occurrence between local and global information). Through a series of experiments, this study investigated whether statistical regularities and hierarchical structure could work together to reduce the complexity of a scene. In the familiarization phase, participants passively viewed a stream of hierarchical scenes in which shapes were concurrently presented at the local and global levels. At each of the two levels there were temporal regularities among three shapes, which always appeared in the same order. In the test phase, participants judged the familiarity of two triplets whose temporal regularities were either preserved or not. We found that participants extracted the temporal regularities at both the local and the global level (Experiment 1). The hierarchical structure influenced the ability to extract the temporal regularities (Experiment 2): specifically, VSL was either enhanced or impaired depending on whether or not the hierarchical structure was informative. In summary, to process a complex scene, the visual system flexibly uses both the statistical regularities and the hierarchical structure of the scene.
8
Schoth D, Godwin H, Liversedge S, Liossi C. Eye movements during visual search for emotional faces in individuals with chronic headache. Eur J Pain 2014; 19:722-732. DOI: 10.1002/ejp.595.
Affiliation(s)
- D.E. Schoth
- Academic Unit of Psychology; University of Southampton; UK
- H.J. Godwin
- Academic Unit of Psychology; University of Southampton; UK
- C. Liossi
- Academic Unit of Psychology; University of Southampton; UK
9
Konkle T, Brady TF, Alvarez GA, Oliva A. Scene memory is more detailed than you think: the role of categories in visual long-term memory. Psychol Sci 2010; 21:1551-1556. PMID: 20921574. DOI: 10.1177/0956797610385359.
Abstract
Observers can store thousands of object images in visual long-term memory with high fidelity, but the fidelity of scene representations in long-term memory is not known. Here, we probed scene-representation fidelity by varying the number of studied exemplars in different scene categories and testing memory using exemplar-level foils. Observers viewed thousands of scenes over 5.5 hr and then completed a series of forced-choice tests. Memory performance was high, even with up to 64 scenes from the same category in memory. Moreover, there was only a 2% decrease in accuracy for each doubling of the number of studied scene exemplars. Surprisingly, this degree of categorical interference was similar to the degree previously demonstrated for object memory. Thus, although scenes have often been defined as a superset of objects, our results suggest that scenes and objects may be entities at a similar level of abstraction in visual long-term memory.
Affiliation(s)
- Talia Konkle
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 77 Massachusetts Ave., Cambridge, MA 02139, USA.
10
Brockmole JR, Hambrick DZ, Windisch DJ, Henderson JM. The role of meaning in contextual cueing: evidence from chess expertise. Q J Exp Psychol (Hove) 2008; 61:1886-1896. PMID: 18609364. DOI: 10.1080/17470210701781155.
Abstract
In contextual cueing, the position of a search target is learned over repeated exposures to a visual display. The strength of this effect varies across stimulus types. For example, real-world scene contexts give rise to larger search benefits than contexts composed of letters or shapes. We investigated whether such differences in learning can be at least partially explained by the degree of semantic meaning associated with a context independently of the nature of the visual information available (which also varies across stimulus types). Chess boards served as the learning context as their meaningfulness depends on the observer's knowledge of the game. In Experiment 1, boards depicted actual game play, and search benefits for repeated boards were 4 times greater for experts than for novices. In Experiment 2, search benefits among experts were halved when less meaningful randomly generated boards were used. Thus, stimulus meaningfulness independently contributes to learning context-target associations.
11
Hollingworth A. Visual memory for natural scenes: evidence from change detection and visual search. Visual Cognition 2006. DOI: 10.1080/13506280500193818.
12
Hollingworth A. The relationship between online visual representation of a scene and long-term scene memory. J Exp Psychol Learn Mem Cogn 2005; 31:396-411. PMID: 15910127. DOI: 10.1037/0278-7393.31.3.396.
Abstract
In 3 experiments the author investigated the relationship between the online visual representation of natural scenes and long-term visual memory. In a change detection task, a target object either changed or remained the same from an initial image of a natural scene to a test image. Two types of changes were possible: rotation in depth, or replacement by another object from the same basic-level category. Change detection during online scene viewing was compared with change detection after a delay of 1 trial (Experiments 2A and 2B), after a delay until the end of the study session (Experiment 1), or after a delay of 24 hr (Experiment 3). There was little or no decline in change detection performance from online viewing to a delay of 1 trial or a delay until the end of the session, and change detection remained well above chance after 24 hr. These results demonstrate that long-term memory for visual detail in a scene is robust.
Affiliation(s)
- Andrew Hollingworth
- Department of Psychology, The University of Iowa, Iowa City, IA 52242-1407, USA.