1. Wang V, Ongchoco JDK, Scholl BJ. Here it comes: Active forgetting triggered even just by anticipation of an impending event boundary. Psychon Bull Rev 2023; 30:1917-1927. [PMID: 37079173; DOI: 10.3758/s13423-023-02278-2]
Abstract
Visual input arrives in a continuous stream, but we often experience the world as a sequence of discrete events - and the boundaries between events have important consequences for our mental lives. Perhaps the best example of this is that memory not only declines as a function of elapsed time, but is also impaired when crossing an event boundary - as when walking through a doorway. (This impairment may be adaptive, as when one "flushes" a cache in a computer program when completing a function.) But when exactly does this impairment occur? Existing work has not asked this question: based on a reasonable assumption that forgetting occurs when we cross event boundaries, memory has only been tested after this point. Here we demonstrate that even visual cues to an impending event boundary (that one has not yet crossed) suffice to trigger forgetting. Subjects viewed an immersive animation that simulated walking through a room. Before their walk, they saw a list of pseudo-words, and immediately after their walk, their recognition memory was tested. During their walk, some subjects passed through a doorway, while others did not (equating time and distance traveled). Memory was impaired (relative to the "no doorway" condition) not only when they passed through the doorway, but also when they were tested just before they would have crossed the doorway. Additional controls confirmed that this was due to the anticipation of event boundaries (rather than differential surprise or visual complexity). Visual processing may proactively "flush" memory to some degree in preparation for future events.
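The parenthetical above draws an analogy to flushing a cache when a function completes. As a purely illustrative sketch (not code from the study; the names `working_memory`, `encode`, and `cross_event_boundary` are invented here), the analogy might look like this in Python:

```python
# Toy illustration of the 'cache flush' analogy: details are encoded into a
# working cache while an event unfolds, then discarded wholesale when an
# event boundary (like completing a function) is crossed.
working_memory = {}

def encode(item, detail):
    """Store an event-specific detail in the working cache."""
    working_memory[item] = detail

def cross_event_boundary():
    """Flush the cache, freeing capacity for the next event."""
    working_memory.clear()

encode("pseudo-word", "dax")
encode("doorway color", "blue")
print(len(working_memory))  # prints 2

cross_event_boundary()
print(len(working_memory))  # prints 0 -- details were 'flushed'
```

The finding reported above would correspond to the flush being triggered slightly *before* the boundary call, in anticipation of it.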
Affiliation(s)
- Vivian Wang
- Department of Psychology, Yale University, Box 208205, New Haven, CT, 06520-8205, USA
- Brian J Scholl
- Department of Psychology, Yale University, Box 208205, New Haven, CT, 06520-8205, USA
2. Singh L, Göksun T, Hirsh-Pasek K, Golinkoff RM. Sensitivity to visual cues within motion events in monolingual and bilingual infants. J Exp Child Psychol 2023; 227:105582. [PMID: 36375314; DOI: 10.1016/j.jecp.2022.105582]
Abstract
It is well known that infants undergo developmental change in how they respond to language-relevant visual contrasts. For example, when viewing motion events, infants' sensitivities to background information ("ground-path cues," e.g., whether a background is flat and continuous or bounded) change with age. Prior studies with English and Japanese monolingual infants have demonstrated that 14-month-old infants discriminate between motion events that take place against different ground-paths (e.g., an unbounded field vs a bounded street). By 19 months of age, this sensitivity becomes more selective in monolingual infants; only learners of languages that lexically contrast these categories, such as Japanese, discriminate between such events. In this study, we investigated this progression in bilingual infants. We first replicated past reports of an age-related decline in ground-path sensitivity from 14 to 19 months in English monolingual infants living in a multilingual society. English-Mandarin bilingual infants living in that same society were then tested on discrimination of ground-path cues at 14, 19, and 24 months. Although neither the English nor Mandarin language differentiates motion events based on ground-path cues, bilingual infants demonstrated protracted sensitivity to these cues. Infants exhibited a lack of discrimination at 14 months, followed by discrimination at 19 months and a subsequent decline in discrimination at 24 months. In addition, bilingual infants demonstrated more fine-grained sensitivities to subtle ground cues not observed in monolingual infants.
Affiliation(s)
- Leher Singh
- Department of Psychology, National University of Singapore, Singapore 117570, Singapore.
- Tilbe Göksun
- Department of Psychology, Koç University, 34450 Sarıyer/Istanbul, Turkey
- Kathy Hirsh-Pasek
- Department of Psychology, Temple University, Philadelphia, PA 19122, USA
3. Ongchoco JDK, Scholl BJ. Scaffolded attention in time: 'Everyday hallucinations' of rhythmic patterns from regular auditory beats. Atten Percept Psychophys 2021. [PMID: 34939165; DOI: 10.3758/s13414-021-02409-8]
Abstract
A regular grid (e.g. on a piece of graph paper) is made up of squares which (by definition) have no structure. When people stare at such a grid, however, they may nevertheless see a shifting array of structured patterns such as lines, crosses, or even block-letters - something that doesn't occur when staring at a blank page. This is the phenomenon of scaffolded attention, and recent work has demonstrated that this involves the creation of bona fide object representations (e.g. that enjoy 'same-object advantages'). Is this an intrinsically visuospatial phenomenon, or might it rather reflect a much more general effect of perceiving structure from regular scaffolds, which could also occur in other dimensions or modalities? Here we show for the first time that there is also robust scaffolded attention in time: a regular series of tones (as might come from a metronome) has no structure beyond the 'beats' themselves, but people nevertheless hear a shifting array of structured rhythms - a phenomenon that doesn't occur when listening to silence. We demonstrate (in tests of temporal 'same-event advantages') that this (entirely internal) process gives rise to bona fide event representations. Thus the relationship between attention and events is bidirectional: event structure can guide attention, but attention can also create event structure in the first place. In this way we show how 'everyday hallucinations' of rhythmic patterns can arise in the absence of explicit sensory structure.
4. Xu H, Pan JS, Wang XM, Bingham GP. Information for perceiving blurry events: Optic flow and color are additive. Atten Percept Psychophys 2021; 83:389-398. [PMID: 33000441; DOI: 10.3758/s13414-020-02135-7]
Abstract
Information used in visual event perception includes both static image structure projected from opaque object surfaces and dynamic optic flow generated by motion. Events presented in static blurry grayscale displays have been shown to be recognized only during and after presentation with optic flow. In this study, we investigated the effects of optic flow and color on identifying blurry events by studying identification accuracy and eye-movement patterns. Three types of color displays were tested: grayscale, original colors, or rearranged colors (where the RGB values of the original colors were adjusted). In each color condition, participants identified 12 blurry events in five experimental phases. In the first two phases, static blurry images were presented alone or sequentially with a motion mask between consecutive frames, and identification was poor. In Phase 3, where optic flow was added, identification was comparably good. In Phases 4 and 5, motion was removed, but identification remained good. Thus, optic flow improved event identification both during and after its presentation. Color also improved performance: participants were consistently better at identifying original-color displays than grayscale or rearranged-color displays. Importantly, the effects of optic flow and color were additive. Finally, in both motion and postmotion phases, a significant portion of eye fixations fell in strong optic flow areas, suggesting that participants continued to look where flow was available even after it stopped. We infer that optic flow specified depth structure in the blurry image structure and yielded an improvement in identification from static blurry images.
5. Heffner CC, Newman RS, Idsardi WJ. Action at a distance: Long-distance rate adaptation in event perception. Q J Exp Psychol (Hove) 2020; 74:312-325. [PMID: 32988312; DOI: 10.1177/1747021820959756]
Abstract
Viewers' perception of actions is coloured by the context in which those actions are found. An action that seems uncomfortably sudden in one context might seem expeditious in another. In this study, we examined the influence of one type of context: the rate at which an action is being performed. Based on parallel findings in other modalities, we anticipated that viewers would adapt to the rate at which actions were displayed. Viewers watched a series of actions performed on a touchscreen that could end in actions that were ambiguous in number (e.g., two separate "tap" actions versus a single "double tap" action) or identity (e.g., a "swipe" action versus a slower "drag"). In Experiment 1, the rate of the actions themselves was manipulated; participants used the rate of the actions to distinguish between two similar, related actions. In Experiment 2, the rate of the actions that preceded the ambiguous one was sped up or slowed down. In line with our hypotheses, viewers perceived the identity of those final actions with reference to the rate of the preceding actions. This was true even in Experiment 3, when the action immediately before the ambiguous one was left unmodified. Ambiguous actions embedded in a fast context were seen as relatively long, while ambiguous actions embedded in a slow context were seen as relatively short. This shows that viewers adapt to the rate of actions when perceiving visual events.
Affiliation(s)
- Christopher C Heffner
- Program in Neuroscience & Cognitive Science, University of Maryland, College Park, MD, USA; Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, USA; Department of Linguistics, University of Maryland, College Park, MD, USA; Department of Communicative Disorders and Sciences, University at Buffalo, Buffalo, NY, USA
- Rochelle S Newman
- Program in Neuroscience & Cognitive Science, University of Maryland, College Park, MD, USA; Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, USA
- William J Idsardi
- Program in Neuroscience & Cognitive Science, University of Maryland, College Park, MD, USA; Department of Linguistics, University of Maryland, College Park, MD, USA
6. Swallow KM, Wang Q. Culture influences how people divide continuous sensory experience into events. Cognition 2020; 205:104450. [PMID: 32927384; DOI: 10.1016/j.cognition.2020.104450]
Abstract
Everyday experience is divided into meaningful events as a part of human perception. Current accounts of this process, known as event segmentation, focus on how characteristics of the experience (e.g., situation changes) influence segmentation. However, characteristics of the viewers themselves have been largely neglected. We test whether one such viewer characteristic, their cultural background, impacts online event segmentation. Culture could impact event segmentation (1) by emphasizing different aspects of experiences as being important for comprehension, memory, and communication, and (2) by providing different exemplars of how everyday activities are performed, which objects are likely to be used, and how scenes are laid out. Indian and US viewers (N = 152) identified events in everyday activities (e.g., making coffee) recorded in Indian and US settings. Consistent with their cultural preference for analytical processing, US viewers segmented the activities into more events than did Indian viewers. Furthermore, event boundaries identified by US viewers were more strongly associated with visual changes, whereas boundaries identified by Indian viewers were more strongly associated with goal changes. There was no evidence that familiarity with an activity impacted segmentation. Thus, culture impacts event perception by altering the types of information people prioritize when dividing experience into meaningful events.
Affiliation(s)
- Khena M Swallow
- Department of Psychology, Cornell University, Ithaca, NY, USA.
- Qi Wang
- Department of Human Development, Cornell University, Ithaca, NY, USA
7. Smith ME, Newberry KM, Bailey HR. Differential effects of knowledge and aging on the encoding and retrieval of everyday activities. Cognition 2020; 196:104159. [PMID: 31865171; PMCID: PMC7028520; DOI: 10.1016/j.cognition.2019.104159]
Abstract
We deconstruct continuous streams of action into smaller, meaningful events. Research has shown that the ability to segment continuous activity into such events and remember their contents declines with age; however, knowledge improves with age. We investigated how young and older adults use knowledge to more efficiently encode and later remember information from everyday events by having participants view a series of self-paced slideshows depicting everyday activities. For some activities, older adults produce more normative scripts than do young adults (older adult activities), and for other activities, young adults produce more normative scripts than do older adults (young adult activities). Overall, participants viewed slides at event boundaries longer than slides within events (i.e., the event boundary advantage), replicating prior research (e.g., Hard, Recchia, & Tversky, 2011). Importantly, older adults demonstrated the boundary advantage for the older adult activities but not the young adult activities, and they also had better recognition memory for the older adult activities than for the young adult activities. We also found that the magnitude of a participant's boundary advantage was associated with better memory, but only for the activities about which participants were less knowledgeable. Results indicate that older adults use their intact knowledge to better encode and remember everyday activities, but that knowledge and event segmentation may have independent influences on event memory.
8. Loschky LC, Larson AM, Smith TJ, Magliano JP. The Scene Perception & Event Comprehension Theory (SPECT) Applied to Visual Narratives. Top Cogn Sci 2019; 12:311-351. [PMID: 31486277; PMCID: PMC9328418; DOI: 10.1111/tops.12455]
Abstract
Understanding how people comprehend visual narratives (including picture stories, comics, and film) requires the combination of traditionally separate theories that span the initial sensory and perceptual processing of complex visual scenes, the perception of events over time, and comprehension of narratives. Existing piecemeal approaches fail to capture the interplay between these levels of processing. Here, we propose the Scene Perception & Event Comprehension Theory (SPECT), as applied to visual narratives, which distinguishes between front-end and back-end cognitive processes. Front-end processes occur during single eye fixations and are comprised of attentional selection and information extraction. Back-end processes occur across multiple fixations and support the construction of event models, which reflect understanding of what is happening now in a narrative (stored in working memory) and over the course of the entire narrative (stored in long-term episodic memory). We describe relationships between front- and back-end processes, and medium-specific differences that likely produce variation in front-end and back-end processes across media (e.g., picture stories vs. film). We describe several novel research questions derived from SPECT that we have explored. By addressing these questions, we provide greater insight into how attention, information extraction, and event model processes are dynamically coordinated to perceive and understand complex naturalistic visual events in narratives and the real world. Comprehension of visual narratives like comics, picture stories, and films involves both decoding the visual content and construing the meaningful events they represent. The Scene Perception & Event Comprehension Theory (SPECT) proposes a framework for understanding how a comprehender perceptually negotiates the surface of a visual representation and integrates its meaning into a growing mental model.
Affiliation(s)
- Tim J Smith
| | | | - Tim J Smith
- Department of Psychological Sciences, Birkbeck, University of London
9. Ongchoco JDK, Scholl BJ. Did that just happen? Event segmentation influences enumeration and working memory for simple overlapping visual events. Cognition 2019; 187:188-197. [PMID: 30897509; DOI: 10.1016/j.cognition.2019.01.002]
Abstract
For working memory to be efficient, it is important not only to remember, but also to forget - thus freeing up memory for additional information. But what triggers forgetting? Beyond continuous temporal decay, memory is thought to be effectively 'flushed' to some degree at discrete event boundaries - i.e., when one event ends and another begins. But this framework does not readily apply to real-world visual experience, where events are constantly and asynchronously beginning, unfolding, and ending all around us. In this rush of things always happening, when might memory be flushed? In a series of experiments, we explored this using maximally simple visual events. A number of dots appeared, a subset moved at random speeds in random directions, and observers simply had to estimate the number of dots that moved. Critically, however, these motions could begin and end asynchronously. In general, asynchronous motions led to underestimation, but further experiments demonstrated that this was driven only by endings: regardless of whether dots started moving together or separately, animations with asynchronous endings led to underestimation - even while carefully controlling for both the overall amount of motion and average starting and ending times. (In contrast, no such effect occurred for asynchronous beginnings.) Thus, the ends of events seem to have an outsize influence on working memory - but only in the context of other ongoing events: once a motion ends amidst other unfinished motions, it seems more difficult to recall that particular motion as having occurred as a distinct event.
10. Hafri A, Trueswell JC, Strickland B. Encoding of event roles from visual scenes is rapid, spontaneous, and interacts with higher-level visual processing. Cognition 2018; 175:36-52. [PMID: 29459238; DOI: 10.1016/j.cognition.2018.02.011]
Abstract
A crucial component of event recognition is understanding event roles, i.e. who acted on whom: boy hitting girl is different from girl hitting boy. We often categorize Agents (i.e. the actor) and Patients (i.e. the one acted upon) from visual input, but do we rapidly and spontaneously encode such roles even when our attention is otherwise occupied? In three experiments, participants observed a continuous sequence of two-person scenes and had to search for a target actor in each (the male/female or red/blue-shirted actor) by indicating with a button press whether the target appeared on the left or the right. Critically, although role was orthogonal to gender and shirt color, and was never explicitly mentioned, participants responded more slowly when the target's role switched from trial to trial (e.g., the male went from being the Patient to the Agent). In a final experiment, we demonstrated that this effect cannot be fully explained by differences in posture associated with Agents and Patients. Our results suggest that extraction of event structure from visual scenes is rapid and spontaneous.
Affiliation(s)
- Alon Hafri
- Department of Psychology, University of Pennsylvania, 425 S. University Avenue, Philadelphia, PA 19104, USA.
- John C Trueswell
- Department of Psychology, University of Pennsylvania, 425 S. University Avenue, Philadelphia, PA 19104, USA
- Brent Strickland
- Département d'Etudes Cognitives, Ecole Normale Supérieure, PSL Research University, Institut Jean Nicod (ENS, EHESS, CNRS), 75005 Paris, France
11. Brockhoff A, Huff M, Maurer A, Papenmeier F. Seeing the unseen? Illusory causal filling in FIFA referees, players, and novices. Cogn Res Princ Implic 2017; 1:7. [PMID: 28180158; PMCID: PMC5256435; DOI: 10.1186/s41235-016-0008-5]
Abstract
Humans often falsely report having seen a causal link between two dynamic scenes if the second scene depicts a valid logical consequence of the initial scene. For example, a video clip shows someone kicking a ball, including the ball's subsequent flight. Even if the video clip omitted the moment of contact (i.e., the causal link), participants falsely report having seen this moment. In the current study, we explored the interplay of cognitive-perceptual expertise and event perception by measuring the false-alarm rates of three groups with differing involvement in football (soccer in North America): novices, players, and FIFA referees. We used the event-completion paradigm with video footage of a real football match, presenting either complete clips or incomplete clips (i.e., with the contact moment omitted). Either a causally linked scene or an incoherent scene followed a cut in the incomplete videos. Causally linked scenes induced false recognitions in all three groups: although the ball contact moment was not presented, participants indicated that they had seen the contact as frequently when it was absent as in the complete condition. In a second experiment, we asked the novices to detect the ball contact moment when it was either visible or not, and when it was either followed by a causally or non-causally linked scene. Here, instead of being shown pictures from the clip, the participants were given a two-alternative forced-choice task: “Yes, contact was visible” or “No, contact was not visible”. The results of Experiment 1 indicate that conceptual interpretations of simple events are independent of expertise: there were no top-down effects on perception. Participants in Experiment 2 detected the ball contact moment correctly significantly more often in the non-causal than in the causal conditions, indicating that the effect observed in Experiment 1 was not an artifact of the design (e.g., inducing a false memory for the presented pictures). The theoretical as well as the practical implications are discussed.
Affiliation(s)
- Alisa Brockhoff
- Department of Psychology, Eberhard Karls Universität Tübingen, Schleichstr. 4, Tübingen, 72076 Germany
- Markus Huff
- Department of Psychology, Eberhard Karls Universität Tübingen, Schleichstr. 4, Tübingen, 72076 Germany
- Annika Maurer
- Department of Psychology, Eberhard Karls Universität Tübingen, Schleichstr. 4, Tübingen, 72076 Germany
- Frank Papenmeier
- Department of Psychology, Eberhard Karls Universität Tübingen, Schleichstr. 4, Tübingen, 72076 Germany
12. Song L, Pruden SM, Golinkoff RM, Hirsh-Pasek K. Prelinguistic foundations of verb learning: Infants discriminate and categorize dynamic human actions. J Exp Child Psychol 2016; 151:77-95. [PMID: 26968395; PMCID: PMC5017891; DOI: 10.1016/j.jecp.2016.01.004]
Abstract
Action categorization is necessary for human cognition and is foundational to learning verbs, which label categories of actions and events. In two studies using a nonlinguistic preferential looking paradigm, 10- to 12-month-old English-learning infants were tested on their ability to discriminate and categorize a dynamic human manner of motion (i.e., way in which a figure moves; e.g., marching). Study 1 results reveal that infants can discriminate a change in path and actor across instances of the same manner of motion. Study 2 results suggest that infants categorize the manner of motion for dynamic human events even under conditions in which other components of the event change, including the actor's path and the actor. Together, these two studies extend prior research on infant action categorization of animated motion events by providing evidence that infants can categorize dynamic human actions, a skill foundational to the learning of motion verbs.
Affiliation(s)
- Lulu Song
- Brooklyn College, The City University of New York, Brooklyn, NY 11210, USA.
13. Libertus K, Greif ML, Needham AW, Pelphrey K. Infants' observation of tool-use events over the first year of life. J Exp Child Psychol 2016; 152:123-135. [PMID: 27522041; DOI: 10.1016/j.jecp.2016.07.004]
Abstract
How infants observe a goal-directed instrumental action provides a unique window into their understanding of others' behavior. In this study, we investigated eye-gaze patterns while infants observed events in which an actor used a tool on an object. Comparisons among 4-, 7-, 10-, and 12-month-old infants and adults reveal changes in infants' looking patterns with age; following an initial face bias, infants' scan path eventually shows a dynamic integration of both the actor's face and the objects on which they act. This shift may mark a transition in infants' understanding of the critical components of tool-use events and their understanding of others' behavior.
Affiliation(s)
- Klaus Libertus
- Department of Psychology and Learning Research and Development Center, University of Pittsburgh, Pittsburgh, PA 15260, USA.
- Amy Work Needham
- Department of Psychology and Human Development, Vanderbilt University, Nashville, TN 37203, USA
- Kevin Pelphrey
- Autism and Neurodevelopmental Disorders Institute, The George Washington University and Children's National Health System, Washington, DC 20037, USA
14. Zacks JM, Kurby CA, Landazabal CS, Krueger F, Grafman J. Effects of penetrating traumatic brain injury on event segmentation and memory. Cortex 2015; 74:233-246. [PMID: 26704077; DOI: 10.1016/j.cortex.2015.11.002]
Abstract
Penetrating traumatic brain injury (pTBI) is associated with deficits in cognitive tasks including comprehension and memory, and also with impairments in tasks of daily living. In naturalistic settings, one important component of cognitive task performance is event segmentation, the ability to parse the ongoing stream of behavior into meaningful units. Event segmentation ability is associated with memory performance and with action control, but is not well assessed by standard neuropsychological assessments or laboratory tasks. Here, we measured event segmentation and memory in a sample of 123 male military veterans aged 59-81 who had suffered a traumatic brain injury as young men, and 34 demographically similar controls. Participants watched movies of everyday activities and segmented them to identify fine-grained or coarse-grained events, and then completed tests of recognition memory for pictures from the movies and of memory for the temporal order of actions in the movies. Lesion location and volume were assessed with computed tomography (CT) imaging. Patients with traumatic brain injury were impaired on event segmentation. Those with larger lesions had larger impairments for fine segmentation and also impairments for both memory measures. Further, the degree of memory impairment was statistically mediated by the degree of event segmentation impairment. There was some evidence that lesions to the ventromedial prefrontal cortex (vmPFC) selectively impaired coarse segmentation; however, lesions outside of a priori regions of interest also were associated with impaired segmentation. One possibility is that the effect of vmPFC damage reflects the role of prefrontal event knowledge representations in ongoing comprehension. These results suggest that assessment of naturalistic event comprehension can be a valuable component of cognitive assessment in cases of traumatic brain injury, and that interventions aimed at event segmentation could be clinically helpful.
Affiliation(s)
| | - Christopher A Kurby
- Washington University, Saint Louis, MO, USA; Grand Valley State University, Allendale, MI, USA
- Jordan Grafman
- Rehabilitation Institute of Chicago, Chicago, IL, USA; Northwestern University, Chicago, IL, USA
15.
Abstract
Research exploring visual attention has demonstrated that people are aware of only a small proportion of visual properties, and that people only track these properties over a subset of moments in time. This makes it critical to understand how our perceptual system leverages its limited capacity, such that properties are tracked across views only when they can support an understanding of meaningful events. In this paper, we propose that relational triggers induce between-view property comparisons when spatial relationships between objects appear inconsistent across views - moments that are particularly likely to mark the beginning of meaningful events. In these experiments, we activate relational triggers by violating heuristics that filmmakers use to create visuospatial continuity across views. We find that these violations increase change detection when they coincide with visual property changes, demonstrating that relational triggers induce a comparison of properties held in working memory. We also demonstrate that relational triggers increase the likelihood of event segmentation, and that change detection increases both in response to triggers and natural event boundaries. We propose that relational triggers are an effective heuristic cue that facilitates the comparison of properties when they are likely to be useful during event perception.