1. Malaie S, Spivey MJ, Marghetis T. Divergent and Convergent Creativity Are Different Kinds of Foraging. Psychol Sci 2024. [PMID: 38713456] [DOI: 10.1177/09567976241245695]
Abstract
According to accounts of neural reuse and embodied cognition, higher-level cognitive abilities recycle evolutionarily ancient mechanisms for perception and action. Here, building on these accounts, we investigate whether creativity builds on our capacity to forage in space ("creativity as strategic foraging"). We report systematic connections between specific forms of creative thinking, divergent and convergent, and corresponding strategies for searching in space. U.S. American adults completed two tasks designed to measure creativity. Before each creativity trial, participants completed an unrelated search of a city map. Between subjects, we manipulated the search pattern, with some participants seeking multiple, dispersed spatial locations and others repeatedly converging on the same location. Participants who searched divergently in space were better at divergent thinking but worse at convergent thinking; this pattern reversed for participants who had converged repeatedly on a single location. These results demonstrate a targeted link between foraging and creativity, thus advancing our understanding of the origins and mechanisms of high-level cognition.
Affiliation(s)
- Soran Malaie, Department of Cognitive and Information Sciences, University of California, Merced
- Michael J Spivey, Department of Cognitive and Information Sciences, University of California, Merced
- Tyler Marghetis, Department of Cognitive and Information Sciences, University of California, Merced
2. Ryskin RA, Spivey MJ. Toward sophisticated models of naturalistic language behavior. Comment on "Beyond Simple Laboratory Studies" by A. Maselli et al. Phys Life Rev 2023; 47:191-194. [PMID: 37926021] [DOI: 10.1016/j.plrev.2023.10.022]
Affiliation(s)
- Rachel A Ryskin, Department of Cognitive & Information Sciences, University of California, Merced, USA
- Michael J Spivey, Department of Cognitive & Information Sciences, University of California, Merced, USA
3. Malaie S, Karimi H, Jahanitabesh A, Bargh JA, Spivey MJ. Concepts in Space: Enhancing Lexical Search With a Spatial Diversity Prime. Cogn Sci 2023; 47:e13327. [PMID: 37534377] [DOI: 10.1111/cogs.13327]
Abstract
Informed by theories of embodied cognition, in the present study, we designed a novel priming technique to investigate the impact of spatial diversity and script direction on searching through concepts in both English and Persian (i.e., two languages with opposite script directions). First, participants connected a target dot either to one other dot (linear condition) or to multiple other dots (diverse condition) and either from left to right (rightward condition) or from right to left (leftward condition) on a computer touchscreen using their dominant hand's forefinger. Following the spatial prime, they were asked to generate as many words as possible using two-letter cues (e.g., "lo" → "love," "lobster") in 20 s. We hypothesized that greater spatial diversity, and consistency with script direction, should facilitate conceptual search and result in a higher number of word productions. In both languages, word production performance was superior for the diverse prime relative to the linear prime, suggesting that searching through lexical memory is facilitated by spatial diversity. Although some effects were observed for the directionality of the spatial prime, they were not consistent across experiments and did not correlate with script direction. This pattern of results suggests that a spatial prime that promotes diverse paths can improve word retrieval from lexical memory and lends empirical support to the embodied cognition framework, in which spatial relations play a crucial role in the conceptual system.
Affiliation(s)
- Soran Malaie, Cognitive and Information Sciences, University of California, Merced
- Michael J Spivey, Cognitive and Information Sciences, University of California, Merced
4.
Abstract
Despite its many twists and turns, the arc of cognitive science generally bends toward progress, thanks to its interdisciplinary nature. By glancing at the last few decades of experimental and computational advances, it can be argued that, far from failing to converge on a shared set of conceptual assumptions, the field is indeed making steady consensual progress toward what can broadly be referred to as interactive frameworks. This inclination is apparent in the subfields of psycholinguistics, visual perception, embodied cognition, extended cognition, neural networks, dynamical systems theory, and more. This pictorial essay briefly documents this steady progress both from a bird's eye view and from the trenches. The conclusion is one of optimism that cognitive science is getting there, albeit slowly and arduously, like any good science should.
Affiliation(s)
- Michael J Spivey, Department of Cognitive and Information Sciences, University of California, Merced
5. Falandays JB, Nguyen B, Spivey MJ. Is prediction nothing more than multi-scale pattern completion of the future? Brain Res 2021; 1768:147578. [PMID: 34284021] [DOI: 10.1016/j.brainres.2021.147578]
Abstract
While the notion of the brain as a prediction machine has been extremely influential and productive in cognitive science, there are competing accounts of how best to model and understand the predictive capabilities of brains. One prominent framework is of a "Bayesian brain" that explicitly generates predictions and uses resultant errors to guide adaptation. We suggest that the prediction-generation component of this framework may involve little more than a pattern completion process. We first describe pattern completion in the domain of visual perception, highlighting its temporal extension, and show how this can entail a form of prediction in time. Next, we describe the forward momentum of entrained dynamical systems as a model for the emergence of predictive processing in non-predictive systems. Then, we apply this reasoning to the domain of language, where explicitly predictive models are perhaps most popular. Here, we demonstrate how a connectionist model, TRACE, exhibits hallmarks of predictive processing without any representations of predictions or errors. Finally, we present a novel neural network model, inspired by reservoir computing models, that is entirely unsupervised and memoryless, but nonetheless exhibits prediction-like behavior in its pursuit of homeostasis. These explorations demonstrate that brain-like systems can get prediction "for free," without the need to posit formal logical representations with Bayesian probabilities or an inference machine that holds them in working memory.
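The claim that prediction may reduce to pattern completion can be made concrete with a classic Hopfield-style associative memory. This is a generic sketch of the mechanism the abstract invokes, not the authors' reservoir model; the patterns and network size are invented for illustration.

```python
import numpy as np

# A minimal Hopfield-style sketch of pattern completion. Two binary
# patterns are stored via Hebbian learning (sum of outer products);
# a corrupted cue is then completed by iterating the network dynamics.
patterns = np.array([
    [1,  1,  1,  1, -1, -1, -1, -1],
    [1, -1,  1, -1,  1, -1,  1, -1],
])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

cue = patterns[0].astype(float)
cue[0] *= -1  # corrupt one unit of the first stored pattern
state = cue
for _ in range(5):  # settle into the nearest stored attractor
    state = np.sign(W @ state)
# state now matches patterns[0]: the network completed the missing part.
```

Running the same completion dynamics over states that extend forward in time is what turns this mechanism into something that looks like prediction.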
Affiliation(s)
- J Benjamin Falandays, Department of Cognitive and Information Sciences, University of California, Merced, United States
- Benjamin Nguyen, Department of Cognitive and Information Sciences, University of California, Merced, United States
- Michael J Spivey, Department of Cognitive and Information Sciences, University of California, Merced, United States
6. Falandays JB, Spivey MJ. Abstract meanings may be more dynamic, due to their sociality: Comment on "Words as social tools: Language, sociality and inner grounding in abstract concepts" by Anna M. Borghi et al. Phys Life Rev 2019; 29:175-177. [PMID: 30857866] [DOI: 10.1016/j.plrev.2019.02.011]
Affiliation(s)
- J Ben Falandays, Cognitive and Information Sciences, University of California, Merced, United States of America
- Michael J Spivey, Cognitive and Information Sciences, University of California, Merced, United States of America
7.
8. Spivey MJ, Batzloff BJ. Bridgemanian space constancy as a precursor to extended cognition. Conscious Cogn 2018; 64:164-175. [PMID: 29709438] [DOI: 10.1016/j.concog.2018.04.003]
Abstract
A few decades ago, cognitive psychologists generally took for granted that the reason we perceive our visual environment as one contiguous stable whole (i.e., space constancy) is because we have an internal mental representation of the visual environment as one contiguous stable whole. They supposed that the non-contiguous visual images that are gathered during the brief fixations that intervene between pairs of saccadic eye movements (a few times every second) are somehow stitched together to construct this contiguous internal mental representation. Determining how exactly the brain does this proved to be a vexing puzzle for vision researchers. Bruce Bridgeman's research career is the story of how meticulous psychophysical experimentation, and a genius theoretical insight, eventually solved this puzzle. The reason that it was so difficult for researchers to figure out how the brain stitches together these visual snapshots into one accurately-rendered mental representation of the visual environment is that it doesn't do that. Bruce discovered that the brain couldn't do that if it tried. The neural information that codes for saccade amplitude and direction is simply too inaccurate to determine exact relative locations of each fixation. Rather than the perception of space constancy being the result of an internal representation, Bruce determined that it is the result of a brain that simply assumes that external space remains constant, and it rarely checks to verify this assumption. In our extension of Bridgeman's formulation, we suggest that objects in the world often serve as their own representations, and cognitive operations can be performed on those objects themselves, rather than on mental representations of them.
Affiliation(s)
- Michael J Spivey, Cognitive and Information Sciences, University of California, Merced, United States
- Brandon J Batzloff, Cognitive and Information Sciences, University of California, Merced, United States
9. Gordon CL, Spivey MJ, Balasubramaniam R. Corticospinal excitability during the processing of handwritten and typed words and non-words. Neurosci Lett 2017; 651:232-236. [PMID: 28504121] [DOI: 10.1016/j.neulet.2017.05.021]
Abstract
A number of studies have suggested that perception of actions is accompanied by motor simulation of those actions. To further explore this proposal, we applied transcranial magnetic stimulation (TMS) to the left primary motor cortex during the observation of handwritten and typed language stimuli, including words and non-word consonant clusters. We recorded motor-evoked potentials (MEPs) from the right first dorsal interosseous (FDI) muscle to measure corticospinal excitability during written text perception. We observed a facilitation in MEPs for handwritten stimuli, regardless of whether the stimuli were words or non-words, suggesting potential motor simulation during observation. We did not observe a similar facilitation for the typed stimuli, suggesting that motor simulation was not occurring during observation of typed text. By demonstrating potential simulation of written language text during observation, these findings add to a growing literature suggesting that the motor system plays a strong role in the perception of written language.
Affiliation(s)
- Chelsea L Gordon, Cognitive & Information Sciences, University of California, Merced, United States
- Michael J Spivey, Cognitive & Information Sciences, University of California, Merced, United States
10.
11.
Abstract
Real-time cognition is best described not as a sequence of logical operations performed on discrete symbols but as a continuously changing pattern of neuronal activity. The continuity in these dynamics indicates that, in between describable states of mind, mental activity does not lend itself to the linguistic labels relied on by much of psychology. We discuss eye-tracking and mouse-tracking evidence for this temporal continuity and provide geometric visualizations of mental activity, depicting it as a continuous trajectory through a state space (a multidimensional space in which locations correspond to mental states). When the state of the system travels toward a frequently visited region of that space, the destination may constitute recognition of a particular word or a particular object; but on the way there, the majority of the mental trajectory is in intermediate regions of that space, revealing graded mixtures of mental states.
12.
Abstract
Bilingualism provides a unique opportunity for exploring hypotheses about how the human brain encodes language. For example, the “input switch” theory states that bilinguals can deactivate one language module while using the other. A new measure of spoken language comprehension, headband-mounted eyetracking, allows a firm test of this theory. When given spoken instructions to pick up an object, in a monolingual session, late bilinguals looked briefly at a distractor object whose name in the irrelevant language was initially phonetically similar to the spoken word more often than they looked at a control distractor object. This result indicates some overlap between the two languages in bilinguals, and provides support for parallel, interactive accounts of spoken word recognition in general.
13. Livins KA, Doumas LAA, Spivey MJ. Shaping relations: Exploiting relational features for visuospatial priming. J Exp Psychol Learn Mem Cogn 2016; 42:127-139. [DOI: 10.1037/xlm0000149]
14. Livins KA, Spivey MJ, Doumas LAA. Varying variation: the effects of within- versus across-feature differences on relational category learning. Front Psychol 2015; 6:129. [PMID: 25709595] [PMCID: PMC4321646] [DOI: 10.3389/fpsyg.2015.00129]
Abstract
Learning of feature-based categories is known to interact with feature-variation in a variety of ways, depending on the type of variation (e.g., Markman and Maddox, 2003). However, relational categories are distinct from feature-based categories in that they determine membership based on structural similarities. As a result, the way that they interact with feature variation is unclear. This paper explores both experimental and computational data and argues that, despite its reliance on structural factors, relational category-learning should still be affected by the type of feature variation present during the learning process. It specifically suggests that within-feature and across-feature variation should produce different learning trajectories due to a difference in representational cost. The paper then uses the DORA model (Doumas et al., 2008) to discuss how this account might function in a cognitive system before presenting an experiment aimed at testing this account. The experiment was a relational category-learning task and was run on human participants and then simulated in DORA. Both sets of results indicated that learning a relational category from a training set with a lower amount of variation is easier, but that learning from a training set with increased within-feature variation is significantly less challenging than learning from a set with increased across-feature variation. These results support the claim that, like feature-based category-learning, relational category-learning is sensitive to the type of feature variation in the training set.
Affiliation(s)
- Katherine A Livins, Department of Cognitive Science, University of California, Merced, Merced, CA, USA
- Michael J Spivey, Department of Cognitive Science, University of California, Merced, Merced, CA, USA
- Leonidas A A Doumas, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, UK
15. Rigoli LM, Holman D, Spivey MJ, Kello CT. Spectral convergence in tapping and physiological fluctuations: coupling and independence of 1/f noise in the central and autonomic nervous systems. Front Hum Neurosci 2014; 8:713. [PMID: 25309389] [PMCID: PMC4160925] [DOI: 10.3389/fnhum.2014.00713]
Abstract
When humans perform a response task or timing task repeatedly, fluctuations in measures of timing from one action to the next exhibit long-range correlations known as 1/f noise. The origins of 1/f noise in timing have been debated for over 20 years, with one common explanation serving as a default: humans are composed of physiological processes throughout the brain and body that operate over a wide range of timescales, and these processes combine to be expressed as a general source of 1/f noise. To test this explanation, the present study investigated the coupling vs. independence of 1/f noise in timing deviations, key-press durations, pupil dilations, and heartbeat intervals while tapping to an audiovisual metronome. All four dependent measures exhibited clear 1/f noise, regardless of whether tapping was synchronized or syncopated. 1/f spectra for timing deviations were found to match those for key-press durations on an individual basis, and 1/f spectra for pupil dilations matched those in heartbeat intervals. Results indicate a complex, multiscale relationship among 1/f noises arising from common sources, such as those arising from timing functions vs. those arising from autonomic nervous system (ANS) functions. Results also provide further evidence against the default hypothesis that 1/f noise in human timing is just the additive combination of processes throughout the brain and body. Our findings are better accommodated by theories of complexity matching that begin to formalize multiscale coordination as a foundation of human behavior.
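The 1/f analyses described here come down to estimating the slope of log power versus log frequency for each fluctuation series: roughly 0 for white noise, near -1 for 1/f (pink) noise, steeper for integrated (brown) noise. A minimal sketch, assuming a plain periodogram fit rather than the paper's exact pipeline:

```python
import numpy as np

def spectral_slope(series):
    """Slope of log10(power) vs. log10(frequency) for a 1-D series.

    Roughly 0 for white noise and near -1 for pink (1/f) noise.
    Illustrative only; published analyses typically add detrending,
    windowing, and averaging across trials.
    """
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size)
    # Skip the zero-frequency bin before taking logs.
    slope, _intercept = np.polyfit(np.log10(freqs[1:]), np.log10(power[1:]), 1)
    return slope

rng = np.random.default_rng(0)
white = rng.standard_normal(8192)  # uncorrelated: slope near 0
brown = np.cumsum(white)           # integrated noise: steep negative slope
```

Applying the function to an inter-tap-interval series instead of synthetic noise gives the kind of per-measure exponent the study compares across timing, pupil, and heartbeat signals.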
Affiliation(s)
- Lillian M Rigoli, Cognitive and Information Sciences, University of California, Merced, CA, USA
- Daniel Holman, Cognitive and Information Sciences, University of California, Merced, CA, USA
- Michael J Spivey, Cognitive and Information Sciences, University of California, Merced, CA, USA
- Christopher T Kello, Cognitive and Information Sciences, University of California, Merced, CA, USA
16.
Abstract
Recent studies have shown that, instead of a dichotomy between parallel and serial search strategies, in many instances a combination of both strategies is utilized. Consequently, computational models and theoretical accounts of visual search processing have evolved from traditional serial-parallel descriptions to a continuum from 'efficient' to 'inefficient' search. One of the findings, consistent with this blurring of the serial-parallel distinction, is that concurrent spoken linguistic input influences the efficiency of visual search. In our first experiment we replicate those findings using a between-subjects design. Next, we utilize a localist attractor network to simulate the results from the first experiment, and then employ the network to make quantitative predictions about the influence of subtle timing differences of real-time language processing on visual search. These model predictions are then tested and confirmed in our second experiment. The results provide further evidence toward understanding linguistically mediated influences on real-time visual search processing and support an interactive processing account of visual search and language comprehension.
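A localist attractor network of this kind lets candidate items compete while evidence from several sources (e.g., visual features and unfolding speech) is integrated. The sketch below follows a generic normalized-recurrence recipe; the evidence vectors, feedback rule, and step count are invented for illustration and are not the model reported in the study.

```python
import numpy as np

def normalized_recurrence(evidence, steps=50):
    """Competition among localist nodes via normalized recurrence.

    `evidence` is a list of support vectors (one per information source)
    over the same candidate items. Each iteration normalizes every
    source, averages them into an integration vector, and feeds that
    vector back multiplicatively, so the best-supported candidate
    gradually takes over.
    """
    sources = [np.asarray(v, dtype=float) for v in evidence]
    integration = np.mean([s / s.sum() for s in sources], axis=0)
    for _ in range(steps):
        sources = [s + s * integration for s in sources]  # multiplicative feedback
        integration = np.mean([s / s.sum() for s in sources], axis=0)
    return integration

# Two sources both slightly favor item 0; competition amplifies the edge.
probs = normalized_recurrence([[0.4, 0.35, 0.25], [0.45, 0.3, 0.25]])
```

The number of iterations needed to settle provides a natural stand-in for response time, which is how attractor dynamics of this sort can be mapped onto search-efficiency data.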
17. Spivey MJ. The Emergence of Intentionality. Ecological Psychology 2013. [DOI: 10.1080/10407413.2013.810475]
18.
Abstract
Eyes move to gather visual information for the purpose of guiding behavior. This guidance takes the form of perceptual-motor interactions on short timescales for behaviors like locomotion and hand-eye coordination. More complex behaviors require perceptual-motor interactions on longer timescales mediated by memory, such as navigation, or designing and building artifacts. In the present study, the task of sketching images of natural scenes from memory was used to examine and compare perceptual-motor interactions on shorter and longer timescales. Eye and pen trajectories were found to be coordinated in time on shorter timescales during drawing, and also on longer timescales spanning study and drawing periods. The latter type of coordination was found by developing a purely spatial analysis that yielded measures of similarity between images, eye trajectories, and pen trajectories. These results challenge the notion that coordination only unfolds on short timescales. Rather, the task of drawing from memory evokes perceptual-motor encodings of visual images that preserve coarse-grained spatial information over relatively long timescales as well.
Affiliation(s)
- Stephanie Huette, Cognitive and Information Sciences, University of California, Merced, Merced, California, USA
19. Pezzulo G, Barsalou LW, Cangelosi A, Fischer MH, McRae K, Spivey MJ. Computational Grounded Cognition: a new alliance between grounded cognition and computational modeling. Front Psychol 2013; 3:612. [PMID: 23346065] [PMCID: PMC3551279] [DOI: 10.3389/fpsyg.2012.00612]
Abstract
Grounded theories assume that there is no central module for cognition. According to this view, all cognitive phenomena, including those considered the province of amodal cognition such as reasoning, numeric, and language processing, are ultimately grounded in (and emerge from) a variety of bodily, affective, perceptual, and motor processes. The development and expression of cognition is constrained by the embodiment of cognitive agents and various contextual factors (physical and social) in which they are immersed. The grounded framework has received numerous empirical confirmations. Still, there are very few explicit computational models that implement grounding in sensory, motor and affective processes as intrinsic to cognition, and demonstrate that grounded theories can mechanistically implement higher cognitive abilities. We propose a new alliance between grounded cognition and computational modeling toward a novel multidisciplinary enterprise: Computational Grounded Cognition. We clarify the defining features of this novel approach and emphasize the importance of using the methodology of Cognitive Robotics, which permits simultaneous consideration of multiple aspects of grounding, embodiment, and situatedness, showing how they constrain the development and expression of cognition.
Affiliation(s)
- Giovanni Pezzulo, Institute of Computational Linguistics “A. Zampolli,” National Research Council, Pisa, Italy; Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Angelo Cangelosi, Centre for Robotics and Neural Systems, University of Plymouth, Plymouth, UK
- Martin H. Fischer, Division of Cognitive Sciences, University of Potsdam, Potsdam, Germany
- Ken McRae, Department of Psychology, Social Science Centre, University of Western Ontario, London, ON, Canada
- Michael J. Spivey, Cognitive and Information Sciences, University of California, Merced, CA, USA
20. Farmer TA, Cargill SA, Hindy NC, Dale R, Spivey MJ. Tracking the continuity of language comprehension: computer mouse trajectories suggest parallel syntactic processing. Cogn Sci 2012; 31:889-909. [PMID: 21635321] [DOI: 10.1080/03640210701530797]
Abstract
Although several theories of online syntactic processing assume the parallel activation of multiple syntactic representations, evidence supporting simultaneous activation has been inconclusive. Here, the continuous and non-ballistic properties of computer mouse movements are exploited, by recording their streaming x, y coordinates to procure evidence regarding parallel versus serial processing. Participants heard structurally ambiguous sentences while viewing scenes with properties either supporting or not supporting the difficult modifier interpretation. The curvatures of the elicited trajectories revealed both an effect of visual context and graded competition between simultaneously active syntactic representations. The results are discussed in the context of 3 major groups of theories within the domain of sentence processing.
Affiliation(s)
- Thomas A Farmer, Department of Psychology, Cornell University; University of Memphis
21.
Abstract
Spatial formats of information are ubiquitous in the cognitive and neural sciences. There are neural uses of space in the topographic maps found throughout cortex. There are metaphorical uses of space in cognitive linguistics, physical uses of space in ecological psychology, and mathematical uses of space in dynamical systems theory. These varied informational uses of space each provide a single contiguous medium through which cognitive processes can be shared across subsystems. As we further develop our understanding of how the human mind processes information in real time, the continuous sharing and cascading of information patterns between brain areas can be extended to a sharing and cascading of information between multiple brains and bodies to produce coordinated behavior. Essentially, the way you and the people around you negotiate your shared space affects the way you think, because space is a fundamental part of how you think. It is via space that the mental processes of one mind can form an intersection with the mental processes of another mind.
Affiliation(s)
- Michael J Spivey, Cognitive and Information Sciences, University of California, Merced, Merced, CA 95343, USA
22. Anderson SE, Chiu E, Huette S, Spivey MJ. On the temporal dynamics of language-mediated vision and vision-mediated language. Acta Psychol (Amst) 2011; 137:181-189. [PMID: 20961519] [DOI: 10.1016/j.actpsy.2010.09.008]
Abstract
Recent converging evidence suggests that language and vision interact immediately in non-trivial ways, although the exact nature of this interaction is still unclear. Not only does linguistic information influence visual perception in real-time, but visual information also influences language comprehension in real-time. For example, in visual search tasks, incremental spoken delivery of the target features (e.g., "Is there a red vertical?") can increase the efficiency of conjunction search because only one feature is heard at a time. Moreover, in spoken word recognition tasks, the visual presence of an object whose name is similar to the word being spoken (e.g., a candle present when instructed to "pick up the candy") can alter the process of comprehension. Dense sampling methods, such as eye-tracking and reach-tracking, richly illustrate the nature of this interaction, providing a semi-continuous measure of the temporal dynamics of individual behavioral responses. We review a variety of studies that demonstrate how these methods are particularly promising in further elucidating the dynamic competition that takes place between underlying linguistic and visual representations in multimodal contexts, and we conclude with a discussion of the consequences that these findings have for theories of embodied cognition.
23.
Abstract
Prior research indicates that synchronized tapping performance is very poor with flashing visual stimuli compared with auditory stimuli. Three finger-tapping experiments compared flashing visual metronomes with visual metronomes containing a spatial component, either compatible, incompatible, or orthogonal to the tapping action. In Experiment 1, synchronization success rates increased dramatically for spatiotemporal sequences of both geometric and biological forms over flashing sequences. In Experiment 2, synchronization performance was best when target sequences and movements were directionally compatible (i.e., simultaneously down), followed by orthogonal stimuli, and was poorest for incompatible moving stimuli and flashing stimuli. In Experiment 3, synchronization performance was best with auditory sequences, followed by compatible moving stimuli, and was worst for flashing and fading stimuli. Results indicate that visuomotor synchronization improves dramatically with compatible spatial information. However, an auditory advantage in sensorimotor synchronization persists.
Affiliation(s)
- Michael J Hove
- Music Cognition and Action Research Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
24
Pezzulo G, Barsalou LW, Cangelosi A, Fischer MH, McRae K, Spivey MJ. The mechanics of embodiment: a dialog on embodiment and computational modeling. Front Psychol 2011; 2:5. [PMID: 21713184 PMCID: PMC3111422 DOI: 10.3389/fpsyg.2011.00005]
Abstract
Embodied theories are increasingly challenging traditional views of cognition by arguing that conceptual representations that constitute our knowledge are grounded in sensory and motor experiences, and processed at this sensorimotor level, rather than being represented and processed abstractly in an amodal conceptual system. Given the established empirical foundation, and the relatively underspecified theories to date, many researchers are extremely interested in embodied cognition but are clamoring for more mechanistic implementations. What is needed at this stage is a push toward explicit computational models that implement sensorimotor grounding as intrinsic to cognitive processes. In this article, six authors from varying backgrounds and approaches address issues concerning the construction of embodied computational models, and illustrate what they view as the critical current and next steps toward mechanistic theories of embodiment. The first part has the form of a dialog between two fictional characters: Ernest, the "experimenter," and Mary, the "computational modeler." The dialog consists of an interactive sequence of questions, requests for clarification, challenges, and (tentative) answers, and touches the most important aspects of grounded theories that should inform computational modeling and, conversely, the impact that computational modeling could have on embodied theories. The second part of the article discusses the most important open challenges for embodied computational modeling.
Affiliation(s)
- Giovanni Pezzulo
- Istituto di Scienze e Tecnologie della Cognizione, Consiglio Nazionale delle Ricerche Roma, Italy
25
Abstract
Why are people more irritated by nearby cell-phone conversations than by conversations between two people who are physically present? Overhearing someone on a cell phone means hearing only half of a conversation—a “halfalogue.” We show that merely overhearing a halfalogue results in decreased performance on cognitive tasks designed to reflect the attentional demands of daily activities. By contrast, overhearing both sides of a cell-phone conversation or a monologue does not result in decreased performance. This may be because the content of a halfalogue is less predictable than both sides of a conversation. In a second experiment, we controlled for differences in acoustic factors between these types of overheard speech, establishing that it is the unpredictable informational content of halfalogues that results in distraction. Thus, we provide a cognitive explanation for why overheard cell-phone conversations are especially irritating: Less-predictable speech results in more distraction for a listener engaged in other tasks.
Affiliation(s)
- Lauren L. Emberson
- Psychology Department, Cornell University
- Sackler Institute for Developmental Psychobiology, Weill-Cornell Medical College
- Gary Lupyan
- Psychology Department, University of Wisconsin–Madison
26
Spivey MJ, Dale R, Knoblich G, Grosjean M. Do curved reaching movements emerge from competing perceptions? A reply to van der Wel et al. (2009). J Exp Psychol Hum Percept Perform 2010; 36:251-4. [PMID: 20121308 DOI: 10.1037/a0017170]
Abstract
Spivey, Grosjean, and Knoblich (2005) reported smoothly curved reaching movements, via computer-mouse tracking, which suggested a continuously evolving flow of distributed lexical activation patterns into motor movement during a phonological competitor task. For example, when instructed to click the "candy," participants' mouse-cursor trajectories curved conspicuously toward a picture of a candle before landing on the picture of the candy. In their commentary on this work, van der Wel, Eder, Mitchel, Walsh, and Rosenbaum (2009) describe a quantitative simulation of reaching movements that stands as an existence proof that a discrete-processing speech perception system can feed into a continuous-processing motor movement system to produce reach trajectories similar to those observed by Spivey et al. In this reply, we describe eye-tracking evidence, new mouse-tracking evidence, and a dynamic version of van der Wel et al.'s simulation, all of which suggest that competing perceptual representations may instigate the preparation of multiple movement plans that are merged in a dynamically weighted average, thus producing a single smoothly curved movement. Like van der Wel et al., we are optimistic that an emphasis on the computational linking hypothesis between hypothesized perceptual representations and recorded motor movements will elucidate the discrete versus continuous aspects of perceptual, cognitive, and motor processing.
Affiliation(s)
- Michael J Spivey
- Cognitive and Information Sciences, University of California, Merced, CA 95343, USA.
27

28
Abstract
How do minds produce explicit attitudes over several hundred milliseconds? Speeded evaluative measures have revealed implicit biases beyond cognitive control and subjective awareness, yet mental processing may culminate in an explicit attitude that feels personally endorsed and corroborates voluntary intentions. We argue that self-reported explicit attitudes derive from a continuous, temporally dynamic process, whereby multiple simultaneously conflicting sources of information self-organize into a meaningful mental representation. While our participants reported their explicit (like vs. dislike) attitudes toward White versus Black people by moving a cursor to a "like" or "dislike" response box, we recorded streaming x- and y-coordinates from their hand-movement trajectories. We found that participants' hand-movement paths exhibited greater curvature toward the "dislike" response when they reported positive explicit attitudes toward Black people than when they reported positive explicit attitudes toward White people. Moreover, these trajectories were characterized by movement disorder and competitive velocity profiles that were predicted under the assumption that the deliberate attitudes emerged from continuous interactions between multiple simultaneously conflicting constraints.
29
McMurray B, Aslin RN, Tanenhaus MK, Spivey MJ, Subik D. Gradient sensitivity to within-category variation in words and syllables. J Exp Psychol Hum Percept Perform 2009; 34:1609-31. [PMID: 19045996 DOI: 10.1037/a0011747]
Abstract
Five experiments monitored eye movements in phoneme and lexical identification tasks to examine the effect of within-category subphonetic variation on the perception of stop consonants. Experiment 1 demonstrated gradient effects along voice-onset time (VOT) continua made from natural speech, replicating results with synthetic speech (B. McMurray, M. K. Tanenhaus, & R. N. Aslin, 2002). Experiments 2-5 used synthetic VOT continua to examine effects of response alternatives (2 vs. 4), task (lexical vs. phoneme decision), and type of token (word vs. consonant-vowel). A gradient effect of VOT in at least one half of the continuum was observed in all conditions. These results suggest that during online spoken word recognition, lexical competitors are activated in proportion to their continuous distance from a category boundary. This gradient processing may allow listeners to anticipate upcoming acoustic-phonetic information in the speech signal and dynamically compensate for acoustic variability.
Affiliation(s)
- Bob McMurray
- Department of Psychology, University of Iowa, Iowa City, IA 52242, USA.
30

31
Affiliation(s)
- Gary Lupyan
- Department of Psychology, Cornell University, Ithaca, New York 14853, USA.
- Michael J Spivey
- Department of Psychology, Cornell University, Ithaca, New York 14853, USA.
32

33
Abstract
Through recording the streaming x, y coordinates of computer-mouse movements, we report evidence that visual context provides an immediate constraint on the resolution of syntactic ambiguity in the visual-world paradigm. This finding converges with previous eye-tracking results that support a constraint-based account of sentence processing, in which multiple partially-active syntactic alternatives compete against one another with the help of not only syntactic, semantic, and statistical factors, but also nonlinguistic factors such as visual context. Eye-tracking results in the visual-world paradigm are consistent with theories that posit limited interaction between context and syntax, but they are still consistent with related theories that allow immediate interaction but require serial pursuit of syntactic structures, such as the unrestricted race model. To tease apart the constraint-based and unrestricted-race accounts of sentence processing, the distribution of computer-mouse trajectories was analyzed for evidence of two populations of trials: those where only the correct parse was pursued and those where only the incorrect parse was pursued. We found no evidence of bimodality in the distribution of trajectory curvatures. Simulations with a constraint-based model produced trajectories that matched the human data. A nonlinguistic control study demonstrated the mouse-tracking paradigm's ability to elicit bimodal distributions of trajectory curvatures in certain experimental conditions. With effects of context posing a challenge for syntax-first models, and the absence of bimodality in the distribution of garden-path magnitude posing a challenge for unrestricted-race models, these converging methods support the constraint-based theory's account that the reason diverse contextual factors are able to bias one or another parse at the point of ambiguity is that those syntactic alternatives are continually partially-active in parallel, not discretely winnowed.
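A bimodality analysis of trajectory curvatures can be illustrated with Sarle's bimodality coefficient, one common screen for two-population mixtures. The abstract does not specify the exact statistic used, so treat this as an assumed stand-in, computed here from raw population moments without small-sample corrections.

```python
def bimodality_coefficient(xs):
    """Sarle's bimodality coefficient, (skewness^2 + 1) / kurtosis, from
    raw population moments (no small-sample correction). Values above
    ~0.555 (the value for a uniform distribution) are often taken as
    suggestive of bimodality."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurtosis = m4 / m2 ** 2  # non-excess kurtosis
    return (skew ** 2 + 1) / kurtosis

# Curvatures clustered around a single value (one population of trials)...
unimodal = [1.0] * 16 + [0.5, 1.5, 0.7, 1.3]
# ...versus curvatures split into two clusters (two populations of trials).
bimodal = [0.1] * 10 + [1.9] * 10
```

On these made-up samples, only the two-cluster set crosses the 0.555 benchmark, which is the pattern the unrestricted-race account would have predicted for the human data and the constraint-based account would not.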
34
Abstract
The time course of categorization was investigated in four experiments, which revealed graded competitive effects in a categorization task. Participants clicked one of two categories (e.g., mammal or fish) in response to atypical or typical exemplars (e.g., whale or cat) in the form of words (Experiments 1 and 2) or pictures (Experiments 3 and 4). Streaming x, y coordinates of mouse movement trajectories were recorded. Normalized mean trajectories revealed a graded competitive process: Atypical exemplars produced trajectories with greater curvature toward the competing category than did typical exemplars. The experiments contribute to recent examination of the time course of categorization and carry implications for theories of representation in cognitive science.
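Trajectory curvature in mouse-tracking studies of this kind is often summarized as the maximum perpendicular deviation of the recorded (x, y) path from the straight line joining its start and end points. A minimal sketch, with made-up coordinates rather than data from these experiments:

```python
import math

def max_deviation(traj):
    """Signed maximum perpendicular deviation of an (x, y) trajectory from
    the straight line joining its first and last points. For an upward
    start-to-end movement, positive values indicate leftward curvature."""
    (x0, y0), (x1, y1) = traj[0], traj[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    devs = [(dx * (y - y0) - dy * (x - x0)) / length for x, y in traj]
    return max(devs, key=abs)

# Straight upward movement to the chosen category: no curvature.
direct = [(0, 0), (0, 5), (0, 10)]
# Movement that bows toward a competing category pictured on the left.
curved = [(0, 0), (-1, 2), (-2, 5), (-1, 8), (0, 10)]
```

A larger mean deviation for atypical than for typical exemplars is the graded-competition signature described above.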
Affiliation(s)
- Rick Dale
- Cornell University, Ithaca, New York, USA.
35

36
Reali F, Spivey MJ, Tyler MJ, Terranova J. Inefficient conjunction search made efficient by concurrent spoken delivery of target identity. Percept Psychophys 2006; 68:959-74. [PMID: 17153191 DOI: 10.3758/bf03193358]
Abstract
Visual search based on a conjunction of two features typically elicits reaction times that increase linearly as a function of the number of distractors, whereas search based on a single feature is essentially unaffected by set size. These and related findings have often been interpreted as evidence of a serial search stage that follows a parallel search stage. However, a wide range of studies has suggested a blending of these two processes. For example, when a spoken instruction identifies the conjunction target concurrently with the visual display, the effect of set size is significantly reduced, suggesting that incremental linguistic processing of the first feature adjective and then the second feature adjective may facilitate something approximating a parallel extraction of objects during search for the target. Here, we extend these results to a variety of experimental designs. First, we replicate the result with a mixed-trials design (ruling out potential strategies associated with the blocked design of the original study). Second, in a mixed-trials experiment, the order of adjective types in the spoken query varies randomly across conditions. In a third experiment, we extend the effect to a triple-conjunction search task. A fourth (control) experiment demonstrates that these effects are not due to an efficient odd-one-out search that ignores the linguistic input. This series of experiments, along with attractor-network simulations of the phenomena, provides further evidence toward understanding linguistically mediated influences in real-time visual search processing.
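The set-size effect at issue is conventionally summarized as the slope of mean reaction time over display size. A sketch with invented RT values and condition names (illustrative only, not the paper's data), fitting the slope by ordinary least squares:

```python
def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs (ms per item)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

set_sizes = [5, 10, 15, 20]
# Hypothetical mean RTs (ms): a steep set-size slope when the target is
# named before the display appears, versus a shallow slope when the
# spoken instruction is delivered concurrently with the display.
rt_target_first = [620, 740, 860, 980]  # 24 ms/item
rt_concurrent = [615, 655, 695, 735]    # 8 ms/item
```

A reliably shallower slope in the concurrent-delivery condition is the "made efficient" result the title refers to.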
Affiliation(s)
- Florencia Reali
- Department of Psychology, Cornell University, Ithaca, NY 14853, USA.
37
Dale R, Spivey MJ. From apples and oranges to symbolic dynamics: a framework for conciliating notions of cognitive representation. J Exp Theor Artif Intell 2005. [DOI: 10.1080/09528130500283766]
38
Abstract
Certain models of spoken-language processing, like those for many other perceptual and cognitive processes, posit continuous uptake of sensory input and dynamic competition between simultaneously active representations. Here, we provide compelling evidence for this continuity assumption by using a continuous response, hand movements, to track the temporal dynamics of lexical activations during real-time spoken-word recognition in a visual context. By recording the streaming x, y coordinates of continuous goal-directed hand movement in a spoken-language task, online accrual of acoustic-phonetic input and competition between partially active lexical representations are revealed in the shape of the movement trajectories. This hand-movement paradigm allows one to project the internal processing of spoken-word recognition onto a two-dimensional layout of continuous motor output, providing a concrete visualization of the attractor dynamics involved in language processing.
Affiliation(s)
- Michael J Spivey
- Department of Psychology, Cornell University, Ithaca, NY 14853, USA.
39

40
Abstract
Overt visual attention during diagram-based problem solving, as measured by eye movements, has been used in numerous studies to reveal critical aspects of the problem-solving process that traditional measures like solution time and accuracy cannot address. In Experiment 1, we used this methodology to show that particular fixation patterns correlate with success in solving the tumor-and-lasers radiation problem. Given this correlation between attention to a particular diagram feature and problem-solving insight, we investigated participants' cognitive sensitivity to perceptual changes in that diagram feature. In Experiment 2, we found that perceptually highlighting the critical diagram component, identified in Experiment 1, significantly increased the frequency of correct solutions. Taking a situated perspective on cognition, we suggest that environmentally controlled perceptual properties can guide attention and eye movements in ways that assist in developing problem-solving insights that dramatically improve reasoning.
41
McMurray B, Tanenhaus MK, Aslin RN, Spivey MJ. Probabilistic constraint satisfaction at the lexical/phonetic interface: evidence for gradient effects of within-category VOT on lexical access. J Psycholinguist Res 2003; 32:77-97. [PMID: 12647564 DOI: 10.1023/a:1021937116271]
Abstract
Research in speech perception has been dominated by a search for invariant properties of the signal that correlate with lexical and sublexical categories. We argue that this search for invariance has led researchers to ignore the perceptual consequences of systematic variation within such categories and that sensitivity to this variation may provide an important source of information for integrating information over time in speech perception. Data from a study manipulating VOT continua in words using an eye-movement paradigm indicate that lexical access shows graded sensitivity to within-category variation in VOT and that this sensitivity has a duration sufficient to be useful for information integration. These data support a model in which the perceptual system integrates information from multiple sources and from the surrounding temporal context using probabilistic cue-weighting mechanisms.
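The graded cue-weighting idea can be illustrated by passing VOT through a logistic function, so that lexical activation varies continuously with distance from the category boundary rather than stepping at it. The boundary and gain parameters below are assumptions chosen for illustration, not estimates from the study.

```python
import math

def voiceless_prob(vot_ms, boundary=20.0, gain=0.3):
    """Logistic mapping from voice-onset time (ms) to the probability that
    a token is perceived as voiceless (e.g., /p/ rather than /b/).
    `boundary` and `gain` are illustrative values, not fitted estimates."""
    return 1.0 / (1.0 + math.exp(-gain * (vot_ms - boundary)))

# Even within the voiceless category (VOT above the boundary), activation
# of the voiced competitor varies gradiently with distance from the
# boundary, rather than collapsing to zero as a step-function model
# would predict.
competitor_activation = {vot: 1.0 - voiceless_prob(vot) for vot in (25, 30, 40)}
```

It is this residual, distance-dependent competitor activation that the eye-movement data are taken to reveal.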
Affiliation(s)
- Bob McMurray
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA.
42
Spivey MJ, Tanenhaus MK, Eberhard KM, Sedivy JC. Eye movements and spoken language comprehension: effects of visual context on syntactic ambiguity resolution. Cogn Psychol 2002; 45:447-81. [PMID: 12480476 DOI: 10.1016/s0010-0285(02)00503-0]
Abstract
When participants follow spoken instructions to pick up and move objects in a visual workspace, their eye movements to the objects are closely time-locked to referential expressions in the instructions. Two experiments used this methodology to investigate the processing of the temporary ambiguities that arise because spoken language unfolds over time. Experiment 1 examined the processing of sentences with a temporarily ambiguous prepositional phrase (e.g., "Put the apple on the towel in the box") using visual contexts that supported either the normally preferred initial interpretation (the apple should be put on the towel) or the less-preferred interpretation (the apple is already on the towel and should be put in the box). Eye movement patterns clearly established that the initial interpretation of the ambiguous phrase was the one consistent with the context. Experiment 2 replicated these results using prerecorded digitized speech to eliminate any possibility of prosodic differences across conditions or experimenter demand. Overall, the findings are consistent with a broad theoretical framework in which real-time language comprehension immediately takes into account a rich array of relevant nonlinguistic context.
Affiliation(s)
- Michael J Spivey
- Department of Psychology, Cornell University, 238 Uris Hall, Ithaca, NY 14853, USA.
43
Abstract
It is hypothesized that eye movements are used to coordinate elements of a mental model with elements of the visual field. In two experiments, eye movements were recorded while observers imagined or recalled objects that were not present in the visual display. In both cases, observers spontaneously looked at particular blank regions of space in a systematic fashion, to manipulate and organize spatial relationships between mental and/or retinal images. These results contribute to evidence that interpreting a linguistic description of a visual scene requires a spatial (mental model) representation, and they support claims regarding the allocation of position markers in visual space for the manipulation of visual attention. More broadly, our results point to a concrete embodiment of cognition, in that the construction of a mental image is almost "acted out" by the eye movements, and a mental search of internal memory is accompanied by an oculomotor search of external space.
Affiliation(s)
- M J Spivey
- Department of Psychology, Cornell University, Ithaca, NY 14853, USA.
44
Abstract
During an individual's normal interaction with the environment and other humans, visual and linguistic signals often coincide and can be integrated very quickly. This has been clearly demonstrated in recent eye tracking studies showing that visual perception constrains on-line comprehension of spoken language. In a modified visual search task, we found the inverse, that real-time language comprehension can also constrain visual perception. In standard visual search tasks, the number of distractors in the display strongly affects search time for a target defined by a conjunction of features, but not for a target defined by a single feature. However, we found that when a conjunction target was identified by a spoken instruction presented concurrently with the visual display, the incremental processing of spoken language allowed the search process to proceed in a manner considerably less affected by the number of distractors. These results suggest that perceptual systems specialized for language and for vision interact more fluidly than previously thought.
Affiliation(s)
- M J Spivey
- Department of Psychology, Cornell University, Ithaca, NY 14853, USA.
45
Abstract
One's being able to allocate attention to particular regions or properties of the visual field is fundamental to visual information processing. Visual attention determines what input is carefully analyzed and what input is more or less ignored. But at what stage of the visual system is this process evident? We describe three experiments that demonstrate an effect of voluntary spatial attention and voluntary object-based attention on an orientation illusion (the tilt aftereffect) that is believed to take place in primary visual cortex. This finding, in which selective visual attention influences adaptation to visual orientation information, contributes to mounting evidence for a view of visual perception in which mutual interaction takes place between high-level and low-level subsystems.
Affiliation(s)
- M J Spivey
- Department of Psychology, Cornell University, Ithaca, NY 14853, USA.
46
Abstract
It has been argued that the human cognitive system is capable of using spatial indexes or oculomotor coordinates to relieve working memory load (Ballard, D. H., Hayhoe, M. M., Pook, P. K., & Rao, R. P. N. (1997). Behavioral and Brain Sciences, 20(4), 723), track multiple moving items through occlusion (Scholl, B. J., & Pylyshyn, Z. W. (1999). Cognitive Psychology, 38, 259) or link incompatible cognitive and sensorimotor codes (Bridgeman, B., & Huemer, V. (1998). Consciousness and Cognition, 7, 454). Here we examine the use of such spatial information in memory for semantic information. Previous research has often focused on the role of task demands and the level of automaticity in the encoding of spatial location in memory tasks. We present five experiments where location is irrelevant to the task, and participants' encoding of spatial information is measured implicitly by their looking behavior during recall. In a paradigm developed from Spivey and Geng (Spivey, M. J., & Geng, J. (2000). submitted for publication), participants were presented with pieces of auditory, semantic information as part of an event occurring in one of four regions of a computer screen. In front of a blank grid, they were asked a question relating to one of those facts. Under certain conditions it was found that during the question period participants made significantly more saccades to the empty region of space where the semantic information had been previously presented. Our findings are discussed in relation to previous research on memory and spatial location, the dorsal and ventral streams of the visual system, and the notion of a cognitive-perceptual system using spatial indexes to exploit the stability of the external world.
Affiliation(s)
- D C Richardson
- Department of Psychology, Cornell University, Ithaca, NY 14853, USA.
48
Spivey MJ, Tanenhaus MK. Syntactic ambiguity resolution in discourse: modeling the effects of referential context and lexical frequency. J Exp Psychol Learn Mem Cogn 1998; 24:1521-43. [PMID: 9835064 DOI: 10.1037/0278-7393.24.6.1521]
Abstract
Sentences with temporarily ambiguous reduced relative clauses (e.g., The actress selected by the director believed that...) were preceded by discourse contexts biasing a main clause or a relative clause. Eye movements in the disambiguating region (by the director) revealed that, in the relative clause biasing contexts, ambiguous reduced relatives were no more difficult to process than unambiguous reduced relatives or full (unreduced) relatives. Regression analyses demonstrated that the effects of discourse context at the point of ambiguity (e.g., selected) interacted with the past participle frequency of the ambiguous verb. Reading times were modeled using a constraint-based competition framework in which multiple constraints are immediately integrated during parsing and interpretation. Simulations suggested that this framework reconciles the superficially conflicting results in the literature on referential context effects on syntactic ambiguity resolution.
Affiliation(s)
- M J Spivey
- Department of Psychology, Cornell University, Ithaca, New York 14853-9365, USA.