51. Dampuré J, Ros C, Rouet JF, Vibert N. Task-dependent sensitisation of perceptual and semantic processing during visual search for words. Journal of Cognitive Psychology 2014. [DOI: 10.1080/20445911.2014.907576]

52. Vales C, Smith LB. Words, shape, visual search and visual working memory in 3-year-old children. Dev Sci 2014; 18:65-79. [PMID: 24720802] [DOI: 10.1111/desc.12179]
Abstract
Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: by influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories, and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information.
Affiliation(s)
- Catarina Vales
- Department of Psychological and Brain Sciences, Indiana University, USA

53. Cavicchio F, Melcher D, Poesio M. The effect of linguistic and visual salience in visual world studies. Front Psychol 2014; 5:176. [PMID: 24624108] [PMCID: PMC3941304] [DOI: 10.3389/fpsyg.2014.00176]
Abstract
Research using the visual world paradigm has demonstrated that visual input has a rapid effect on language interpretation tasks such as reference resolution and, conversely, that linguistic material (including verbs, prepositions, and adjectives) can influence fixations to potential referents. More recent research has started to explore how this effect of linguistic input on fixations is mediated by properties of the visual stimulus, in particular by visual salience. In the present study we further explored the role of salience in the visual world paradigm by manipulating language-driven salience and visual salience. Specifically, we tested how linguistic salience (i.e., the greater accessibility of linguistically introduced entities) and visual salience (bottom-up, attention-grabbing visual aspects) interact. We recorded participants' eye movements during a Map Task, asking them to look from landmark to landmark displayed upon a map while hearing direction-giving instructions. The landmarks were of comparable size and color, except in the Visual Salience condition, in which one landmark had been made more visually salient. In the Linguistic Salience conditions, the instructions included references to an object not on the map. Response times and fixations were recorded. Visual Salience influenced the time course of fixations at both the beginning and the end of the trial but did not show a significant effect on response times. Linguistic Salience reduced response times and increased fixations to landmarks when they were associated with a linguistically salient entity that was not itself present on the map. When the target landmark was both visually and linguistically salient, it was fixated longer, but fixations were quicker when the target item was linguistically salient only. Our results suggest that the two types of salience work in parallel and that linguistic salience affects fixations even when the entity is not visually present.
Affiliation(s)
- Federica Cavicchio
- Center for Mind/Brain Sciences, Università di Trento, Rovereto, Italy; School of Psychology, University of Birmingham, Birmingham, UK
- David Melcher
- Center for Mind/Brain Sciences, Università di Trento, Rovereto, Italy
- Massimo Poesio
- Center for Mind/Brain Sciences, Università di Trento, Rovereto, Italy; School of Computer Science and Electronic Engineering, University of Essex, Essex, UK

54. Jahn G, Braatz J. Memory indexing of sequential symptom processing in diagnostic reasoning. Cogn Psychol 2014; 68:59-97. [DOI: 10.1016/j.cogpsych.2013.11.002]

55. Tammik V, Toomela A. Relationships between visual figure discrimination, verbal abilities, and gender. Perception 2014; 42:971-84. [PMID: 24386716] [DOI: 10.1068/p7607]
Abstract
This study investigated the relationships between verbal thinking and performance on visual figure discrimination tasks from a Vygotskian perspective in a large, varied adult sample (N = 428). A test designed to assess the structure of word meanings (i.e., the tendency to think in 'everyday' or 'scientific' concepts, as distinguished by Vygotsky) was presented together with two contour picture tasks. The visual tasks were a modified version of Poppelreuter's overlapping figures and a picture depicting a meaningful scene. On both tasks, concrete objects and abstract meaningless shapes had to be identified. In addition to relationships between visual task performance and word meaning structure, the effects of the meaningful scene and relations with gender were examined. The results confirmed the expected relation between word meaning structure and visual performance. Furthermore, they suggested a specific effect of the meaningful whole and a male advantage, especially on the first task, in which women seemed to benefit less from advanced word meaning structure.
Affiliation(s)
- Valdar Tammik
- Institute of Psychology, Tallinn University, Narva Road 29, 10120 Tallinn, Estonia
- Aaro Toomela
- Institute of Psychology, Tallinn University, Narva Road 29, 10120 Tallinn, Estonia

56. Gauvin HS, Hartsuiker RJ, Huettig F. Speech monitoring and phonologically-mediated eye gaze in language perception and production: a comparison using printed word eye-tracking. Front Hum Neurosci 2013; 7:818. [PMID: 24339809] [PMCID: PMC3857580] [DOI: 10.3389/fnhum.2013.00818]
Abstract
The Perceptual Loop Theory of speech monitoring assumes that speakers routinely inspect their inner speech. In contrast, Huettig and Hartsuiker (2010) observed that listening to one's own speech during language production drives eye movements to phonologically related printed words with a similar time course as listening to someone else's speech does in speech perception experiments. This suggests that speakers use their speech perception system to listen to their own overt speech, but not to their inner speech. However, a direct comparison between production and perception with the same stimuli and participants has so far been lacking. The current printed word eye-tracking experiment therefore used a within-subjects design, combining production and perception. Displays showed four words, of which one, the target, either had to be named or was presented auditorily. Accompanying words were phonologically related, semantically related, or unrelated to the target. There were small increases in looks to phonological competitors with a similar time course in both production and perception. Phonological effects in perception, however, lasted longer and were much larger in magnitude. We conjecture that this difference is related to a difference in predictability of one's own and someone else's speech, which in turn has consequences for lexical competition in other-perception and possibly suppression of activation in self-perception.
Affiliation(s)
- Hanna S Gauvin
- Department of Experimental Psychology, Ghent University, Ghent, Belgium

57. Norbury CF. Sources of variation in developmental language disorders: evidence from eye-tracking studies of sentence production. Philos Trans R Soc Lond B Biol Sci 2013; 369:20120393. [PMID: 24324237] [PMCID: PMC3866423] [DOI: 10.1098/rstb.2012.0393]
Abstract
Skilled sentence production involves distinct stages of message conceptualization (deciding what to talk about) and message formulation (deciding how to talk about it). Eye-movement paradigms provide a mechanism for observing how speakers accomplish these aspects of production in real time. These methods have recently been applied to children with autism spectrum disorder (ASD) and specific language impairment (LI) in an effort to reveal qualitative differences between groups in sentence production processes. Findings support a multiple-deficit account in which language production is influenced not only by lexical and syntactic constraints, but also by variation in attention control, inhibition and social competence. Thus, children with ASD are especially vulnerable to atypical patterns of visual inspection and verbal utterance. The potential to influence attentional focus and prime appropriate language structures is considered as a mechanism for facilitating language adaptation and learning.

58. Interference of spoken word recognition through phonological priming from visual objects and printed words. Atten Percept Psychophys 2013; 76:190-200. [DOI: 10.3758/s13414-013-0560-8]

59.

60. Smith AC, Monaghan P, Huettig F. An amodal shared resource model of language-mediated visual attention. Front Psychol 2013; 4:528. [PMID: 23966967] [PMCID: PMC3744873] [DOI: 10.3389/fpsyg.2013.00528]
Abstract
Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. In this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behavior and a potential explanation for how this ability is acquired. We demonstrate not only that the model is sufficient to account for the experimental effects of Visual World Paradigm studies, but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze.
Affiliation(s)
- Alastair C Smith
- Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; International Max Planck Research School for Language Sciences, Nijmegen, Netherlands

61. Activation of phonological competitors in visual search. Acta Psychol (Amst) 2013; 143:168-75. [PMID: 23584102] [DOI: 10.1016/j.actpsy.2013.03.006]
Abstract
Recently, Meyer, Belke, Telling, and Humphreys (2007) reported that competitor objects with homophonous names (e.g., boy) interfere with identifying a target object (e.g., buoy) in a visual search task, suggesting that an object name's phonology becomes automatically activated even when participants have no intention to speak. The present study explored the generality of this finding by testing a different phonological relation (rhyming object names, e.g., cat-hat) and by varying details of the experimental procedure. Experiment 1 followed the procedure of Meyer et al.: participants were familiarized with target and competitor objects and their names at the beginning of the experiment, and the picture of the target object was presented prior to the search display on each trial. In Experiment 2, the picture of the target object presented prior to the search display was replaced by its name. In Experiment 3, participants were not familiarized with target and competitor objects and their names at the beginning of the experiment. A small interference effect from phonologically related competitors was obtained in Experiments 1 and 2 but not in Experiment 3, suggesting that the way the relevant objects are introduced to participants affects the chances of observing an effect from phonologically related competitors. Implications for the information flow in the conceptual-lexical system are discussed.

62. Arai M, Keller F. The use of verb-specific information for prediction in sentence processing. Language and Cognitive Processes 2013. [DOI: 10.1080/01690965.2012.658072]

63. Spoken language and the decision to move the eyes: to what extent are language-mediated eye movements automatic? Prog Brain Res 2013; 202:135-49. [DOI: 10.1016/b978-0-444-62604-2.00008-3]

64. Huettig F, Mishra RK, Olivers CNL. Mechanisms and representations of language-mediated visual attention. Front Psychol 2012; 2:394. [PMID: 22291672] [PMCID: PMC3253411] [DOI: 10.3389/fpsyg.2011.00394]
Abstract
The experimental investigation of language-mediated visual attention is a promising way to study the interaction of the cognitive systems involved in language, vision, attention, and memory. Here we highlight four challenges for a mechanistic account of this oculomotor behavior: the levels of representation at which language-derived and vision-derived representations are integrated; attentional mechanisms; types of memory; and the degree of individual and group differences. Central points in our discussion are (a) the possibility that local microcircuitries involving feedforward and feedback loops instantiate a common representational substrate of linguistic and non-linguistic information and attention; and (b) that an explicit working memory may be central to explaining interactions between language and visual attention. We conclude that a synthesis of further experimental evidence from a variety of fields of inquiry and the testing of distinct, non-student participant populations will prove to be critical.
Affiliation(s)
- Falk Huettig
- Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
- Ramesh Kumar Mishra
- Centre of Behavioral and Cognitive Sciences, University of Allahabad, Allahabad, India

65. Knoeferle P, Carminati MN, Abashidze D, Essig K. Preferential inspection of recent real-world events over future events: evidence from eye tracking during spoken sentence comprehension. Front Psychol 2011; 2:376. [PMID: 22207858] [PMCID: PMC3245670] [DOI: 10.3389/fpsyg.2011.00376]
Abstract
Eye-tracking findings suggest that people prefer to ground their spoken language comprehension by focusing on recently seen events rather than anticipating future events: when the verb in NP1-VERB-ADV-NP2 sentences was referentially ambiguous between a recently depicted and an equally plausible future clipart action, listeners fixated the target of the recent action more often at the verb than the object that had not yet been acted upon. We examined whether this inspection preference generalizes to real-world events, and whether it is (vs. is not) modulated by how often people see recent and future events acted out. In a first eye-tracking study, the experimenter performed an action (e.g., sugaring pancakes), and then a spoken sentence either referred to that action or to an equally plausible future action (e.g., sugaring strawberries). At the verb, people more often inspected the pancakes (the recent target) than the strawberries (the future target), thus replicating the recent-event preference with these real-world actions. Adverb tense, indicating a future versus past event, had no effect on participants' visual attention. In a second study we increased the frequency of future actions such that participants saw future and recent actions equally often. During the verb people mostly inspected the recent action target, but subsequently they began to rely on tense, and anticipated the future target more often for future than for past tense adverbs. A corpus study showed that the verbs and adverbs indicating past versus future actions were equally frequent, suggesting that long-term frequency biases did not cause the recent-event preference. Thus, (a) recent real-world actions can rapidly influence comprehension (as indexed by eye gaze to objects), and (b) people prefer to first inspect a recent action target (vs. an object that will soon be acted upon), even when past and future actions occur with equal frequency. A simple frequency-of-experience account cannot accommodate these findings.
Affiliation(s)
- Pia Knoeferle
- Cognitive Interaction Technology Excellence Cluster, Bielefeld University, Bielefeld, Germany

66. Huettig F, Singh N, Mishra RK. Language-mediated visual orienting behavior in low and high literates. Front Psychol 2011; 2:285. [PMID: 22059083] [PMCID: PMC3203553] [DOI: 10.3389/fpsyg.2011.00285]
Abstract
The influence of formal literacy on spoken language-mediated visual orienting was investigated using a simple look-and-listen task that resembles everyday behavior. In Experiment 1, high and low literates listened to spoken sentences containing a target word (e.g., "magar," crocodile) while at the same time looking at a visual display of four objects (a phonological competitor of the target word, e.g., "matar," peas; a semantic competitor, e.g., "kachuwa," turtle; and two unrelated distractors). In Experiment 2 the semantic competitor was replaced with another unrelated distractor. Both groups of participants shifted their eye gaze to the semantic competitors (Experiment 1). In both experiments high literates shifted their eye gaze toward phonological competitors as soon as phonological information became available and moved their eyes away as soon as the acoustic information mismatched. Low literates, in contrast, only used phonological information when semantic matches between spoken word and visual referent were not present (Experiment 2), but unlike high literates these phonologically mediated shifts in eye gaze were not closely time-locked to the speech input. These data provide further evidence that in high literates language-mediated shifts in overt attention are co-determined by the type of information in the visual environment, the timing of cascaded processing in the word- and object-recognition systems, and the temporal unfolding of the spoken language. Our findings indicate that low literates exhibit a similar cognitive behavior, but instead of participating in a tug-of-war among multiple types of cognitive representations, word-object mapping is achieved primarily at the semantic level. If forced, for instance by a situation in which semantic matches are not present (Experiment 2), low literates may on occasion have to rely on phonological information, but they do so in a much less proficient manner than their highly literate counterparts.
Affiliation(s)
- Falk Huettig
- Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
- Niharika Singh
- Centre of Behavioral and Cognitive Sciences, University of Allahabad, Allahabad, India
- Ramesh Kumar Mishra
- Centre of Behavioral and Cognitive Sciences, University of Allahabad, Allahabad, India

67. Apel JK, Revie GF, Cangelosi A, Ellis R, Goslin J, Fischer MH. Attention deployment during memorizing and executing complex instructions. Exp Brain Res 2011; 214:249-59. [PMID: 21842191] [DOI: 10.1007/s00221-011-2827-4]
Abstract
We investigated the mental rehearsal of complex action instructions by recording spontaneous eye movements of healthy adults as they looked at objects on a monitor. Participants heard consecutive instructions, each of the form "move [object] to [location]". Instructions were only to be executed after a go signal, by manipulating all objects successively with a mouse. Participants re-inspected previously mentioned objects while still listening to further instructions. This rehearsal behavior broke down after four instructions, coincident with participants' instruction span as determined from subsequent execution accuracy. These results suggest that spontaneous eye movements made while listening to instructions predict their successful execution.
Affiliation(s)
- Jens K Apel
- School of Psychology, University of Dundee, Dundee DD1 4HN, Scotland, UK

68. Johnson EK, McQueen JM, Huettig F. Toddlers' language-mediated visual search: they need not have the words for it. Q J Exp Psychol (Hove) 2011; 64:1672-82. [PMID: 21812709] [DOI: 10.1080/17470218.2011.594165]
Abstract
Eye movements made by listeners during language-mediated visual search reveal a strong link between visual processing and conceptual processing. For example, upon hearing the word for a missing referent with a characteristic colour (e.g., "strawberry"), listeners tend to fixate a colour-matched distractor (e.g., a red plane) more than a colour-mismatched distractor (e.g., a yellow plane). We ask whether these shifts in visual attention are mediated by the retrieval of lexically stored colour labels. Do children who do not yet possess verbal labels for the colour attribute that spoken and viewed objects have in common exhibit language-mediated eye movements like those made by older children and adults? That is, do toddlers look at a red plane when hearing "strawberry"? We observed that 24-month-olds lacking colour term knowledge nonetheless recognized the perceptual-conceptual commonality between named and seen objects. This indicates that language-mediated visual search need not depend on stored labels for concepts.

69. Hartsuiker RJ, Huettig F, Olivers CNL. Visual search and visual world: interactions among visual attention, language, and working memory (introduction to the special issue). Acta Psychol (Amst) 2011; 137:135-7. [PMID: 21296308] [DOI: 10.1016/j.actpsy.2011.01.005]

70. The nature of the visual environment induces implicit biases during language-mediated visual search. Mem Cognit 2011; 39:1068-84. [PMID: 21461784] [DOI: 10.3758/s13421-011-0086-z]