1. Damiano C, Leemans M, Wagemans J. Exploring the Semantic-Inconsistency Effect in Scenes Using a Continuous Measure of Linguistic-Semantic Similarity. Psychol Sci 2024;35:623-634. PMID: 38652604. DOI: 10.1177/09567976241238217.
Abstract
Viewers use contextual information to visually explore complex scenes. Object recognition is facilitated by exploiting object-scene relations (which objects are expected in a given scene) and object-object relations (which objects are expected because of the occurrence of other objects). Semantically inconsistent objects deviate from these expectations, so they tend to capture viewers' attention (the semantic-inconsistency effect). Some objects fit the identity of a scene more or less than others, yet semantic inconsistencies have hitherto been operationalized as binary (consistent vs. inconsistent). In an eye-tracking experiment (N = 21 adults), we study the semantic-inconsistency effect in a continuous manner by using the linguistic-semantic similarity of an object to the scene category and to other objects in the scene. We found that both highly consistent and highly inconsistent objects are viewed more than other objects (U-shaped relationship), revealing that the (in)consistency effect is more than a simple binary classification.
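The study's continuous predictor is the linguistic-semantic similarity between an object's label and the scene category (or other object labels). As a minimal sketch only, with made-up toy vectors rather than the embeddings the authors actually used, such a similarity score is typically the cosine between word-embedding vectors:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors (1 = identical direction)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 4-d embeddings, invented for illustration; the study derived
# similarities from real linguistic-semantic representations.
embeddings = {
    "kitchen":   np.array([0.9, 0.1, 0.0, 0.2]),
    "toaster":   np.array([0.8, 0.2, 0.1, 0.3]),  # semantically consistent object
    "hairdryer": np.array([0.1, 0.9, 0.4, 0.0]),  # semantically inconsistent object
}

for obj in ("toaster", "hairdryer"):
    score = cosine_similarity(embeddings[obj], embeddings["kitchen"])
    print(f"{obj}-kitchen similarity: {score:.2f}")
```

A graded score like this is what allows the U-shaped viewing pattern to be detected at all; a binary consistent/inconsistent coding would collapse the middle of the scale.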
Affiliation(s)
- Claudia Damiano
- Department of Psychology, University of Toronto
- Laboratory of Experimental Psychology, Department of Brain and Cognition, KU Leuven
- Maarten Leemans
- Laboratory of Experimental Psychology, Department of Brain and Cognition, KU Leuven
- Johan Wagemans
- Laboratory of Experimental Psychology, Department of Brain and Cognition, KU Leuven
2. Mulder K, Brand S, Boves L, Ernestus M. Processing reduced speech in the L1 and L2: a combined eye-tracking and ERP study. Lang Cogn Neurosci 2024;39:527-551. PMID: 38812796. PMCID: PMC11132548. DOI: 10.1080/23273798.2024.2344162.
Abstract
We examined the cognitive processes underlying the comprehension of reduced word pronunciation variants in native speakers and advanced learners of French. In a passive-listening visual world task, participants heard sentences containing either a reduced or a full form and saw pictures representing the target word, a phonological competitor and two neutral distractors. After each sentence they saw a picture and had to decide whether it matched the content of that sentence. Eye movements and EEG were recorded simultaneously. Because the two recordings offer complementary information about cognitive processes, we developed methods for analysing the signals in combination. We found a stronger effect of reduction on phonetic processing and semantic integration in learners than in native speakers, but the effects differ from the N100/N400 and P600 effects found in previous research. Time-locking EEG signals to fixation moments in the eye movements offers a window onto the time course of semantic integration.
Affiliation(s)
- Kimberley Mulder
- Amsterdam Center for Language and Communication, University of Amsterdam, Amsterdam, The Netherlands
- Sophie Brand
- Radboud Teachers Academy, Nijmegen, The Netherlands
- Lou Boves
- Center for Language Studies, Radboud University Nijmegen, Nijmegen, The Netherlands
- Mirjam Ernestus
- Center for Language Studies, Radboud University Nijmegen, Nijmegen, The Netherlands
3. Hayakawa S, Marian V. Sound-meaning associations allow listeners to infer the meaning of foreign language words. Commun Psychol 2023;1:30. PMID: 38152075. PMCID: PMC10751683. DOI: 10.1038/s44271-023-00030-z.
Abstract
An attribute of human language is the seemingly arbitrary association between a word's form and meaning. We provide evidence that the meaning of foreign words can be partially deduced from phonological form. Monolingual English speakers listened to 45 antonym word pairs in nine foreign languages and judged which English words corresponded to these words' respective meanings. Despite no proficiency in the foreign language tested, participants' accuracy was higher than chance in each language. Words that shared meaning across languages were more likely to share phonological form. Accuracy in judging meaning from form was associated with participants' verbal working memory and with how consistently phonological and semantic features of words covaried across unrelated languages. A follow-up study with native Spanish speakers replicated the results. We conclude that sound maps to meaning in natural languages with some regularity, and sensitivity to form-meaning mappings indexes broader cognitive functions.
4. Duta M, Plunkett K. A network model of referent identification by toddlers in a visual world task. Child Dev 2023;94:1511-1530. PMID: 37794728. DOI: 10.1111/cdev.14010.
Abstract
We present a neural network model of referent identification in a visual world task. Inputs are visual representations of item pairs unfolding with sequences of phonemes identifying the target item. The model is trained to output the semantic representation of the target and to suppress the distractor. The training set uses a 200-word lexicon typically known by toddlers. The phonological, visual, and semantic representations are derived from real corpora. Successful performance requires correct association between labels and visual and semantic representations, as well as correct location identification. The model reproduces experimental evidence that phonological, perceptual, and categorical relationships modulate item preferences. The model provides an account of how language can drive visual attention in the inter-modal preferential looking task.
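The abstract specifies the model's input-output mapping (display + unfolding label in, target semantics out) but not its internals. Purely as a hedged sketch, with toy dimensions and random weights rather than the authors' architecture or their corpus-derived representations, a network of this general shape can be written as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes, invented for illustration; the published model used
# corpus-derived visual, phonological, and semantic representations
# for a 200-word toddler lexicon.
VIS, PHON, SEM, HID = 8, 6, 5, 16

W1 = rng.normal(0.0, 0.1, (HID, 2 * VIS + PHON))  # (display + phonemes) -> hidden
W2 = rng.normal(0.0, 0.1, (SEM, HID))             # hidden -> semantic output

def forward(vis_left, vis_right, phonemes):
    """Map a two-item display plus a (partial) label to a semantic vector."""
    x = np.concatenate([vis_left, vis_right, phonemes])
    h = np.tanh(W1 @ x)
    return W2 @ h, h, x

def train_step(vis_left, vis_right, phonemes, target_sem, lr=0.1):
    """One backprop step pushing the output toward the target's semantics
    (distractor suppression would add a second, negative target term)."""
    global W1, W2
    out, h, x = forward(vis_left, vis_right, phonemes)
    err = out - target_sem
    W2 -= lr * np.outer(err, h)
    W1 -= lr * np.outer((W2.T @ err) * (1.0 - h ** 2), x)
    return float((err ** 2).mean())

# One synthetic trial: loss falls as the label-referent mapping is learned.
vl, vr = rng.random(VIS), rng.random(VIS)
ph, sem = rng.random(PHON), rng.random(SEM)
losses = [train_step(vl, vr, ph, sem) for _ in range(300)]
print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```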
Affiliation(s)
- Mihaela Duta
- Doctoral Training Centre, University of Oxford, Oxford, UK
- Kim Plunkett
- Department of Experimental Psychology, University of Oxford, Oxford, UK
5. Vibert N, Darles D, Ros C, Braasch JLG, Rouet JF. Looking for a word or for its meaning? The impact of induction tasks on adolescents' visual search for verbal material. Mem Cognit 2023;51:1562-1579. PMID: 37079250. DOI: 10.3758/s13421-023-01420-8.
Abstract
An eye-tracking experiment was conducted to examine whether the pre-activation of different word-processing pathways by means of semantic versus perceptual induction tasks could modify the way adults and 11- to 15-year-old adolescents searched for single target words within displays of nine words. The presence within the search displays of words either looking like the target word or semantically related to the target word was manipulated. The quality of participants' lexical representations was evaluated through three word-identification and vocabulary tests. Performing a semantic induction task rather than a perceptual one on the target word before searching for it increased search times by 15% in all age groups, reflecting an increase in both the number and duration of gazes directed to non-target words. Moreover, performing the semantic induction task increased the impact of distractor words that were semantically related to the target word on search efficiency. Participants' search efficiency increased with age because of a progressive increase in the quality of adolescents' lexical representations, which allowed participants to more quickly reject the distractors on which they fixated. Indeed, lexical quality scores explained 43% of the variance in search times independently of participants' age. In the simple visual search task used in this study, fostering semantic word processing through the semantic induction task slowed down visual search. However, the literature suggests that semantic induction tasks could, in contrast, help people find information more easily in more complex verbal environments where the meaning of words must be accessed to find task-relevant information.
Affiliation(s)
- Nicolas Vibert
- Centre de Recherches sur la Cognition et l'Apprentissage, CNRS UMR 7295, Université de Poitiers, Université de Tours; Maison des Sciences de l'Homme et de la Société, Bâtiment A5, 5 rue Théodore Lefebvre, TSA 21103, 86073 Poitiers cedex 9, France
- Daniel Darles
- Centre de Recherches sur la Cognition et l'Apprentissage, CNRS UMR 7295, Université de Poitiers, Université de Tours; Maison des Sciences de l'Homme et de la Société, Bâtiment A5, 5 rue Théodore Lefebvre, TSA 21103, 86073 Poitiers cedex 9, France
- Christine Ros
- Centre de Recherches sur la Cognition et l'Apprentissage, CNRS UMR 7295, Université de Poitiers, Université de Tours; Maison des Sciences de l'Homme et de la Société, Bâtiment A5, 5 rue Théodore Lefebvre, TSA 21103, 86073 Poitiers cedex 9, France
- Jason L G Braasch
- College of Education and Human Development, Georgia State University, Atlanta, GA 30302, USA
- Jean-François Rouet
- Centre de Recherches sur la Cognition et l'Apprentissage, CNRS UMR 7295, Université de Poitiers, Université de Tours; Maison des Sciences de l'Homme et de la Société, Bâtiment A5, 5 rue Théodore Lefebvre, TSA 21103, 86073 Poitiers cedex 9, France
|
6. Hintz F, Voeten CC, Scharenborg O. Recognizing non-native spoken words in background noise increases interference from the native language. Psychon Bull Rev 2023;30:1549-1563. PMID: 36544064. PMCID: PMC10482792. DOI: 10.3758/s13423-022-02233-7.
Abstract
Listeners frequently recognize spoken words in the presence of background noise. Previous research has shown that noise reduces phoneme intelligibility and hampers spoken-word recognition, especially for non-native listeners. In the present study, we investigated how noise influences lexical competition in both the non-native and the native language, reflecting the degree to which both languages are co-activated. We recorded the eye movements of native Dutch participants as they listened to English sentences containing a target word while looking at displays containing four objects. On target-present trials, the visual referent depicting the target word was present, along with three unrelated distractors. On target-absent trials, the target object (e.g., wizard) was absent. Instead, the display contained an English competitor overlapping with the English target in phonological onset (e.g., window), a Dutch competitor overlapping with the English target in phonological onset (e.g., wimpel, pennant), and two unrelated distractors. Half of the sentences were masked by speech-shaped noise; the other half were presented in quiet. Compared to speech in quiet, noise delayed fixations to the target objects on target-present trials. On target-absent trials, the likelihood of fixation biases towards the English and Dutch onset competitors (over the unrelated distractors) was larger in noise than in quiet. Our data thus show that the presence of background noise increases lexical competition in the task-relevant non-native (English) and in the task-irrelevant native (Dutch) language. The latter reflects stronger interference from one's native language during non-native spoken-word recognition under adverse conditions.
Affiliation(s)
- Florian Hintz
- Max Planck Institute for Psycholinguistics, P.O. Box 310, 6500 AH Nijmegen, The Netherlands
- Odette Scharenborg
- Multimedia Computing Group, Delft University of Technology, Delft, The Netherlands
7. Sui C, Wen H, Han J, Chen T, Gao Y, Wang Y, Yang L, Guo L. Decreased gray matter volume in the right middle temporal gyrus associated with cognitive dysfunction in preeclampsia superimposed on chronic hypertension. Front Neurosci 2023;17:1138952. PMID: 37250424. PMCID: PMC10217781. DOI: 10.3389/fnins.2023.1138952.
Abstract
Introduction: The effects of preeclampsia superimposed on chronic hypertension (CHTN-PE) on the structure and function of the human brain are mostly unknown. The purpose of this study was to examine altered gray matter volume (GMV) and its correlation with cognitive function in pregnant healthy women, healthy non-pregnant individuals, and CHTN-PE patients. Methods: Twenty-five CHTN-PE patients, thirty-five pregnant healthy controls (PHC) and thirty-five non-pregnant healthy controls (NPHC) were included in this study and underwent cognitive assessment testing. A voxel-based morphometry (VBM) approach was applied to investigate variations in brain GMV among the three groups. Pearson's correlations between mean GMV and the Stroop color-word test (SCWT) scores were calculated. Results: Compared with the NPHC group, the PHC and CHTN-PE groups showed significantly decreased GMV in a cluster of the right middle temporal gyrus (MTG), and the GMV decrease was more significant in the CHTN-PE group. There were significant differences in the Montreal Cognitive Assessment (MoCA) and Stroop word scores among the three groups. Notably, the mean GMV values in the right MTG cluster were not only significantly negatively correlated with Stroop word and Stroop color scores but also significantly distinguished CHTN-PE patients from the NPHC and PHC groups in receiver operating characteristic curve analysis. Discussion: Pregnancy may cause a decrease in local GMV in the right MTG, and the GMV decrease is more significant in CHTN-PE patients. The right MTG affects multiple cognitive functions, and combined with the SCWT scores, it may explain the decline in speech motor function and cognitive flexibility in CHTN-PE patients.
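The brain-behavior analysis reported here is a Pearson correlation between cluster-mean GMV and Stroop scores. A minimal sketch of that statistic, using hypothetical numbers rather than the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Made-up values: mean GMV in the right-MTG cluster against Stroop word
# scores; the abstract reports a significant negative correlation.
gmv = [0.52, 0.48, 0.45, 0.41, 0.38]
stroop_word = [78, 84, 88, 95, 99]

print(f"r = {pearson_r(gmv, stroop_word):.2f}")
```

The ROC step in the abstract then asks how well the same cluster-mean values separate the patient group from controls, i.e. GMV is reused as a classifier score.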
Affiliation(s)
- Chaofan Sui
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, China
- Hongwei Wen
- Key Laboratory of Cognition and Personality, Ministry of Education, Faculty of Psychology, Southwest University, Chongqing, China
- Jingchao Han
- Department of Medical Imaging, Jinan Stomatological Hospital, Jinan, Shandong, China
- Tao Chen
- Department of Clinical Laboratory, Jinan Maternity and Child Care Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, China
- Yian Gao
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, China
- Yuanyuan Wang
- Department of Radiology, Binzhou Medical University, Yantai, Shandong, China
- Linfeng Yang
- Department of Radiology, Jinan Maternity and Child Care Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, China
- Lingfei Guo
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, China
8. Knabe ML, Vlach HA. Not all is forgotten: Children's associative matrices for features of a word learning episode. Dev Sci 2023;26:e13291. PMID: 35622834. DOI: 10.1111/desc.13291.
Abstract
Word learning studies traditionally examine the narrow link between words and objects, indifferent to the rich contextual information surrounding objects. This research examined whether children attend to this contextual information and construct an associative matrix of the words, objects, people, and environmental context during word learning. In Experiment 1, preschool-aged children (age: 3;2-5;11 years) were presented with novel words and objects in an animated storybook. Results revealed that children constructed associations beyond words and objects. Specifically, children attended to and had the strongest associations for features of the environmental context but failed to learn word-object associations. Experiment 2 demonstrated that children (age: 3;0-5;8 years) leveraged strong associations for the person and environmental context to support word-object mapping. This work demonstrates that children are especially sensitive to the word learning context and use associative matrices to support word mapping. Indeed, this research suggests associative matrices of the environment may be foundational for children's vocabulary development.
Affiliation(s)
- Melina L Knabe
- Department of Educational Psychology, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Haley A Vlach
- Department of Educational Psychology, University of Wisconsin-Madison, Madison, Wisconsin, USA
9. Garrido Rodriguez G, Norcliffe E, Brown P, Huettig F, Levinson SC. Anticipatory Processing in a Verb-Initial Mayan Language: Eye-Tracking Evidence During Sentence Comprehension in Tseltal. Cogn Sci 2023;47:e13292. PMID: 36652288. DOI: 10.1111/cogs.13219.
Abstract
We present a visual world eye-tracking study on Tseltal (a Mayan language) and investigate whether verbal information can be used to anticipate an upcoming referent. Basic word order in transitive sentences in Tseltal is Verb-Object-Subject (VOS). The verb is usually encountered first, making argument structure and syntactic information available at the outset, which should facilitate anticipation of the post-verbal arguments. Tseltal speakers listened to verb-initial sentences with either an object-predictive verb (e.g., "eat") or a general verb (e.g., "look for") (e.g., "Ya slo'/sle ta stukel on te kereme," Is eating/is looking (for) by himself the avocado the boy/ "The boy is eating/is looking (for) an avocado by himself") while seeing a visual display showing one potential referent (e.g., avocado) and three distractors (e.g., bag, toy car, coffee grinder). We manipulated verb type (predictive vs. general) and recorded participants' eye movements while they listened and inspected the visual scene. Participants' fixations to the target referent were analyzed using multilevel logistic regression models. Shortly after hearing the predictive verb, participants fixated the target object before it was mentioned. In contrast, when the verb was general, fixations to the target only started to increase once the object was heard. Our results suggest that Tseltal hearers pre-activate semantic features of the grammatical object prior to its linguistic expression. This provides evidence from a verb-initial language for online incremental semantic interpretation and anticipatory processing during language comprehension. These processes are comparable to the ones identified in subject-initial languages, which is consistent with the notion that different languages follow similar universal processing principles.
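Fixation data of this kind are binary per time bin (target fixated or not), which is why the authors fit multilevel logistic regression models. As a simplified stand-in, with synthetic counts invented here and no participant or item random effects, a plain logistic regression recovers a verb-type effect as an odds ratio:

```python
import numpy as np

# Synthetic trial-level data (illustrative only, not the study's): each row
# is [intercept, verb_is_predictive]; outcome = fixated target before the
# noun (1) or not (0). Predictive verbs: 30/40 trials; general verbs: 12/40.
X = np.array([[1, 1]] * 40 + [[1, 0]] * 40, dtype=float)
y = np.array([1] * 30 + [0] * 10 + [1] * 12 + [0] * 28, dtype=float)

def fit_logistic(X, y, lr=0.1, steps=5000):
    """Logistic regression by batch gradient descent (the study's models
    additionally included multilevel/random-effect terms)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted fixation probability
        w += lr * X.T @ (y - p) / len(y)   # ascend the log-likelihood
    return w

w = fit_logistic(X, y)
odds_ratio = np.exp(w[1])  # effect of a predictive verb on fixation odds
print(f"odds ratio (predictive vs. general verb): {odds_ratio:.1f}")
```

For these invented counts the maximum-likelihood odds ratio is (30x28)/(10x12) = 7, so the fit should land near that value.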
Affiliation(s)
- Gabriela Garrido Rodriguez
- Language and Cognition Department, Max Planck Institute for Psycholinguistics; Language Development Department, Max Planck Institute for Psycholinguistics; School of Languages and Linguistics, The University of Melbourne; ARC Centre of Excellence for the Dynamics of Language, The University of Melbourne
- Penelope Brown
- Language Development Department, Max Planck Institute for Psycholinguistics
- Falk Huettig
- Psychology of Language Department, Max Planck Institute for Psycholinguistics; Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen; Centre for Language Studies, Radboud University Nijmegen
- Stephen C Levinson
- Language and Cognition Department, Max Planck Institute for Psycholinguistics; Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen
10. Tichenor SE, Wray AH, Ravizza SM, Yaruss JS. Individual differences in attentional control predict working memory capacity in adults who stutter. J Commun Disord 2022;100:106273. PMID: 36274445. DOI: 10.1016/j.jcomdis.2022.106273.
Abstract
Purpose: Prior research has suggested that people who stutter exhibit differences in some working memory tasks, particularly when more phonologically complex stimuli are used. This study aimed to further specify working memory differences in adults who stutter by not only accounting for linguistic demands of the stimuli but also individual differences in attentional control and experimental influences, such as concomitant processing requirements. Method: This study included 40 adults who stutter and 42 adults who do not stutter who completed the Attention Network Test (ANT; Fan et al., 2002) and three complex span working memory tasks: the Operation Span (OSPAN), Rotation Span, and Symmetry Span (Draheim et al., 2018; Foster et al., 2015; Unsworth et al., 2005, 2009). All complex span tasks were dual-tasks and varied in linguistic content in task stimuli. Results: Working memory capacities demonstrated by adults who stutter paralleled the hierarchy of linguistic content across the three complex span tasks, with statistically significant between-group differences in working memory capacity apparent in the task with the highest linguistic demand (i.e., OSPAN). Individual differences in attentional control in adults who stutter also significantly predicted working memory capacity on the OSPAN. Discussion: Findings from this study extend existing working memory research in stuttering by showing that: (1) significant working memory differences are present between adults who stutter and adults who do not stutter even using relatively simple linguistic stimuli in dual-task working memory conditions; (2) adults who stutter with stronger executive control of attention demonstrate working memory capacity more comparable to adults who do not stutter on the OSPAN compared to adults who stutter with lower executive control of attention.
11. Cimminella F, D'Innocenzo G, Sala SD, Iavarone A, Musella C, Coco MI. Preserved Extra-Foveal Processing of Object Semantics in Alzheimer's Disease. J Geriatr Psychiatry Neurol 2022;35:418-433. PMID: 34044661. DOI: 10.1177/08919887211016056.
Abstract
Alzheimer's disease (AD) patients underperform on a range of tasks requiring semantic processing, but it is unclear whether this impairment is due to a generalised loss of semantic knowledge or to issues in accessing and selecting such information from memory. The objective of this eye-tracking visual search study was to determine whether semantic expectancy mechanisms known to support object recognition in healthy adults are preserved in AD patients. Furthermore, as AD patients are often reported to be impaired in accessing information in extra-foveal vision, we investigated whether that was also the case in our study. Twenty AD patients and 20 age-matched controls searched for a target object among an array of distractors presented extra-foveally. The distractors were either semantically related or unrelated to the target (e.g., a car in an array with other vehicles or kitchen items). Results showed that semantically related objects were detected with more difficulty than semantically unrelated objects by both groups, but more markedly by the AD group. Participants looked earlier and for longer at the critical objects when these were semantically unrelated to the distractors. Our findings show that AD patients can process the semantics of objects and access it in extra-foveal vision. This suggests that their impairments in semantic processing may reflect difficulties in accessing semantic information rather than a generalised loss of semantic memory.
Affiliation(s)
- Francesco Cimminella
- Human Cognitive Neuroscience, Psychology, University of Edinburgh, Edinburgh, United Kingdom; Laboratory of Experimental Psychology, Suor Orsola Benincasa University, Naples, Italy
- Sergio Della Sala
- Human Cognitive Neuroscience, Psychology, University of Edinburgh, Edinburgh, United Kingdom
- Caterina Musella
- Associazione Italiana Malattia d'Alzheimer (AIMA sezione Campania), Naples, Italy
- Moreno I Coco
- Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal; School of Psychology, The University of East London, London, United Kingdom
12. Perceptual dissimilarity, cognitive and linguistic skills predict novel word retention, but not extension skills in Down syndrome. Cogn Dev 2022. DOI: 10.1016/j.cogdev.2022.101166.
13. Language is activated by visual input regardless of memory demands or capacity. Cognition 2022;222:104994. PMID: 35016119. PMCID: PMC8898262. DOI: 10.1016/j.cognition.2021.104994.
Abstract
In the present study, we provide compelling evidence that viewing objects automatically activates linguistic labels and that this activation is not due to task-specific memory demands. In two experiments, eye-movements of English speakers were tracked while they identified a visual target among an array of four images, including a phonological competitor (e.g., flower-flag). Experiment 1 manipulated the capacity to subvocally rehearse the target label by imposing linguistic, spatial, or no working memory load. Experiment 2 manipulated the need to encode target objects by presenting target images either before or concurrently with the search display. While the timing and magnitude of competitor activation varied across conditions, we observed consistent evidence of language activation regardless of the capacity or need to maintain object labels in memory. We propose that language activation is automatic and not contingent upon working memory capacity or demands, and conclude that objects' labels influence visual search.
14. Wang Y, Zang X, Zhang H, Shen W. The Processing of the Second Syllable in Recognizing Chinese Disyllabic Spoken Words: Evidence From Eye Tracking. Front Psychol 2021;12:681337. PMID: 34777085. PMCID: PMC8580174. DOI: 10.3389/fpsyg.2021.681337.
Abstract
In the current study, two experiments were conducted to investigate the processing of the second syllable (considered the rhyme at the word level) during Chinese disyllabic spoken word recognition using a printed-word paradigm. In Experiment 1, participants heard a spoken target word and were simultaneously presented with a visual display of four printed words: a target word, a phonological competitor, and two unrelated distractors. The phonological competitors were manipulated to share either full phonemic overlap of the second syllable with targets (the syllabic overlap condition; e.g., 小篆, xiao3zhuan4, "calligraphy" vs. 公转, gong1zhuan4, "revolution") or initial phonemic overlap of the second syllable with targets (the sub-syllabic overlap condition; e.g., 圆柱, yuan2zhu4, "cylinder" vs. 公转, gong1zhuan4, "revolution"). Participants were asked to select the target words and their eye movements were simultaneously recorded. The results did not show any phonological competition effect in either the syllabic overlap condition or the sub-syllabic overlap condition. In Experiment 2, to maximize the likelihood of observing the phonological competition effect, a target-absent version of the printed-word paradigm was adopted, in which target words were removed from the visual display. The results of Experiment 2 showed significant phonological competition effects in both conditions, i.e., more fixations were made to the phonological competitors than to the distractors. Moreover, the phonological competition effect was larger in the syllabic overlap condition than in the sub-syllabic overlap condition. These findings shed light on the effect of second-syllable competition at the word level during spoken word recognition and, more importantly, show that the initial phonemes of the second syllable at the syllabic level are also accessed during Chinese disyllabic spoken word recognition.
Affiliation(s)
- Youxi Wang
- Center for Cognition and Brain Disorders, The Affiliated Hospital of Hangzhou Normal University, Hangzhou, China; Institute of Psychological Science, Hangzhou Normal University, Hangzhou, China; Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, China
- Xuelian Zang
- Center for Cognition and Brain Disorders, The Affiliated Hospital of Hangzhou Normal University, Hangzhou, China; Institute of Psychological Science, Hangzhou Normal University, Hangzhou, China; Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, China
- Hua Zhang
- Institute of Psychological Science, Hangzhou Normal University, Hangzhou, China
- Wei Shen
- Center for Cognition and Brain Disorders, The Affiliated Hospital of Hangzhou Normal University, Hangzhou, China; Institute of Psychological Science, Hangzhou Normal University, Hangzhou, China; Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, China
15. Marian V, Hayakawa S, Schroeder SR. Memory after visual search: Overlapping phonology, shared meaning, and bilingual experience influence what we remember. Brain Lang 2021;222:105012. PMID: 34464828. PMCID: PMC8554070. DOI: 10.1016/j.bandl.2021.105012.
Abstract
How we remember the things that we see can be shaped by our prior experiences. Here, we examine how linguistic and sensory experiences interact to influence visual memory. Objects in a visual search that shared phonology (cat-cast) or semantics (dog-fox) with a target were later remembered better than unrelated items. Phonological overlap had a greater influence on memory when targets were cued by spoken words, while semantic overlap had a greater effect when targets were cued by characteristic sounds. The influence of overlap on memory varied as a function of individual differences in language experience: greater bilingual experience was associated with a decreased impact of overlap on memory. We conclude that phonological and semantic features of objects influence memory differently depending on individual differences in language experience, guiding not only what we initially look at, but also what we later remember.
Affiliation(s)
- Viorica Marian
- Department of Communication Sciences and Disorders, Northwestern University, 2240 North Campus Drive, Evanston, IL 60208, United States
| | - Sayuri Hayakawa
- Department of Communication Sciences and Disorders, Northwestern University, 2240 North Campus Drive, Evanston, IL 60208, United States.
| | - Scott R Schroeder
- Department of Speech, Language, Hearing Sciences, Hofstra University, 110, Hempstead, NY 11549, United States
| |
|
16
|
I see what you mean: Semantic but not lexical factors modulate image processing in bilingual adults. Mem Cognit 2021; 50:245-260. [PMID: 34462894 DOI: 10.3758/s13421-021-01229-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/29/2021] [Indexed: 11/08/2022]
Abstract
Bilinguals frequently juggle competing representations from their two languages when they interact with their environment (i.e., nonselective activation). As a result, both first (L1) and second language (L2) communication may be impeded when words share orthographic form but not meaning (i.e., interlingual homographs; e.g., CRANE, a machine in English, a skull in French). Similarly, bilinguals' reduced exposure to each known language makes bilingual lexical processing more vulnerable to larger frequency effects. While much is known about processes within the language system, less is known about how the bilingual language system interacts with the visual system, specifically in the context of image processing. We investigated this by testing whether commonly observed semantic (homograph interference) and lexical (frequency) effects extend to a visual word-image matching task. We tested 48 bilinguals, who were asked to determine whether an image corresponded to a written word that was presented immediately beforehand. By modulating the complexity of visual referents and the semantic (Analysis 1) or lexical (Analysis 2) complexity of word cues, we simultaneously burdened the visual and language systems. The results showed that both semantic and lexical factors modulated response accuracy and correct reaction time on the word-image matching task. Crucially, we observed an interaction of the image factor (visual complexity) with the semantic factor (homograph status), but not with the lexical factor (word frequency). We conclude that it is possible for the language and image processing systems to interact, although the extent to which this occurs depends on the degree of linguistic processing involved.
|
17
|
Information stored in memory affects abductive reasoning. PSYCHOLOGICAL RESEARCH 2021; 85:3119-3133. [PMID: 33428007 PMCID: PMC8476388 DOI: 10.1007/s00426-020-01460-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2020] [Accepted: 12/07/2020] [Indexed: 11/02/2022]
Abstract
Abductive reasoning describes the process of deriving an explanation from given observations. The theory of abductive reasoning (TAR; Johnson and Krems, Cognitive Science 25:903-939, 2001) assumes that when information is presented sequentially, new information is integrated into a mental representation, a situation model, the central data structure on which all reasoning processes are based. Because working memory capacity is limited, the question arises of how reasoning might change with the amount of information that has to be processed in memory. Thus, we conducted an experiment (N = 34) in which we manipulated whether previous observation information and previously found explanations had to be retrieved from memory or were still visually present. Our results provide evidence that people experience differences in task difficulty when more information has to be retrieved from memory. This is also evident in changes in the mental representation as reflected by eye-tracking measures. However, no differences were found between groups in the reasoning outcome. These findings suggest that individuals construct their situation model from both information held in memory and information in external memory stores. The complexity of the model depends on the task: when memory demands are high, only relevant information is included. With this compensation strategy, people are able to achieve similar reasoning outcomes even when faced with more difficult tasks. This implies that people are able to adapt their strategy to the task in order to keep their reasoning successful.
|
18
|
Saux G, Vibert N, Dampuré J, Burin DI, Britt MA, Rouet JF. From simple agents to information sources: Readers' differential processing of story characters as a function of story consistency. Acta Psychol (Amst) 2021; 212:103191. [PMID: 33147538 DOI: 10.1016/j.actpsy.2020.103191] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2020] [Revised: 09/29/2020] [Accepted: 10/03/2020] [Indexed: 12/12/2022] Open
Abstract
The study examined how readers integrate information from and about multiple information sources into a memory representation. In two experiments, college students read brief news reports containing two critical statements, each attributed to a source character. In half of the texts the statements were consistent with each other; in the other half they were discrepant. Each story also featured a non-source character (who made no statement). The hypothesis was that discrepant statements, as compared to consistent statements, would promote distinct attention and memory only for the source characters. Experiment 1 used short interviews to assess participants' ability to recognize the source of one of the statements after reading. Experiment 2 used eye-tracking to collect data during reading and during a source-content recognition task after reading. As predicted, discrepancies enhanced memory of, and attention to, only the source-related segments of the texts. Discrepancies also enhanced the link in memory between the two source characters, as opposed to the non-source character, as indicated by the participants' justifications (Experiment 1) and their visual inspection of the recognition items (Experiment 2). The results are interpreted within current theories of text comprehension and document literacy.
Affiliation(s)
- Gaston Saux
- Pontifical Catholic University of Argentina - National Scientific and Technical Research Council (CONICET), Av. Alicia Moreau de Justo 1500, Edif. San José, 2do piso (1107), Buenos Aires, Argentina.
| | - Nicolas Vibert
- Centre de Recherches sur la Cognition et l'Apprentissage, CNRS, Université de Poitiers, Université de Tours, MSHS - Bâtiment A5, 5, rue Théodore Lefebvre, TSA 21103, 86073 Poitiers cedex 9, France
| | - Julien Dampuré
- University of La Laguna - University of La Sabana, Facultad de Psicología, Campus del Puente Común, Km. 7 Autopista Norte de Bogotá, Chía, Cundinamarca, Colombia
| | - Debora I Burin
- University of Buenos Aires - National Scientific and Technical Research Council (CONICET), Lavalle 2353 (1052), Buenos Aires, Argentina
| | - M Anne Britt
- Northern Illinois University, office 363, 100 Normal Rd. DeKalb, IL 60115, USA
| | - Jean-François Rouet
- Centre de Recherches sur la Cognition et l'Apprentissage, CNRS, Université de Poitiers, Université de Tours, MSHS - Bâtiment A5, 5, rue Théodore Lefebvre, TSA 21103, 86073 Poitiers cedex 9, France
| |
|
19
|
Towards Understanding the Task Dependency of Embodied Language Processing: The Influence of Colour During Language-Vision Interactions. J Cogn 2020; 3:41. [PMID: 33134815 PMCID: PMC7583718 DOI: 10.5334/joc.135] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022] Open
Abstract
A main challenge for theories of embodied cognition is to understand the task dependency of embodied language processing. One possibility is that perceptual representations (e.g., typical colour of objects mentioned in spoken sentences) are not activated routinely but the influence of perceptual representation emerges only when context strongly supports their involvement in language. To explore this question, we tested the effects of colour representations during language processing in three visual-world eye-tracking experiments. On critical trials, participants listened to sentence-embedded words associated with a prototypical colour (e.g., ‘…spinach…’) while they inspected a visual display with four printed words (Experiment 1), coloured or greyscale line drawings (Experiment 2) and a ‘blank screen’ after a preview of coloured or greyscale line drawings (Experiment 3). Visual context always presented a word/object (e.g., frog) associated with the same prototypical colour (e.g. green) as the spoken target word and three distractors. When hearing spinach participants did not prefer the written word frog compared to other distractor words (Experiment 1). In Experiment 2, colour competitors attracted more overt attention compared to average distractors, but only for the coloured condition and not for greyscale trials. Finally, when the display was removed at the onset of the sentence, and in contrast to the previous blank-screen experiments with semantic competitors, there was no evidence of colour competition in the eye-tracking record (Experiment 3). These results fit best with the notion that the main role of perceptual representations in language processing is to contextualize language in the immediate environment.
|
20
|
Hayakawa S, Shook A, Marian V. When It's Harder to Ignorar than to Ignore: Evidence of Greater Attentional Capture from a Non-Dominant Language. THE INTERNATIONAL JOURNAL OF BILINGUALISM : CROSS-DISCIPLINARY, CROSS-LINGUISTIC STUDIES OF LANGUAGE BEHAVIOR 2020; 24:999-1016. [PMID: 33737858 PMCID: PMC7963402 DOI: 10.1177/1367006920915277] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
AIMS AND OBJECTIVES Imagine you're driving and you become so distracted by the radio that you miss your turn. Which is more likely to have caught your attention, a broadcast in your native tongue or one in your second language? The present study explores the effect of language proficiency on our ability to inhibit irrelevant phonological information. METHODOLOGY Participants were asked to identify which of two drawings changed color while ignoring irrelevant words in either their native language, English, or a less proficient language, Spanish. The drawings appeared on screen for either 200 or 2000ms prior to word-onset, which was followed 200ms later by a color-change. On critical trials, the irrelevant word shared phonological features with the label of the non-target drawing. Trials were blocked by preview time and language. DATA AND ANALYSIS Reaction time data from 19 bilinguals were analyzed utilizing generalized linear mixed-effects models, with fixed effects of Competition (competitor vs. control), and Language (English vs. Spanish) and random effects for Subject and Item within each preview window. FINDINGS/CONCLUSIONS No interference was observed when participants heard their native tongue in either preview condition. However, participants in the long-preview condition were significantly slower to respond when there was phonological competition in their less proficient language, despite the fact that the task required no language processing. ORIGINALITY Past work has indicated that languages are processed more automatically and cause greater interference as proficiency increases. We propose that though higher-proficiency languages may receive greater activation overall, lower-proficiency languages may be more likely to exogenously capture attention due to both relatively greater salience, and relatively less control. 
SIGNIFICANCE The present findings have implications for how we understand the dynamic relationship between language proficiency, activation, and inhibition, suggesting that the salience of the less familiar influences our ability to ignore irrelevant information.
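The DATA AND ANALYSIS step above (mixed-effects models of reaction time with fixed effects of Competition and Language, and random effects for Subject and Item) can be sketched in Python with statsmodels. This is a hedged illustration rather than the authors' analysis: the data below are simulated, and for simplicity only a by-subject random intercept is fitted (a linear mixed model), not the full crossed Subject/Item random-effects structure reported in the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a 2 (Competition) x 2 (Language) within-subject design
# for 19 participants (matching the reported sample size).
rng = np.random.default_rng(0)
n_subj, n_trials = 19, 40
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_trials),
    "competition": np.tile(np.repeat(["control", "competitor"], n_trials // 2), n_subj),
    "language": np.tile(["English", "Spanish"] * (n_trials // 2), n_subj),
})

# By-subject baseline RT plus a simulated interference effect:
# slower responses only for phonological competitors in Spanish.
base = 600 + rng.normal(0, 30, n_subj)[df["subject"]]
effect = np.where(
    (df["competition"] == "competitor") & (df["language"] == "Spanish"), 40, 0
)
df["rt"] = base + effect + rng.normal(0, 50, len(df))

# Mixed model: Competition x Language fixed effects,
# random intercept grouped by subject.
model = smf.mixedlm("rt ~ competition * language", df, groups=df["subject"])
fit = model.fit()
print(fit.summary())
```

The Competition x Language interaction term is the quantity of interest here: under the pattern described in the abstract, interference should appear only in the less proficient language.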
Affiliation(s)
- Sayuri Hayakawa
- Correspondence concerning this article should be addressed to Dr. Sayuri Hayakawa, 2240 Campus Drive, Evanston, IL 60208.
| | | | | |
|
21
|
Freire MR, Pammer K. Influence of culture on visual working memory: evidence of a cultural response bias for remote Australian Indigenous children. JOURNAL OF CULTURAL COGNITIVE SCIENCE 2020. [DOI: 10.1007/s41809-020-00063-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
22
|
Bochynska A, Vulchanova M, Vulchanov V, Landau B. Spatial language difficulties reflect the structure of intact spatial representation: Evidence from high-functioning autism. Cogn Psychol 2019; 116:101249. [PMID: 31743869 DOI: 10.1016/j.cogpsych.2019.101249] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2018] [Revised: 10/16/2019] [Accepted: 10/22/2019] [Indexed: 11/24/2022]
Abstract
Previous studies have shown that the basic properties of the visual representation of space are reflected in spatial language. This close relationship between linguistic and non-linguistic spatial systems has been observed both in typical development and in some developmental disorders. Here we provide novel evidence for structural parallels along with a degree of autonomy between these two systems among individuals with Autism Spectrum Disorder, a developmental disorder with uneven cognitive and linguistic profiles. In four experiments, we investigated language and memory for locations organized around an axis-based reference system. Crucially, we also recorded participants' eye movements during the tasks in order to provide new insights into the online processes underlying spatial thinking. Twenty-three intellectually high-functioning individuals with autism (HFA) and 23 typically developing controls (TD), all native speakers of Norwegian matched on chronological age and cognitive abilities, participated in the studies. The results revealed a well-preserved axial reference system in HFA and weakness in the representation of direction within the axis, which was especially evident in spatial language. Performance on the non-linguistic tasks did not differ between HFA and control participants, and we observed clear structural parallels between spatial language and spatial representation in both groups. However, there were some subtle differences in the use of spatial language in HFA compared to TD, suggesting that despite the structural parallels, some aspects of spatial language in HFA deviated from the typical pattern. These findings provide novel insights into the prominence of the axial reference systems in non-linguistic spatial representations and spatial language, as well as the possibility that the two systems are, to some degree, autonomous.
Affiliation(s)
- Agata Bochynska
- Department of Language and Literature, Norwegian University of Science and Technology, NTNU Trondheim, Norway; Department of Psychology, New York University, New York, NY, USA.
| | - Mila Vulchanova
- Department of Language and Literature, Norwegian University of Science and Technology, NTNU Trondheim, Norway
| | - Valentin Vulchanov
- Department of Language and Literature, Norwegian University of Science and Technology, NTNU Trondheim, Norway
| | - Barbara Landau
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
| |
|
23
|
Rahal RM, Fiedler S. Understanding cognitive and affective mechanisms in social psychology through eye-tracking. JOURNAL OF EXPERIMENTAL SOCIAL PSYCHOLOGY 2019. [DOI: 10.1016/j.jesp.2019.103842] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
24
|
|
25
|
Arias-Trejo N, Angulo-Chavira AQ, Barrón-Martínez JB. Verb-mediated anticipatory eye movements in people with Down syndrome. INTERNATIONAL JOURNAL OF LANGUAGE & COMMUNICATION DISORDERS 2019; 54:756-766. [PMID: 30983122 DOI: 10.1111/1460-6984.12473] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/19/2017] [Revised: 03/10/2019] [Accepted: 03/23/2019] [Indexed: 06/09/2023]
Abstract
BACKGROUND Children and adults with neurotypical development employ linguistic information to predict and anticipate information. Individuals with Down syndrome (DS) have weaknesses in language production and the domain of grammar but relative strengths in language comprehension and the domain of semantics. What is not clear is the extent to which they can use linguistic information, as it unfolds in real time, to anticipate upcoming information correctly. AIMS To investigate whether children and young people with DS employ verb information to predict and anticipate upcoming linguistic information. METHODS & PROCEDURES A preferential looking task was performed, using an eye-tracker, with children and teenagers with DS and a typically developing (TD) control group matched by sex and mental age (average = 5.48 years). In each of 10 trials, two images were presented, a target and a distractor, while participants heard a phrase that contained a semantically informative verb (e.g., 'eat') or an uninformative verb (e.g., 'see'). OUTCOMES & RESULTS Both DS and TD control participants could anticipate the target upon hearing an informative verb, and prediction skills were positively correlated with mental age in those with DS. CONCLUSIONS & IMPLICATIONS This work demonstrates for the first time that children and teenagers with DS can predict linguistic information based on semantic cues from verbs, and that sentence processing is driven by predictive relationships between verbs and arguments, as in children with typical development. Clinicians can take advantage of these prediction skills, using them in therapy to support weaker areas.
Affiliation(s)
- Natalia Arias-Trejo
- Laboratorio de Psicolingüística, Facultad de Psicología, Universidad Nacional Autónoma de México, Mexico City, Mexico
| | | | - Julia B Barrón-Martínez
- Laboratorio de Psicolingüística, Facultad de Psicología, Universidad Nacional Autónoma de México, Mexico City, Mexico
| |
|
26
|
Predicting (variability of) context effects in language comprehension. JOURNAL OF CULTURAL COGNITIVE SCIENCE 2019. [DOI: 10.1007/s41809-019-00025-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
|
27
|
Goller F, Choi S, Hong U, Ansorge U. Whereof one cannot speak: How language and capture of visual attention interact. Cognition 2019; 194:104023. [PMID: 31445296 DOI: 10.1016/j.cognition.2019.104023] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2018] [Revised: 07/02/2019] [Accepted: 07/04/2019] [Indexed: 11/18/2022]
Abstract
Our research addresses the important question of whether language influences cognition by studying crosslinguistic differences in nonlinguistic visual search tasks. We investigated whether capture of visual attention is mediated by characteristics corresponding to concepts that are expressed differently across languages. Korean grammatically distinguishes between tight-fit (kkita) and loose-fit (nehta) containment, whereas German collapses them into a single semantic category (in). Although linguistic processing was neither instructed nor necessary to perform the visual search task, we found that Korean speakers showed attention capture by non-instructed but target-coincident (Experiment 1) or distractor-coincident (Experiments 4 and 5) spatial fitness of the stimuli, whereas German speakers were not sensitive to it. As the tight- versus loose-fit distinction is grammaticalized only in Korean and not in German, our results demonstrate that language influences which visual features capture attention, even in non-linguistic tasks that do not require paying attention to these features. In separate control experiments (Experiments 2 and 3), we ruled out cultural or general cognitive group differences between Korean- and German-speaking participants as alternative explanations. We outline the mechanisms underlying these crosslinguistic differences in nonlinguistic visual search behaviors. This is the first study showing that linguistic spatial relational concepts held in long-term memory can affect attention capture in visual search tasks.
Affiliation(s)
| | - Soonja Choi
- Department of Linguistics and Asian/Middle-Eastern Languages, San Diego State University, United States; Faculty of Philological and Cultural Studies, University of Vienna, Austria
| | - Upyong Hong
- Department of Media and Communication, Konkuk University, Seoul, South Korea
| | | |
|
28
|
Dampuré J, Benraiss A, Vibert N. Modulation of parafoveal word processing by cognitive load during modified visual search tasks. Q J Exp Psychol (Hove) 2019; 72:1805-1826. [DOI: 10.1177/1747021818811123] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
During visual search for simple items, the amount of information that can be processed in parafoveal vision depends on the cognitive resources that are available. However, whether this applies to the semantic processing of words remains controversial. This work was designed to manipulate simultaneously two sources of cognitive load to study their impact on the depth of parafoveal word processing during a modified visual search task. The participants had to search for target words among parafoveally presented semantic, orthographic or target-unrelated distractor words while their electroencephalogram was recorded. The task-related load was manipulated by either giving target words in advance (literal task) or giving only a semantic clue to define them (categorical task). The foveal load was manipulated by displaying either a word or hash symbols at the centre of the screen. Parafoveal orthographic and semantic distractors had an impact on the early event-related potential component P2a only in the literal task and when hash symbols were displayed at the fovea, i.e., when both the task-related and foveal loads were low. The data show that all sources of cognitive load must be considered to understand how parafoveal words are processed in visual search contexts.
Affiliation(s)
- Julien Dampuré
- Centre de Recherches sur la Cognition et l’Apprentissage, CNRS, Université de Poitiers, Université de Tours, Poitiers, France
- Cognitive Neuroscience & Psycholinguistics Lab and Institute of Biomedical Technologies (IBT), University of La Laguna, La Laguna, Spain
| | - Abdelrhani Benraiss
- Centre de Recherches sur la Cognition et l’Apprentissage, CNRS, Université de Poitiers, Université de Tours, Poitiers, France
| | - Nicolas Vibert
- Centre de Recherches sur la Cognition et l’Apprentissage, CNRS, Université de Poitiers, Université de Tours, Poitiers, France
| |
|
29
|
Damjanovic L, Williot A, Blanchette I. Is it dangerous? The role of an emotional visual search strategy and threat-relevant training in the detection of guns and knives. Br J Psychol 2019; 111:275-296. [PMID: 31190378 DOI: 10.1111/bjop.12404] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2018] [Revised: 04/06/2019] [Indexed: 11/29/2022]
Abstract
Counter-terrorism strategies rely on the assumption that it is possible to increase threat detection by providing explicit verbal instructions to orient people's attention to dangerous objects and hostile behaviours in their environment. Nevertheless, whether verbal cues can be used to enhance threat detection performance under laboratory conditions is currently unclear. In Experiment 1, student participants were required to detect a picture of a dangerous or neutral object embedded within a visual search display on the basis of an emotional strategy ('is it dangerous?') or a semantic strategy ('is it an object?'). The results showed a threat superiority effect that was enhanced by the emotional visual search strategy. In Experiment 2, whilst trainee police officers displayed a greater threat superiority effect than student controls, both groups performed better under the emotional than under the semantic visual search strategy. Manipulating situational threat levels (high vs. low) in the experimental instructions had no effect on visual search performance. The current findings provide new support for the language-as-context hypothesis. They are also consistent with a dual-processing account of threat detection involving a verbally mediated route in working memory and the deployment of a visual template developed as a function of training.
Affiliation(s)
- Ljubica Damjanovic
- School of Natural Sciences and Psychology, Liverpool John Moores University, UK
| | - Alexandre Williot
- Department of Psychology, Université du Québec à Trois-Rivières, Québec, Canada
| | - Isabelle Blanchette
- Department of Psychology, Université du Québec à Trois-Rivières, Québec, Canada
| |
|
30
|
Fernandes EG, Coco MI, Branigan HP. When eye fixation might not reflect online ambiguity resolution in the visual-world paradigm: structural priming following multiple primes in Portuguese. JOURNAL OF CULTURAL COGNITIVE SCIENCE 2019. [DOI: 10.1007/s41809-019-00021-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
31
|
Fan L, Sun M, Xu M, Li Z, Diao L, Zhang X. Multiple representations in visual working memory simultaneously guide attention: The type of memory-matching representation matters. Acta Psychol (Amst) 2019; 192:126-137. [PMID: 30471521 DOI: 10.1016/j.actpsy.2018.11.005] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2018] [Revised: 11/13/2018] [Accepted: 11/14/2018] [Indexed: 11/19/2022] Open
Abstract
Whether multiple visual working memory (VWM) representations can simultaneously become active templates to guide attention is controversial. The single-item-template hypothesis argues that only one VWM representation can be active at a time, whereas the multiple-item-template hypothesis argues that multiple VWM templates can simultaneously guide attention. The present study examined the two hypotheses in three (out of four) experiments, using three different types of memory objects: Experiment 1: shapes; Experiment 2: colors; and Experiment 3: colored shapes. Participants were required to hold one (memory-1) or two objects (memory-2) in VWM while performing a tilted-line search task. Zero (match-0), one (match-1), or two (match-2) memory stimuli reappeared as distractors in the search array. Guidance effects were found for each type of memory stimulus. More importantly, the guidance effect for memory-2/match-2 trials was significantly larger than that for memory-2/match-1 and memory-1/match-1 trials when holding two colors or two colored shapes in VWM, which is in line with the multiple-item-template hypothesis. However, this pattern of simultaneous guidance was not reliably found for two memory shapes, which may indicate that a simultaneous guidance effect from two VWM representations can be observed only when the memory-matching stimuli are more effective in guiding attention. Experiment 4 directly compared the guidance effect induced by feature-based matches (partial matching) with that induced by object-based matches (complete matching) at memory set size 2. Reliable guidance effects in match-1 and match-2 trials for object-based matches, but not for feature-based matches, confirmed the crucial role of the type of memory-matching stimulus in guiding attention.
Affiliation(s)
- Lingxia Fan
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, China
| | - Mengdan Sun
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, China
| | - Mengsi Xu
- School of Psychology, Southwest University, Chongqing, China
| | - Zhiai Li
- The School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
| | - Liuting Diao
- Academy of Neuroeconomics and Neuromanagement at Ningbo University, Ningbo, China
| | - Xuemin Zhang
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, China; National Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China; Center for Collaboration and Innovation in Brain and Learning Sciences, Beijing Normal University, Beijing, China.
| |
|
32
|
Words affect visual perception by activating object shape representations. Sci Rep 2018; 8:14156. [PMID: 30237542 PMCID: PMC6148044 DOI: 10.1038/s41598-018-32483-2] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2018] [Accepted: 09/07/2018] [Indexed: 11/08/2022] Open
Abstract
Linguistic labels are known to facilitate object recognition, yet the mechanism of this facilitation is not well understood. Previous psychophysical studies have suggested that words guide visual perception by activating information about visual object shape. Here we aimed to test this hypothesis at the neural level, and to tease apart the visual and semantic contributions of words to visual object recognition. We created a set of object pictures from two semantic categories with varying shapes, and obtained subjective ratings of their shape and category similarity. We then conducted a word-picture matching experiment, while recording participants’ EEG, and tested whether the shape or the category similarity between the word’s referent and the target picture explained the spatiotemporal pattern of the picture-evoked responses. The results show that hearing a word activates representations of its referent’s shape, which interacts with the visual processing of a subsequent picture within 100 ms from its onset. Furthermore, non-visual categorical information, carried by the word, affects visual processing at later stages. These findings advance our understanding of the interaction between language and visual perception and provide insights into how the meanings of words are represented in the brain.
Collapse
|
33
|
Kreysa H, Nunnemann EM, Knoeferle P. Distinct effects of different visual cues on sentence comprehension and later recall: The case of speaker gaze versus depicted actions. Acta Psychol (Amst) 2018; 188:220-229. [PMID: 29858107 DOI: 10.1016/j.actpsy.2018.05.001] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2017] [Revised: 01/15/2018] [Accepted: 05/05/2018] [Indexed: 11/17/2022] Open
Abstract
Language-processing accounts are beginning to accommodate different visual context effects, but they remain underspecified regarding differences between cues, both during sentence comprehension and subsequent recall. We monitored participants' eye movements to mentioned characters while they listened to transitive sentences. We varied whether speaker gaze, a depicted action, neither, or both of these visual cues were available, as well as whether both cues were deictic (Experiment 1) or only speaker gaze (Experiment 2). Speaker gaze affected eye movements during comprehension similarly early to a single deictic action depiction, but significantly earlier than non-deictic action depictions; conversely, depicted actions but not speaker gaze positively affected later recall of sentence content. Thus, cue type and cue-language relations must be accommodated in characterising real-time situated language comprehension and subsequent recall of sentence content.
Collapse
Affiliation(s)
- Helene Kreysa
- Institute of Psychology, Friedrich Schiller University Jena, Jena, Germany.
| | - Eva M Nunnemann
- Cognitive Interaction Technology Center of Excellence, Bielefeld University, Bielefeld, Germany.
| | - Pia Knoeferle
- Department of German Studies and Linguistics, Humboldt-Universität zu Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Germany; Einstein Center for Neurosciences Berlin, Germany.
| |
Collapse
|
34
|
Watching diagnoses develop: Eye movements reveal symptom processing during diagnostic reasoning. Psychon Bull Rev 2018; 24:1398-1412. [PMID: 28444634 DOI: 10.3758/s13423-017-1294-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Finding a probable explanation for observed symptoms is a highly complex task that draws on information retrieval from memory. Recent research suggests that observed symptoms are interpreted in a way that maximizes coherence for a single likely explanation. This becomes particularly clear if symptom sequences support more than one explanation. However, there are no existing process data available that allow coherence maximization to be traced in ambiguous diagnostic situations, where critical information has to be retrieved from memory. In this experiment, we applied memory indexing, an eye-tracking method that affords rich time-course information concerning memory-based cognitive processing during higher order thinking, to reveal symptom processing and the preferred interpretation of symptom sequences. Participants first learned information about causes and symptoms presented in spatial frames. Gaze allocation to emptied spatial frames during symptom processing and during the diagnostic response reflected the subjective status of hypotheses held in memory and the preferred interpretation of ambiguous symptoms. Memory indexing traced how the diagnostic decision developed and revealed instances of hypothesis change and biases in symptom processing. Memory indexing thus provided direct online evidence for coherence maximization in processing ambiguous information.
Collapse
|
35
|
Covert shifts of attention can account for the functional role of “eye movements to nothing”. Mem Cognit 2017; 46:230-243. [DOI: 10.3758/s13421-017-0760-x] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
|
36
|
Souza AS, Skóra Z. The interplay of language and visual perception in working memory. Cognition 2017; 166:277-297. [PMID: 28595141 DOI: 10.1016/j.cognition.2017.05.038] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2016] [Revised: 05/17/2017] [Accepted: 05/30/2017] [Indexed: 11/29/2022]
Abstract
How do perception and language interact to form the representations that guide our thoughts and actions over the short-term? Here, we provide a first examination of this question by investigating the role of verbal labels in a continuous visual working memory (WM) task. Across four experiments, participants retained in memory the continuous color of a set of dots which were presented sequentially (Experiments 1-3) or simultaneously (Experiment 4). At test, they reproduced the colors of all dots using a color wheel. During stimulus presentation participants were required to either label the colors (color labeling) or to repeat "bababa" aloud (articulatory suppression), hence prompting or preventing verbal labeling, respectively. We tested four competing hypotheses of the labeling effect: (1) labeling generates a verbal representation that overshadows the visual representation; (2) labeling yields a verbal representation in addition to the visual one; (3) the labels function as a retrieval cue, adding distinctiveness to items in memory; and (4) labels activate visual categorical representations in long-term memory. Collectively, our experiments show that labeling does not overshadow the visual input; it augments it. Mixture modeling showed that labeling increased the quantity and quality of information in WM. Our findings are consistent with the hypothesis that labeling activates visual long-term categorical representations which help in reducing the noise in the internal representations of the visual stimuli in WM.
Collapse
Affiliation(s)
| | - Zuzanna Skóra
- Institute of Psychology, Jagiellonian University, Poland
| |
Collapse
|
37
|
de Groot F, Huettig F, Olivers CNL. Language-induced visual and semantic biases in visual search are subject to task requirements. VISUAL COGNITION 2017. [DOI: 10.1080/13506285.2017.1324934] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Floor de Groot
- Department of Experimental and Applied Psychology, Vrije Universiteit, Amsterdam, The Netherlands
| | - Falk Huettig
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
| | - Christian N. L. Olivers
- Department of Experimental and Applied Psychology, Vrije Universiteit, Amsterdam, The Netherlands
- Institute for Brain and Behaviour, Vrije Universiteit, Amsterdam, The Netherlands
| |
Collapse
|
38
|
Lara Diaz MF, Beltrán Rojas JC, Rodriguez Montoya SR, Arias Castro DM, Araque Jaramillo SM. [Analysis of the perception of static and dynamic events in people with Alzheimer's disease]. UNIVERSITAS PSYCHOLOGICA 2017. [DOI: 10.11144/javeriana.upsy15-5.apee] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
One of the domains affected in Alzheimer's disease (AD) is language. The nature and manifestations of these difficulties are related to how patients with AD perceive and understand the world around them.
In the present study, we analyzed the visual fixations of seven patients with AD and matched controls during perception tasks involving static and dynamic scenes (an image and a video, respectively). We also analyzed a language sample produced by the patients while narrating the dynamic event.
The results indicate significant differences in visual search, with the AD group showing reduced speed. In scanning tasks, people with AD identified fewer elements in an image, made fewer fixations, and used inefficient exploration strategies. For the dynamic event, visual scanning was similar between the two groups, but the linguistic expression of what was observed was impaired in the AD group, revealing the link between perception and language: although the AD patients observed the events within the moving scene, they did not later retrieve them for linguistic expression. These results have important implications both for identifying the nature of linguistic difficulties in AD and for its clinical management.
Collapse
|
39
|
Zhou W, Mo F, Zhang Y, Ding J. Semantic and Syntactic Associations During Word Search Modulate the Relationship Between Attention and Subsequent Memory. The Journal of General Psychology 2017; 144:69-88. [PMID: 28098521 DOI: 10.1080/00221309.2016.1258389] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Two experiments were conducted to investigate how linguistic information influences attention allocation in visual search and memory for words. In Experiment 1, participants searched for the synonym of a cue word among five words. The distractors included one antonym and three unrelated words. In Experiment 2, participants were asked to judge whether the five words presented on the screen formed a valid sentence. The relationships among words were sentential, semantically related, or unrelated. A memory recognition task followed. Results in both experiments showed that linguistically related words produced better memory performance. We also found significant interactions between linguistic relation conditions and memorization on eye-movement measures, indicating that good memory for words relied on frequent and long fixations during search in the unrelated condition but to a much lesser extent in linguistically related conditions. We conclude that semantic and syntactic associations attenuate the link between overt attention allocation and subsequent memory performance, suggesting that linguistic relatedness can somewhat compensate for a relative lack of attention during word search.
Collapse
Affiliation(s)
| | - Fei Mo
- Capital Normal University
| | | | | |
Collapse
|
40
|
O'Hanlon CG, Read JCA. Blindness to background: an inbuilt bias for visual objects. Dev Sci 2016; 20. [PMID: 27873433 DOI: 10.1111/desc.12478] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2014] [Accepted: 07/04/2016] [Indexed: 11/29/2022]
Abstract
Sixty-eight 2- to 12-year-olds and 30 adults were shown colorful displays on a touchscreen monitor and trained to point to the location of a named color. Participants located targets near-perfectly when presented with four abutting colored patches. When presented with three colored patches on a colored background, toddlers failed to locate targets in the background. Eye tracking demonstrated that the effect was partially mediated by a tendency not to fixate the background. However, the effect was abolished when the targets were named as nouns, whilst the change to nouns had little impact on eye movement patterns. Our results imply a powerful, inbuilt tendency to attend to objects, which may slow the development of color concepts and acquisition of color words. A video abstract of this article can be viewed at: https://youtu.be/TKO1BPeAiOI. [Correction added on 27 January 2017, after first online publication: The video abstract link was added.].
Collapse
|
41
|
de Groot F, Huettig F, Olivers CNL. Revisiting the looking at nothing phenomenon: Visual and semantic biases in memory search. VISUAL COGNITION 2016. [DOI: 10.1080/13506285.2016.1221013] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
42
|
Modulation of scene consistency and task demand on language-driven eye movements for audio-visual integration. Acta Psychol (Amst) 2016; 171:1-16. [PMID: 27640139 DOI: 10.1016/j.actpsy.2016.09.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2015] [Revised: 09/08/2016] [Accepted: 09/11/2016] [Indexed: 11/23/2022] Open
Abstract
Previous psycholinguistic studies have demonstrated that people tend to direct fixations toward the visual object to which spoken input refers during language comprehension. However, it is still unclear how the visual scene, especially the semantic consistency between object and background, affects the word-object mapping process during comprehension. Two visual world paradigm experiments were conducted to investigate how the scene consistency dynamically influenced the language-driven eye movements in a speech comprehension and a scene comprehension task. In each trial, participants listened to a spoken sentence while viewing a picture with two critical objects: one is the mentioned target object (e.g., tiger), which was embedded in either a consistent (e.g., field), inconsistent (e.g., sky) or blank background; the other is an unmentioned non-target object (e.g., eagle), which was always consistent with its background. The results showed that the fixation proportion of the inconsistent target was higher than the consistent target, and the task demand can affect the strength and the direction of the inconsistency effect before and after the target had been mentioned. In summary, the spoken language, scene-based knowledge and task demand were intertwined to determine eye movements during audio-visual integration for comprehension.
Collapse
|
43
|
Seckin M, Mesulam MM, Voss JL, Huang W, Rogalski EJ, Hurley RS. Am I looking at a cat or a dog? Gaze in the semantic variant of primary progressive aphasia is subject to excessive taxonomic capture. JOURNAL OF NEUROLINGUISTICS 2016; 37:68-81. [PMID: 26500393 PMCID: PMC4612367 DOI: 10.1016/j.jneuroling.2015.09.003] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
Object naming impairments or anomias are the most frequent symptom in aphasia, and can be caused by a variety of underlying neurocognitive mechanisms. Anomia in neurodegenerative or primary progressive aphasias (PPA) often appears to be based on taxonomic blurring of word meaning: words such as "dog" and "cat" are still recognized generically as referring to animals, but are no longer conceptually differentiated from each other, leading to coordinate errors in word-object matching. This blurring is the hallmark symptom of the "semantic variant" of PPA, whose patients invariably show focal atrophy in the left anterior temporal lobe. In this study we used eye tracking to characterize information processing online (in real time) as non-aphasic controls, semantic and non-semantic PPA participants completed a word-to-object matching task. All participants (including controls) showed taxonomic capture of gaze, spending more time viewing foils that were from the same category as the target compared to unrelated foils, but capture was more extreme in the semantic PPA group. The semantic group showed heightened capture even on trials where they ultimately pointed to the correct target, demonstrating the superiority of eye movements over traditional testing methods in detecting subtle processing impairments. Heightened capture was primarily driven by a tendency to direct gaze back and forth, repeatedly, between a set of related foils on each trial, a behavior almost never shown by controls or non-semantic participants. This suggests semantic PPA participants were accumulating and weighing evidence for a probabilistic rather than definitive mapping between the noun and several candidate objects. Neurodegeneration in PPA thus appears to distort lexical concepts prior to extinguishing them altogether, causing uncertainty in recognition and word-object matching.
Collapse
Affiliation(s)
- Mustafa Seckin
- Cognitive Neurology and Alzheimer’s Disease Center, Northwestern University, Chicago, IL
| | - M.-Marsel Mesulam
- Cognitive Neurology and Alzheimer’s Disease Center, Northwestern University, Chicago, IL
- Department of Neurology, Northwestern University, Chicago, IL
| | - Joel L. Voss
- Department of Neurology, Northwestern University, Chicago, IL
- Department of Medical Social Sciences, Northwestern University, Chicago, IL
| | - Wei Huang
- Cognitive Neurology and Alzheimer’s Disease Center, Northwestern University, Chicago, IL
| | - Emily J. Rogalski
- Cognitive Neurology and Alzheimer’s Disease Center, Northwestern University, Chicago, IL
| | - Robert S. Hurley
- Cognitive Neurology and Alzheimer’s Disease Center, Northwestern University, Chicago, IL
- Department of Neurology, Northwestern University, Chicago, IL
| |
Collapse
|
44
|
Chabal S, Marian V. Speakers of different languages process the visual world differently. J Exp Psychol Gen 2016; 144:539-50. [PMID: 26030171 DOI: 10.1037/xge0000075] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Language and vision are highly interactive. Here we show that people activate language when they perceive the visual world, and that this language information impacts how speakers of different languages focus their attention. For example, when searching for an item (e.g., clock) in the same visual display, English and Spanish speakers look at different objects. Whereas English speakers searching for the clock also look at a cloud, Spanish speakers searching for the clock also look at a gift, because the Spanish names for gift (regalo) and clock (reloj) overlap phonologically. These different looking patterns emerge despite an absence of direct language input, showing that linguistic information is automatically activated by visual scene processing. We conclude that the varying linguistic information available to speakers of different languages affects visual perception, leading to differences in how the visual world is processed.
Collapse
Affiliation(s)
- Sarah Chabal
- Department of Communication Sciences and Disorders, Northwestern University
| | - Viorica Marian
- Department of Communication Sciences and Disorders, Northwestern University
| |
Collapse
|
45
|
Seckin M, Mesulam MM, Rademaker AW, Voss JL, Weintraub S, Rogalski EJ, Hurley RS. Eye movements as probes of lexico-semantic processing in a patient with primary progressive aphasia. Neurocase 2016; 22:65-75. [PMID: 25982291 PMCID: PMC4651860 DOI: 10.1080/13554794.2015.1045523] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
Eye movement trajectories during a verbally cued object search task were used as probes of lexico-semantic associations in an anomic patient with primary progressive aphasia. Visual search was normal on trials where the target object could be named but became lengthy and inefficient on trials where the object failed to be named. The abnormality was most profound if the noun denoting the object could not be recognized. Even trials where the name of the target object was recognized but not retrieved triggered abnormal eye movements, demonstrating that retrieval failures can have underlying associative components despite intact comprehension of the corresponding noun.
Collapse
Affiliation(s)
- Mustafa Seckin
- Cognitive Neurology and Alzheimer's Disease Center, Northwestern University, Chicago, IL, USA
| | - M-Marsel Mesulam
- Cognitive Neurology and Alzheimer's Disease Center, Northwestern University, Chicago, IL, USA
| | - Alfred W Rademaker
- Cognitive Neurology and Alzheimer's Disease Center, Northwestern University, Chicago, IL, USA
- Department of Preventive Medicine, Northwestern University, Chicago, IL, USA
| | - Joel L Voss
- Department of Medical Social Sciences, Northwestern University, Chicago, IL, USA
| | - Sandra Weintraub
- Cognitive Neurology and Alzheimer's Disease Center, Northwestern University, Chicago, IL, USA
| | - Emily J Rogalski
- Cognitive Neurology and Alzheimer's Disease Center, Northwestern University, Chicago, IL, USA
| | - Robert S Hurley
- Cognitive Neurology and Alzheimer's Disease Center, Northwestern University, Chicago, IL, USA
| |
Collapse
|
46
|
Dampure J, Benraiss A, Vibert N. Task-dependent modulation of word processing mechanisms during modified visual search tasks. Q J Exp Psychol (Hove) 2015; 69:1145-63. [PMID: 26176489 DOI: 10.1080/17470218.2015.1070886] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2023]
Abstract
During visual search for words, the impact of the visual and semantic features of words varies as a function of the search task. This event-related potential (ERP) study focused on the way these features of words are used to detect similarities between the distractor words that are glanced at and the target word, as well as to then reject the distractor words. The participants had to search for a target word that was either given literally or defined by a semantic clue among words presented sequentially. The distractor words included words that resembled the target and words that were semantically related to the target. The P2a component was the first component to be modulated by the visual and/or semantic similarity of distractors to the target word, and these modulations varied according to the task. The same held true for the later N300 and N400 components, which confirms that, depending on the task, distinct processing pathways were sensitized through attentional modulation. Hence, the process that matches what is perceived with the target acts during the first 200 ms after word presentation, and both early detection and late rejection processes of words depend on the search task and on the representation of the target stored in memory.
Collapse
Affiliation(s)
- Julien Dampure
- Centre de Recherches sur la Cognition et l'Apprentissage, Université de Poitiers, Université François Rabelais de Tours, CNRS UMR 7295, Maison des Sciences de l'Homme et de la Société, Poitiers, France
| | - Abdelrhani Benraiss
- Centre de Recherches sur la Cognition et l'Apprentissage, Université de Poitiers, Université François Rabelais de Tours, CNRS UMR 7295, Maison des Sciences de l'Homme et de la Société, Poitiers, France
| | - Nicolas Vibert
- Centre de Recherches sur la Cognition et l'Apprentissage, Université de Poitiers, Université François Rabelais de Tours, CNRS UMR 7295, Maison des Sciences de l'Homme et de la Société, Poitiers, France
| |
Collapse
|
47
|
Scholz A, von Helversen B, Rieskamp J. Eye movements reveal memory processes during similarity- and rule-based decision making. Cognition 2015; 136:228-46. [DOI: 10.1016/j.cognition.2014.11.019] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2014] [Revised: 11/13/2014] [Accepted: 11/17/2014] [Indexed: 11/25/2022]
|
48
|
Huettig F. Four central questions about prediction in language processing. Brain Res 2015; 1626:118-35. [PMID: 25708148 DOI: 10.1016/j.brainres.2015.02.014] [Citation(s) in RCA: 139] [Impact Index Per Article: 15.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2014] [Revised: 02/05/2015] [Accepted: 02/07/2015] [Indexed: 11/19/2022]
Abstract
The notion that prediction is a fundamental principle of human information processing has been en vogue over recent years. The investigation of language processing may be particularly illuminating for testing this claim. Linguists traditionally have argued prediction plays only a minor role during language understanding because of the vast possibilities available to the language user as each word is encountered. In the present review I consider four central questions of anticipatory language processing: Why (i.e. what is the function of prediction in language processing)? What (i.e. what are the cues used to predict up-coming linguistic information and what type of representations are predicted)? How (what mechanisms are involved in predictive language processing and what is the role of possible mediating factors such as working memory)? When (i.e. do individuals always predict up-coming input during language processing)? I propose that prediction occurs via a set of diverse PACS (production-, association-, combinatorial-, and simulation-based prediction) mechanisms which are minimally required for a comprehensive account of predictive language processing. Models of anticipatory language processing must be revised to take multiple mechanisms, mediating factors, and situational context into account. Finally, I conjecture that the evidence considered here is consistent with the notion that prediction is an important aspect but not a fundamental principle of language processing. This article is part of a Special Issue entitled SI: Prediction and Attention.
Collapse
Affiliation(s)
- Falk Huettig
- Max Planck Institute for Psycholinguistics, P.O. Box 310, 6500 AH Nijmegen, The Netherlands; Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands.
| |
Collapse
|
49
|
Burigo M, Knoeferle P. Visual attention during spatial language comprehension. PLoS One 2015; 10:e0115758. [PMID: 25607540 PMCID: PMC4301815 DOI: 10.1371/journal.pone.0115758] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2014] [Accepted: 12/01/2014] [Indexed: 11/18/2022] Open
Abstract
Spatial terms such as “above”, “in front of”, and “on the left of” are all essential for describing the location of one object relative to another object in everyday communication. Apprehending such spatial relations involves relating linguistic to object representations by means of attention. This requires at least one attentional shift, and models such as the Attentional Vector Sum (AVS) predict the direction of that attention shift, from the sausage to the box for spatial utterances such as “The box is above the sausage”. To the extent that this prediction generalizes to overt gaze shifts, a listener’s visual attention should shift from the sausage to the box. However, listeners tend to rapidly look at referents in their order of mention and even anticipate them based on linguistic cues, a behavior that predicts a converse attentional shift from the box to the sausage. Four eye-tracking experiments assessed the role of overt attention in spatial language comprehension by examining to what extent visual attention is guided by words in the utterance and to what extent it also shifts “against the grain” of the unfolding sentence. The outcome suggests that comprehenders’ visual attention is predominantly guided by their interpretation of the spatial description. Visual shifts against the grain occurred only when comprehenders had some extra time, and their absence did not affect comprehension accuracy. However, the timing of this reverse gaze shift on a trial correlated with that trial’s verification time. Thus, while the timing of these gaze shifts is subtly related to the verification time, their presence is not necessary for successful verification of spatial relations.
Collapse
Affiliation(s)
- Michele Burigo
- Department of Linguistics, University of Bielefeld, Bielefeld, Germany and Language & Cognition Research Group, Cognitive Interaction Technology—Center of Excellence (CITEC), University of Bielefeld, Bielefeld, Germany
| | - Pia Knoeferle
- Department of Linguistics, University of Bielefeld, Bielefeld, Germany and Language & Cognition Research Group, Cognitive Interaction Technology—Center of Excellence (CITEC), University of Bielefeld, Bielefeld, Germany
| |
Collapse
|
50
|
Scholz A, Mehlhorn K, Krems JF. Listen up, eye movements play a role in verbal memory retrieval. PSYCHOLOGICAL RESEARCH 2014; 80:149-58. [DOI: 10.1007/s00426-014-0639-4] [Citation(s) in RCA: 42] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2014] [Accepted: 12/06/2014] [Indexed: 11/24/2022]
|