1. Senftleben U, Kruse J, Scherbaum S, Korb FM. You Eat with Your Eyes: Framing of Food Choice Options Affects Decision Conflict and Visual Attention in Food Choice Task. Nutrients 2024; 16:3343. [PMID: 39408310; PMCID: PMC11478952; DOI: 10.3390/nu16193343]
Abstract
Background/Objectives: Frequent poor dietary choices can have significant consequences. To understand the underlying decision-making processes, most food choice tasks present a binary choice between a tasty but less healthy option and a healthy but less tasty option. It is assumed that people come to a decision by trading off the respective health and taste values. However, it is unclear whether and to what extent food choice goes beyond this. Methods: We use a novel eye-tracking experiment where we compare a typical food choice task (image condition) with an abstract value-based decision-making task using pre-matched percentages of health and taste (text condition; e.g., 10% healthy and 80% tasty) in 78 participants. Results: We find a higher frequency of unhealthy choices and reduced response times in the image condition compared to the text condition, suggesting more impulsive decision making. The eye-tracking analysis shows that, in the text condition, the item corresponding to the subsequent choice receives more attention than the alternative option, whereas in the image condition this only applies to the healthy item. Conclusions: Our findings suggest that decision-making in typical food choice tasks goes beyond a mere value-based trade-off. These differences could be due to the involvement of different attentional processes in typical food choice tasks or due to the modality of stimulus presentation. These results could help to understand why people prefer tasty but unhealthy food options even when health is important to them.
Affiliation(s)
- Ulrike Senftleben: Faculty of Psychology, TUD Dresden University of Technology, 01062 Dresden, Germany; (J.K.); (S.S.); (F.M.K.)
2. Antal C, de Almeida RG. Grasping the Concept of an Object at a Glance: Category Information Accessed by Brief Dichoptic Presentation. Cogn Sci 2024; 48:e70002. [PMID: 39428757; DOI: 10.1111/cogs.70002]
Abstract
What type of conceptual information about an object do we get at a brief glance? In two experiments, we investigated the nature of conceptual tokening: the moment at which conceptual information about an object is accessed. Using a masked picture-word congruency task with dichoptic presentations at "brief" (50-60 ms) and "long" (190-200 ms) durations, participants judged the relation between a picture (e.g., a banana) and a word representing one of four property types about the object: superordinate (fruit), basic level (banana), a high-salient feature (yellow), or a low-salient feature (peel). In Experiment 1, stimuli were presented in black-and-white; in Experiment 2, they were presented in red and blue, with participants wearing red-blue anaglyph glasses. This manipulation allowed for the independent projection of stimuli to the left- and right-hemisphere visual areas, aiming to probe the early effects of these projections in conceptual tokening. Results showed that superordinate and basic-level properties elicited faster and more accurate responses than high- and low-salient features at both presentation times. This advantage persisted even when the objects were divided into categories (e.g., animals, vegetables, vehicles, tools), and when objects contained high-salient visual features. However, contrasts between categories showed that animals, fruits, and vegetables tend to be categorized at the superordinate level, while vehicles tend to be categorized at the basic level. Also, for a restricted class of objects, high-salient features representing diagnostic color information (yellow for the picture of a banana) facilitated congruency judgments to the same extent as superordinate and basic-level labels. We suggest that early access to object concepts yields superordinate and basic-level information, with features only yielding effects at a later stage of processing, unless they represent diagnostic color information. We discuss these results as advancing a unified theory of conceptual representation, integrating key postulates of atomism and feature-based theories.
Affiliation(s)
- Caitlyn Antal: Department of Psychology, McGill University; Department of Psychology, Concordia University
3. Kruse J, Senftleben U, Scherbaum S, Korb FM. A picture is worth a thousand words: Framing of food choice options affects decision conflict and mid-frontal theta in food choice task. Appetite 2024; 201:107616. [PMID: 39098082; DOI: 10.1016/j.appet.2024.107616]
Abstract
In food choices, conflict arises when choosing between a healthy, but less tasty food item and a tasty, but less healthy food item. The underlying assumption is that people trade off the health and taste properties of food items to reach a decision. To probe this assumption, we presented food items either as colored images (image condition, e.g., a photograph of a granola bar) or as pre-matched percentages of taste and health values (text condition, e.g., 20% healthy and 80% tasty). We recorded choices, response times, and electroencephalography activity to calculate mid-frontal theta power as a marker of conflict. At the behavioral level, we found higher response times for healthy compared to unhealthy choices, and for difficult compared to easy decisions, in both conditions, indicating the experience of a decision conflict. At the neural level, mid-frontal theta power was higher for healthy than unhealthy choices and for difficult than easy choices, but only in the image condition. These results suggest that conflict type and/or decision strategies differ between the image and text conditions. The present results can be helpful in understanding how dietary decisions can be influenced towards healthier food choices.
Affiliation(s)
- Johanna Kruse: Department of Psychology, TUD Dresden University of Technology, Dresden, Germany
- Ulrike Senftleben: Department of Psychology, TUD Dresden University of Technology, Dresden, Germany
- Stefan Scherbaum: Department of Psychology, TUD Dresden University of Technology, Dresden, Germany
- Franziska M Korb: Department of Psychology, TUD Dresden University of Technology, Dresden, Germany
4. Kallmayer A, Võ MLH. Anchor objects drive realism while diagnostic objects drive categorization in GAN generated scenes. Communications Psychology 2024; 2:68. [PMID: 39242968; PMCID: PMC11332195; DOI: 10.1038/s44271-024-00119-z]
Abstract
Our visual surroundings are highly complex. Despite this, we understand and navigate them effortlessly. This requires transforming incoming sensory information into representations that not only span low- to high-level visual features (e.g., edges, object parts, objects), but likely also reflect co-occurrence statistics of objects in real-world scenes. Here, so-called anchor objects, derived from object clustering statistics in real-world scenes, are defined as being highly predictive of the location and identity of frequently co-occurring (usually smaller) objects, while so-called diagnostic objects are predictive of the larger semantic context (i.e., scene category). Across two studies (N1 = 50, N2 = 44), we investigate which of these properties underlie scene understanding across two dimensions, realism and categorisation, using scenes generated from Generative Adversarial Networks (GANs) which naturally vary along these dimensions. We show that anchor objects and mainly high-level features extracted from a range of pre-trained deep neural networks (DNNs) drove realism both at first glance and after initial processing. Categorisation performance was mainly determined by diagnostic objects, regardless of realism, at first glance and after initial processing. Our results are a testament to the visual system's ability to pick up on reliable, category-specific sources of information that are flexible towards disturbances across the visual feature hierarchy.
Affiliation(s)
- Aylin Kallmayer: Goethe University Frankfurt, Department of Psychology, Frankfurt am Main, Germany
- Melissa L-H Võ: Goethe University Frankfurt, Department of Psychology, Frankfurt am Main, Germany
5. Ahmad FN, Tremblay S, Karkuszewski MD, Alvi M, Hockley WE. A conceptual-perceptual distinctiveness processing account of the superior recognition memory of pictures over environmental sounds. Q J Exp Psychol (Hove) 2024; 77:1555-1580. [PMID: 37705452; PMCID: PMC11181738; DOI: 10.1177/17470218231202986]
Abstract
Researchers have proposed a coarser or gist-based representation for sounds, whereas a more verbatim-based representation is retrieved from long-term memory to account for higher recognition performance for pictures. This study examined the mechanism underlying the recognition advantage for pictures. In Experiment 1A, pictures and sounds were presented in separate trials in a mixed list during the study phase, and in a yes-no test, participants showed a higher proportion of correct responses for targets, exemplar foils categorically related to the target, and novel foils for pictures compared with sounds. In Experiment 1B, the picture recognition advantage was replicated in a two-alternative forced-choice test for the novel and exemplar foil conditions. In Experiment 2A, even when verbal labels (i.e., written labels) were presented for sounds during the study phase, a recognition advantage for pictures was shown for both targets and exemplar foils. Experiment 2B showed that the presence of written labels for sounds during both the study and test phases did not eliminate the advantage of pictures in terms of correct rejection of exemplar foils. Finally, in two additional experiments, we examined whether the degree of similarity within pictures and sounds could account for the recognition advantage of pictures. The mean similarity rating for pictures was higher than that for sounds in the exemplar test condition, whereas the mean similarity rating for sounds was higher than that for pictures in the novel test condition. These results pose a challenge for some versions of distinctiveness accounts of the picture superiority effect. We propose a conceptual-perceptual distinctiveness processing account of recognition memory for pictures and sounds.
Affiliation(s)
- Fahad N Ahmad: Department of Psychology, Wilfrid Laurier University, Waterloo, ON, Canada
- Savannah Tremblay: Department of Psychology, University of Toronto, Toronto, ON, Canada; Rotman Research Institute at Baycrest, Toronto, ON, Canada
- Marium Alvi: Department of Psychology, York University, Toronto, ON, Canada
- William E Hockley: Department of Psychology, Wilfrid Laurier University, Waterloo, ON, Canada
6. Benn Y, Ivanova AA, Clark O, Mineroff Z, Seikus C, Silva JS, Varley R, Fedorenko E. The language network is not engaged in object categorization. Cereb Cortex 2023; 33:10380-10400. [PMID: 37557910; PMCID: PMC10545444; DOI: 10.1093/cercor/bhad289]
Abstract
The relationship between language and thought is the subject of long-standing debate. One claim states that language facilitates categorization of objects based on a certain feature (e.g. color) through the use of category labels that reduce interference from other, irrelevant features. Therefore, language impairment is expected to affect categorization of items grouped by a single feature (low-dimensional categories, e.g. "Yellow Things") more than categorization of items that share many features (high-dimensional categories, e.g. "Animals"). To test this account, we conducted two behavioral studies with individuals with aphasia and an fMRI experiment with healthy adults. The aphasia studies showed that selective low-dimensional categorization impairment was present in some, but not all, individuals with severe anomia and was not characteristic of aphasia in general. fMRI results revealed little activity in language-responsive brain regions during both low- and high-dimensional categorization; instead, categorization recruited the domain-general multiple-demand network (involved in wide-ranging cognitive tasks). Combined, results demonstrate that the language system is not implicated in object categorization. Instead, selective low-dimensional categorization impairment might be caused by damage to brain regions responsible for cognitive control. Our work adds to the growing evidence of the dissociation between the language system and many cognitive tasks in adults.
Affiliation(s)
- Yael Benn: Department of Psychology, Manchester Metropolitan University, Manchester M15 6BH, United Kingdom
- Anna A Ivanova: Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, MA 02139, United States; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, United States
- Oliver Clark: Department of Psychology, Manchester Metropolitan University, Manchester M15 6BH, United Kingdom
- Zachary Mineroff: Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, MA 02139, United States; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, United States
- Chloe Seikus: Division of Psychology & Language Sciences, University College London, London WC1E 6BT, UK
- Jack Santos Silva: Division of Psychology & Language Sciences, University College London, London WC1E 6BT, UK
- Rosemary Varley: Division of Psychology & Language Sciences, University College London, London WC1E 6BT, UK
- Evelina Fedorenko: Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, MA 02139, United States; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, United States
7. Exner A, Machulska A, Stalder T, Klucken T. Biased information processing and anxiety coping: differences in attentional and approach patterns towards positive cues in repressors. Current Psychology 2022. [DOI: 10.1007/s12144-022-04087-7]
Abstract
Individual differences in emotional coping styles are likely to affect information processing at different stages. Repressive coping is assumed to be related to an attentional bias away from threatening information. Possible links to biases in later stages of information processing have not been investigated to date. In the current study, 82 participants completed the visual dot-probe task as a measure of attentional bias and the Approach-Avoidance Task (AAT) as a measure of approach/avoidance bias, and were classified into coping groups via the Mainz Coping Inventory (MCI). Prevalence of attention bias and approach/avoidance bias was compared between groups. Main results revealed a strong approach tendency toward positive stimuli for repressors and a strong avoidance tendency for sensitizers. No group differences were found for approach bias to negative stimuli or for attention bias. The present findings of strong preferential processing of positive stimuli in repressors may be part of broader information processing alterations, which may also be linked to alterations in emotion processing.
8. Wolfe JM, Kosovicheva A, Wolfe B. Normal blindness: when we Look But Fail To See. Trends Cogn Sci 2022; 26:809-819. [PMID: 35872002; PMCID: PMC9378609; DOI: 10.1016/j.tics.2022.06.006]
Abstract
Humans routinely miss important information that is 'right in front of our eyes', from overlooking typos in a paper to failing to see a cyclist in an intersection. Recent studies on these 'Looked But Failed To See' (LBFTS) errors point to a common mechanism underlying these failures, whether the missed item was an unexpected gorilla, the clearly defined target of a visual search, or that simple typo. We argue that normal blindness is the by-product of the limited-capacity prediction engine that is our visual system. The processes that evolved to allow us to move through the world with ease are virtually guaranteed to cause us to miss some significant stimuli, especially in important tasks like driving and medical image perception.
Affiliation(s)
- Jeremy M Wolfe: Brigham and Women's Hospital, 900 Commonwealth Avenue, Boston, MA 02215, USA; Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA
- Anna Kosovicheva: Department of Psychology, University of Toronto Mississauga, 3359 Mississauga Road, Mississauga, Ontario, L5L 1C6, Canada
- Benjamin Wolfe: Department of Psychology, University of Toronto Mississauga, 3359 Mississauga Road, Mississauga, Ontario, L5L 1C6, Canada
9. Sommerfeld L, Staudte M, Kray J. Ratings of name agreement and semantic categorization of 247 colored clipart pictures by young German children. Acta Psychol (Amst) 2022; 226:103558. [PMID: 35439618; DOI: 10.1016/j.actpsy.2022.103558]
Abstract
Developmental and longitudinal studies with children increasingly use pictorial stimuli in cognitive, psychological, and psycholinguistic research. To enhance validity and comparability within and across those studies, the use of normed pictures is recommended. Moreover, creating picture sets and evaluating them in rating studies is very time-consuming, particularly for samples of young children in which testing time is rather limited. As there is an increasing number of studies that investigate young German children's semantic language processing with colored clipart stimuli, this work provides a first set of 247 colored cliparts with ratings from German native-speaking children aged 4 to 6 years. We assessed two central rating aspects of pictures: name agreement (Do pictures elicit the intended name of an object?) and semantic categorization (Are objects classified as members of the intended semantic category?). Our ratings indicate that children are proficient in naming and even better in semantic categorization of objects, and that both seem to improve with age across early childhood. Finally, this paper discusses some features of pictorial objects that might be important for children's name agreement and semantic categorization and could be considered in future picture rating studies.
10. Hedayati S, O'Donnell RE, Wyble B. A model of working memory for latent representations. Nat Hum Behav 2022; 6:709-719. [PMID: 35115675; DOI: 10.1038/s41562-021-01264-9]
Abstract
We propose a mechanistic explanation of how working memories are built and reconstructed from the latent representations of visual knowledge. The proposed model features a variational autoencoder with an architecture that corresponds broadly to the human visual system and an activation-based binding pool of neurons that links latent space activities to tokenized representations. The simulation results revealed that new pictures of familiar types of items can be encoded and retrieved efficiently from higher levels of the visual hierarchy, whereas truly novel patterns are better stored using only early layers. Moreover, a given stimulus in working memory can have multiple codes, which allows representation of visual detail in addition to categorical information. Finally, we validated our model's assumptions by testing a series of predictions against behavioural results obtained from working memory tasks. The model provides a demonstration of how visual knowledge yields compact visual representation for efficient memory encoding.
Affiliation(s)
- Shekoofeh Hedayati: Department of Psychology, The Pennsylvania State University, University Park, PA, USA
- Ryan E O'Donnell: Department of Psychology, The Pennsylvania State University, University Park, PA, USA
- Brad Wyble: Department of Psychology, The Pennsylvania State University, University Park, PA, USA
11. Neudorf J, Gould L, Mickleborough MJS, Ekstrand C, Borowsky R. Unique, Shared, and Dominant Brain Activation in Visual Word Form Area and Lateral Occipital Complex during Reading and Picture Naming. Neuroscience 2022; 481:178-196. [PMID: 34800577; DOI: 10.1016/j.neuroscience.2021.11.022]
Abstract
Identifying printed words and pictures concurrently is ubiquitous in daily tasks, and so it is important to consider the extent to which reading words and naming pictures may share a cognitive-neurophysiological functional architecture. Two functional magnetic resonance imaging (fMRI) experiments examined whether reading along the left ventral occipitotemporal region (vOT; often referred to as a visual word form area, VWFA) has activation that is overlapping with referent pictures (i.e., both conditions significant and shared, or with one significantly more dominant) or unique (i.e., one condition significant, the other not), and whether picture naming along the right lateral occipital complex (LOC) has overlapping or unique activation relative to referent words. Experiment 1 used familiar regular and exception words (to force lexical reading) and their corresponding pictures in separate naming blocks, and showed dominant activation for pictures in the LOC, and shared activation in the VWFA for exception words and their corresponding pictures (regular words did not elicit significant VWFA activation). Experiment 2 controlled for visual complexity by superimposing the words and pictures and instructing participants to either name the word or the picture, and showed primarily shared activation in the VWFA and LOC regions for both word reading and picture naming, with some dominant activation for pictures in the LOC. Overall, these results highlight the importance of including exception words to force lexical reading when comparing to picture naming, and the significant shared activation in VWFA and LOC serves to challenge specialized models of reading or picture naming.
Affiliation(s)
- Josh Neudorf: Cognitive Neuroscience Lab, Department of Psychology, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Layla Gould: Division of Neurosurgery, College of Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Marla J S Mickleborough: Cognitive Neuroscience Lab, Department of Psychology, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Chelsea Ekstrand: Cognitive Neuroscience Lab, Department of Psychology, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Ron Borowsky: Cognitive Neuroscience Lab, Department of Psychology, University of Saskatchewan, Saskatoon, Saskatchewan, Canada; Division of Neurosurgery, College of Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
12. Comprehension exposures to words in sentence contexts impact spoken word production. Mem Cognit 2021; 50:192-215. [PMID: 34453287; DOI: 10.3758/s13421-021-01214-w]
Abstract
Comprehension or production of isolated words and production of words embedded in sentence contexts facilitated later production in previous research. The present study examined the extent to which contextualized comprehension exposures would impact later production. Two repetition priming experiments were conducted with Spanish-English bilingual participants. In Experiment 1 (N = 112), all encoding stimuli were presented visually, and in Experiment 2 (N = 112), all encoding stimuli were presented auditorily. After reading/listening or translating isolated words or words embedded in sentences at encoding, pictures corresponding to each target word were named aloud. Repetition priming relative to new items was measured in RT and accuracy. Relative to isolated encoding, sentence encoding reduced RT priming but not accuracy priming. In reading/listening encoding conditions, both isolated and embedded words elicited accuracy priming in picture naming, but only isolated words elicited RT priming. In translation encoding conditions, repetition priming effects in RT (but not accuracy) were stronger for lower-frequency words and with lower proficiency in the picture-naming response language. RT priming was strongest when the translation response at encoding was produced in the same language as final picture naming. In contrast, accuracy priming was strongest when the translation stimulus at encoding was comprehended in the same language as final picture naming. Thus, comprehension at encoding increased the rate of successful retrieval, whereas production at encoding speeded later production. Practice of comprehension may serve to gradually move less well-learned words from receptive to productive vocabulary.
13. Ivanova AA, Mineroff Z, Zimmerer V, Kanwisher N, Varley R, Fedorenko E. The Language Network Is Recruited but Not Required for Nonverbal Event Semantics. Neurobiology of Language (Cambridge, Mass.) 2021; 2:176-201. [PMID: 37216147; PMCID: PMC10158592; DOI: 10.1162/nol_a_00030]
Abstract
The ability to combine individual concepts of objects, properties, and actions into complex representations of the world is often associated with language. Yet combinatorial event-level representations can also be constructed from nonverbal input, such as visual scenes. Here, we test whether the language network in the human brain is involved in and necessary for semantic processing of events presented nonverbally. In Experiment 1, we scanned participants with fMRI while they performed a semantic plausibility judgment task versus a difficult perceptual control task on sentences and line drawings that describe/depict simple agent-patient interactions. We found that the language network responded robustly during the semantic task performed on both sentences and pictures (although its response to sentences was stronger). Thus, language regions in healthy adults are engaged during a semantic task performed on pictorial depictions of events. But is this engagement necessary? In Experiment 2, we tested two individuals with global aphasia, who have sustained massive damage to perisylvian language areas and display severe language difficulties, against a group of age-matched control participants. Individuals with aphasia were severely impaired on the task of matching sentences to pictures. However, they performed close to controls in assessing the plausibility of pictorial depictions of agent-patient interactions. Overall, our results indicate that the left frontotemporal language network is recruited but not necessary for semantic processing of nonverbally presented events.
Affiliation(s)
- Anna A. Ivanova: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Zachary Mineroff: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Vitor Zimmerer: Division of Psychology and Language Sciences, University College London, London, UK
- Nancy Kanwisher: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Rosemary Varley: Division of Psychology and Language Sciences, University College London, London, UK
- Evelina Fedorenko: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
14. Dudschig C, Kaup B. Pictorial vs. linguistic negation: Investigating negation in imperatives across different symbol domains. Acta Psychol (Amst) 2021; 214:103266. [PMID: 33609971; DOI: 10.1016/j.actpsy.2021.103266]
Abstract
The processing of negation is typically regarded as one of the most demanding cognitive processes as it often involves the reversal of input information. As negation is also regarded as a core linguistic process, to date, investigations of negation have typically been linguistic in nature. However, negation is a standard operator also within non-linguistic domains. For example, traffic signs often use negation to indicate a prohibition of specific actions (e.g., no left turn). In the current study, we investigate whether processing difficulties that are typically reported within the linguistic domain generalize to pictorial negation. Across two experiments, linguistic negation and pictorial negation were directly compared to their affirmative counterparts. In line with the literature, the results show that there is a general processing benefit for pictorial input. Most interestingly, the core process of negation also benefits from the pictorial input. Specifically, the processing difficulty in pictorial negation compared to affirmation is less pronounced than within the linguistic domain, especially concerning error rates. In the current experiments, pictorial negation did not result in increased error rates compared to the affirmative condition. Overall, the current results suggest that negation in pictorial conditions also results in a slowing of information processing. However, the use of pictorial negation can ease processing difficulty over linguistic negation.
15
Farshchi M, Kiba A, Sawada T. Seeing our 3D world while only viewing contour-drawings. PLoS One 2021; 16:e0242581. [PMID: 33481778] [PMCID: PMC7822326] [DOI: 10.1371/journal.pone.0242581]
Abstract
Artists can represent a 3D object by using only contours in a 2D drawing. Prior studies have shown that people can use such drawings to perceive 3D shapes reliably, but it is not clear how useful this kind of contour information actually is in a real dynamical scene in which people interact with objects. To address this issue, we developed an Augmented Reality (AR) device that can show a participant a contour-drawing or a grayscale-image of a real dynamical scene in an immersive manner. We compared the performance of people in a variety of run-of-the-mill tasks with both contour-drawings and grayscale-images under natural viewing conditions in three behavioral experiments. The results of these experiments showed that the people could perform almost equally well with both types of images. This contour information may be sufficient to provide the basis for our visual system to obtain much of the 3D information needed for successful visuomotor interactions in our everyday life.
Affiliation(s)
- Maddex Farshchi
- School of Psychology, National Research University Higher School of Economics, Moscow, Russia
- Alexandra Kiba
- School of Psychology, National Research University Higher School of Economics, Moscow, Russia
- Tadamasa Sawada
- School of Psychology, National Research University Higher School of Economics, Moscow, Russia
16
Nieznański M. Levels-of-processing effects on context and target recollection for words and pictures. Acta Psychol (Amst) 2020; 209:103127. [PMID: 32603912] [DOI: 10.1016/j.actpsy.2020.103127]
Abstract
The effects of levels of processing (LoP) on memory performance have been extensively studied in cognitive psychology for about half a century. The initial observation of superior memory for words studied under a semantic orienting task rather than a perceptual orienting task elicited a theoretical debate about the underlying mechanisms of this effect. Next, research on LoP effects was extended to pictorial stimuli and connected with analyses of recollection and familiarity processes of recognition memory. The main aim of the current study was to explore the effects of LoP on two distinct components of recollection memory: context recollection and target recollection, two processes recently differentiated in dual-recollection theory. Verbal and pictorial materials were used in several experiments and the participants were asked to remember the study context defined by the kind of orienting task performed. LoP effects were confirmed for context and target recollection when words were used as stimuli. However, reversed LoP effects for context recollection were found in experiments using pictures as the to-be-remembered material. The function of the distinctiveness of pictorial material and the role of the effortfulness of cognitive operations for recollection were analysed and discussed from the perspective of the sensory-semantic model and the source monitoring framework.
17
Identifying task-relevant spectral signatures of perceptual categorization in the human cortex. Sci Rep 2020; 10:7870. [PMID: 32398733] [PMCID: PMC7217881] [DOI: 10.1038/s41598-020-64243-6]
Abstract
The human brain has developed mechanisms to efficiently decode sensory information according to perceptual categories of high prevalence in the environment, such as faces, symbols, and objects. Neural activity produced within localized brain networks has been associated with the process that integrates both sensory bottom-up and cognitive top-down information processing. Yet, how specifically the different types and components of neural responses reflect the local networks’ selectivity for categorical information processing is still unknown. In this work we train Random Forest classification models to decode eight perceptual categories from a broad spectrum of human intracranial signals (4–150 Hz, 100 subjects) obtained during a visual perception task. We then analyze which of the spectral features the algorithm deemed relevant to the perceptual decoding and gain insight into which parts of the recorded activity are actually characteristic of the visual categorization process in the human brain. We show that network selectivity for a single category or multiple categories in sensory and non-sensory cortices is related to specific patterns of power increases and decreases in both low (4–50 Hz) and high (50–150 Hz) frequency bands. By focusing on task-relevant neural activity and separating it into dissociated anatomical and spectrotemporal groups, we uncover spectral signatures that characterize neural mechanisms of visual category perception in the human brain that have not yet been reported in the literature.
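The decoding approach this abstract describes can be sketched in a few lines. The snippet below is a hypothetical illustration, not the authors' code: the data are simulated, and the dimensions (trial counts, number of spectral features) are invented for the example. It shows the general pattern of training a Random Forest on spectral power features and then inspecting feature importances to find decoding-relevant features.

```python
# Hypothetical sketch of Random-Forest decoding of perceptual categories
# from spectral power features; all data and dimensions are simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features, n_categories = 400, 24, 8  # e.g., power in 24 time-frequency bins

X = rng.normal(size=(n_trials, n_features))       # stand-in for spectral power features
y = rng.integers(0, n_categories, size=n_trials)  # one of eight perceptual categories
X[np.arange(n_trials), y] += 2.0                  # inject a category-specific signal

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)         # cross-validated decoding accuracy

clf.fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:5]  # most decoding-relevant features
print(f"mean accuracy: {scores.mean():.2f}, top features: {top}")
```

Comparing `scores.mean()` against the 1/8 chance level, and ranking `feature_importances_`, mirrors the paper's two steps of decoding and then asking which spectral features carried the categorical information.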
18
Bruno NM, Embon I, Díaz Rivera MN, Giménez L, D'Amelio TA, Torres Batán S, Guarracino JF, Iorio AA, Andreau JM. Faster might not be better: Pictures may not elicit a stronger unconscious priming effect than words when modulated by semantic similarity. Conscious Cogn 2020; 81:102932. [PMID: 32298956] [DOI: 10.1016/j.concog.2020.102932]
Abstract
It has been suggested that unconscious semantic processing is stimulus-dependent, and that pictures might have privileged access to semantic content. Those findings led to the hypothesis that the unconscious semantic priming effect would be stronger for pictorial stimuli than for verbal stimuli. This effect was tested on pictures and words by manipulating the semantic similarity between the prime and target stimuli. Participants performed a masked priming categorization task for either words or pictures with three semantic similarity conditions: strongly similar, weakly similar, and non-similar. Significant differences in reaction times were found only between the strongly similar and non-similar conditions and between the weakly similar and non-similar conditions, for both pictures and words, with faster overall responses for pictures than for words. Nevertheless, pictures showed no superior priming effect over words. This suggests that even though semantic processing is faster for pictures, faster processing does not imply a stronger unconscious priming effect.
Affiliation(s)
- Nicolás Marcelo Bruno
- Universidad de Buenos Aires, Facultad de Psicología, Buenos Aires, Argentina; Universidad del Salvador, Facultad de Psicología y Psicopedagogía, Buenos Aires, Argentina; Instituto de Biología y Medicina Experimental, Laboratorio de Biología del Comportamiento, Buenos Aires, Argentina
- Iair Embon
- Universidad de Buenos Aires, Facultad de Psicología, Buenos Aires, Argentina
- Leandro Giménez
- Universidad de Buenos Aires, Facultad de Filosofía y Letras, Buenos Aires, Argentina
- Tomás Ariel D'Amelio
- Universidad de Buenos Aires, Facultad de Psicología, Buenos Aires, Argentina; Universidad del Salvador, Facultad de Psicología y Psicopedagogía, Buenos Aires, Argentina; Instituto de Biología y Medicina Experimental, Laboratorio de Biología del Comportamiento, Buenos Aires, Argentina
- Santiago Torres Batán
- Universidad del Salvador, Facultad de Psicología y Psicopedagogía, Buenos Aires, Argentina; Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Buenos Aires, Argentina
- Alberto Andrés Iorio
- Universidad de Buenos Aires, Facultad de Psicología, Buenos Aires, Argentina; Universidad del Salvador, Facultad de Psicología y Psicopedagogía, Buenos Aires, Argentina; Instituto de Biología y Medicina Experimental, Laboratorio de Biología del Comportamiento, Buenos Aires, Argentina
- Jorge Mario Andreau
- Universidad de Buenos Aires, Facultad de Psicología, Buenos Aires, Argentina; Universidad del Salvador, Facultad de Psicología y Psicopedagogía, Buenos Aires, Argentina; Instituto de Biología y Medicina Experimental, Laboratorio de Biología del Comportamiento, Buenos Aires, Argentina
19
Treviño M, Turkbey B, Wood BJ, Pinto PA, Czarniecki M, Choyke PL, Horowitz TS. Rapid perceptual processing in two- and three-dimensional prostate images. J Med Imaging (Bellingham) 2020; 7:022406. [PMID: 31930156] [DOI: 10.1117/1.jmi.7.2.022406]
Abstract
Radiologists can identify whether a radiograph is abnormal or normal at above chance levels in breast and lung images presented for half a second or less. This early perceptual processing has only been demonstrated in static two-dimensional images (e.g., mammograms). Can radiologists rapidly extract the "gist" from more complex imaging modalities? For example, prostate multiparametric magnetic resonance imaging (mpMRI) displays a series of images as a virtual stack and comprises multiple imaging sequences: anatomical information from the T2-weighted (T2W) sequence, and functional information from the diffusion-weighted imaging and apparent diffusion coefficient sequences. We first tested rapid perceptual processing in static T2W images, then in the two functional sequences. Finally, we examined whether this rapid radiological perception could be observed using T2W multislice imaging. Readers with experience in prostate mpMRI could detect and localize lesions in all sequences after viewing a 500-ms static image. Experienced prostate readers could also detect and localize lesions when viewing multislice image stacks presented as brief movies, with image slices presented at either 48, 96, or 144 ms. The ability to quickly extract the perceptual gestalt may be a general property of expert perception, even in complex imaging modalities.
Affiliation(s)
- Melissa Treviño
- National Cancer Institute, Basic Biobehavioral and Psychological Sciences Branch, Rockville, Maryland, United States
- Baris Turkbey
- National Cancer Institute, Molecular Imaging Program, Bethesda, Maryland, United States
- Bradford J Wood
- National Cancer Institute, Center for Interventional Oncology, Bethesda, Maryland, United States
- Peter A Pinto
- National Cancer Institute, Urologic Oncology Branch, Bethesda, Maryland, United States
- Marcin Czarniecki
- Georgetown University School of Medicine, Washington, DC, United States
- Peter L Choyke
- National Cancer Institute, Molecular Imaging Program, Bethesda, Maryland, United States
- Todd S Horowitz
- National Cancer Institute, Basic Biobehavioral and Psychological Sciences Branch, Rockville, Maryland, United States
20
Lang F, Schmidt A, Machulla T. Augmented Reality for People with Low Vision: Symbolic and Alphanumeric Representation of Information. Lecture Notes in Computer Science 2020. [PMCID: PMC7479791] [DOI: 10.1007/978-3-030-58796-3_19]
Abstract
Many individuals with visual impairments have residual vision that often remains underused by assistive technologies. Head-mounted augmented reality (AR) devices can provide assistance, by recoding difficult-to-perceive information into a visual format that is more accessible. Here, we evaluate symbolic and alphanumeric information representations for their efficiency and usability in two prototypical AR applications: namely, recognizing facial expressions of conversational partners and reading the time. We find that while AR provides a general benefit, the complexity of the visual representations has to be matched to the user’s visual acuity.
21
Tsotsos JK, Kotseruba I, Wloka C. Rapid visual categorization is not guided by early salience-based selection. PLoS One 2019; 14:e0224306. [PMID: 31648265] [PMCID: PMC6812801] [DOI: 10.1371/journal.pone.0224306]
Abstract
The current dominant visual processing paradigm in both human and machine research is the feedforward, layered hierarchy of neural-like processing elements. Within this paradigm, visual saliency is seen by many to have a specific role, namely that of early selection. Early selection is thought to enable very fast visual performance by limiting processing to only the most salient candidate portions of an image. This strategy has led to a plethora of saliency algorithms that have indeed improved processing time efficiency in machine algorithms, which in turn have strengthened the suggestion that human vision also employs a similar early selection strategy. However, at least one set of critical tests of this idea has never been performed with respect to the role of early selection in human vision. How would the best of the current saliency models perform on the stimuli used by experimentalists who first provided evidence for this visual processing paradigm? Would the algorithms really provide correct candidate sub-images to enable fast categorization on those same images? Do humans really need this early selection for their impressive performance? Here, we report on a new series of tests of these questions whose results suggest that it is quite unlikely that such an early selection process has any role in human rapid visual categorization.
Affiliation(s)
- John K. Tsotsos
- Department of Electrical Engineering and Computer Science, York University, Toronto, ON, Canada
- Iuliia Kotseruba
- Department of Electrical Engineering and Computer Science, York University, Toronto, ON, Canada
- Calden Wloka
- Department of Electrical Engineering and Computer Science, York University, Toronto, ON, Canada
22
de Almeida RG, Di Nardo J, Antal C, von Grünau MW. Understanding Events by Eye and Ear: Agent and Verb Drive Non-anticipatory Eye Movements in Dynamic Scenes. Front Psychol 2019; 10:2162. [PMID: 31649574] [PMCID: PMC6795699] [DOI: 10.3389/fpsyg.2019.02162]
Abstract
As Macnamara (1978) once asked, how can we talk about what we see? We report on a study manipulating realistic dynamic scenes and sentences aiming to understand the interaction between linguistic and visual representations in real-world situations. Specifically, we monitored participants' eye movements as they watched video clips of everyday scenes while listening to sentences describing these scenes. We manipulated two main variables. The first was the semantic class of the verb in the sentence and the second was the action/motion of the agent in the unfolding event. The sentences employed two verb classes, causatives (e.g., break) and perception/psychological verbs (e.g., notice), which impose different constraints on the nouns that serve as their grammatical complements. The scenes depicted events in which agents either moved toward a target object (always the referent of the verb-complement noun), away from it, or remained neutral performing a given activity (such as cooking). Scenes and sentences were synchronized such that the verb onset corresponded to the first video frame of the agent motion toward or away from the object. Results show effects of agent motion but weak verb-semantic restrictions: causatives draw more attention to potential referents of their grammatical complements than perception verbs only when the agent moves toward the target object. Crucially, we found no anticipatory verb-driven eye movements toward the target object, contrary to studies using non-naturalistic and static scenes. We propose a model in which linguistic and visual computations in real-world situations occur largely independent of each other during the early moments of perceptual input, but rapidly interact at a central, conceptual system using a common, propositional code. Implications for language use in real world contexts are discussed.
Affiliation(s)
- Julia Di Nardo
- Department of Psychology, Concordia University, Montreal, QC, Canada
- Caitlyn Antal
- Department of Psychology, Concordia University, Montreal, QC, Canada; Department of Linguistics, Yale University, New Haven, CT, United States
23
Predicting (variability of) context effects in language comprehension. Journal of Cultural Cognitive Science 2019. [DOI: 10.1007/s41809-019-00025-5]
24
Do target detection and target localization always go together? Extracting information from briefly presented displays. Atten Percept Psychophys 2019; 81:2685-2699. [PMID: 31218599] [DOI: 10.3758/s13414-019-01782-9]
Abstract
The human visual system is capable of processing an enormous amount of information in a short time. Although rapid target detection has been explored extensively, less is known about target localization. Here we used natural scenes and explored the relationship between being able to detect a target (present vs. absent) and being able to localize it. Across four presentation durations (~ 33-199 ms), participants viewed scenes taken from two superordinate categories (natural and manmade), each containing exemplars from four basic scene categories. In a two-interval forced choice task, observers were asked to detect a Gabor target inserted in one of the two scenes. This was followed by one of two different localization tasks. Participants were asked either to discriminate whether the target was on the left or the right side of the display or to click on the exact location where they had seen the target. Targets could be detected and localized at our shortest exposure duration (~ 33 ms), with a predictable improvement in performance with increasing exposure duration. We saw some evidence at this shortest duration of detection without localization, but further analyses demonstrated that these trials typically reflected coarse or imprecise localization information, rather than its complete absence. Experiment 2 replicated our main findings while exploring the effect of the level of "openness" in the scene. Our results are consistent with the notion that when we are able to extract what objects are present in a scene, we also have information about where each object is, which provides crucial guidance for our goal-directed actions.
25
Evans KK, Culpan AM, Wolfe JM. Detecting the "gist" of breast cancer in mammograms three years before localized signs of cancer are visible. Br J Radiol 2019; 92:20190136. [PMID: 31166769] [DOI: 10.1259/bjr.20190136]
Abstract
OBJECTIVES: After a 500 ms presentation, experts can distinguish abnormal mammograms at above chance levels even when only the breast contralateral to the lesion is shown. Here, we show that this signal of abnormality is detectable 3 years before localized signs of cancer become visible. METHODS: In 4 prospective studies, 59 expert observers from 3 groups viewed 116-200 bilateral mammograms for 500 ms each. Half of the images were prior exams acquired 3 years before the onset of visible, actionable cancer and half were normal. Exp. 1D included cases having visible abnormalities. Observers rated the likelihood of abnormality on a 0-100 scale and categorized breast density. Performance was measured using receiver operating characteristic (ROC) analysis. RESULTS: In all three groups, observers could detect abnormal images at above chance levels 3 years prior to visible signs of breast cancer (p < 0.001). The results were not driven by specific salient cases or by breast density. Performance was correlated with expertise, quantified as the number of mammographic cases read within a year. In Exp. 1D, with cases having visible actionable pathology included, the full group of readers failed to reliably detect abnormal priors, with the exception of a subgroup of the six most experienced observers. CONCLUSIONS: Imaging specialists can detect signals of abnormality in mammograms acquired years before lesions become visible. Detection may depend on expertise acquired by reading large numbers of cases. ADVANCES IN KNOWLEDGE: The global gist signal can serve as an imaging risk factor with the potential to identify patients at elevated risk of developing cancer, resulting in improved early cancer diagnosis rates and improved prognosis for females with breast cancer.
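The performance analysis described above (rating-based ROC analysis) can be sketched as follows. This is a minimal illustration with simulated ratings, not the study's data or code; the sample size, rating means, and spread are invented for the example, and only the analysis step follows the abstract.

```python
# Minimal sketch of rating-based ROC analysis: 0-100 abnormality ratings
# for normal images vs. "prior" exams, summarized by the area under the
# ROC curve (AUC). All ratings here are simulated.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 100                                # half normal, half abnormal-prior exams
truth = np.repeat([0, 1], n // 2)      # 0 = normal, 1 = prior exam

# Simulated ratings: a weak "gist" signal shifts abnormal-prior ratings upward.
ratings = np.where(truth == 1,
                   rng.normal(60, 20, n),
                   rng.normal(45, 20, n)).clip(0, 100)

auc = roc_auc_score(truth, ratings)    # 0.5 = chance, 1.0 = perfect detection
print(f"AUC = {auc:.2f}")
```

An AUC reliably above 0.5 is the above-chance detection criterion the abstract reports; the same computation per observer allows performance to be correlated with expertise.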
Affiliation(s)
- Karla K Evans
- Psychology Department, University of York, York, United Kingdom
- Jeremy M Wolfe
- Harvard Medical School and Brigham and Women's Hospital, Boston, MA, USA
26
Abstract
The picture-superiority effect (PSE) refers to the finding that, all else being equal, pictures are remembered better than words (Paivio & Csapo, 1973). Dual-coding theory (DCT; Paivio, 1991) is often used to explain the PSE. According to DCT, pictures are more likely to be encoded imaginally and verbally than words. In contrast, distinctiveness accounts attribute the PSE to pictures' greater distinctiveness compared to words. Some distinctiveness accounts emphasize physical distinctiveness (Mintzer & Snodgrass, 1999) while others emphasize conceptual distinctiveness (Hamilton & Geraci, 2006). We attempt to distinguish among these accounts by testing for an auditory analog of picture superiority. Although this phenomenon, termed the auditory PSE, occurs in free recall (Crutcher & Beer, 2011), we were unable to extend it to recognition across four experiments. We propose a new framework for understanding the PSE, wherein dual coding underpins the free-recall PSE, but conceptual distinctiveness underpins the recognition PSE.
Affiliation(s)
- Tyler M Ensor
- Department of Psychology, Wilfrid Laurier University, Waterloo, ONT, Canada; Department of Psychology, Memorial University of Newfoundland, St John's, Canada
- Tyler D Bancroft
- Department of Psychology, Wilfrid Laurier University, Waterloo, ONT, Canada; St. Thomas University, Fredericton, NB, Canada
- William E Hockley
- Department of Psychology, Wilfrid Laurier University, Waterloo, ONT, Canada
27
Schendan HE. Memory influences visual cognition across multiple functional states of interactive cortical dynamics. Psychology of Learning and Motivation 2019. [DOI: 10.1016/bs.plm.2019.07.007]
28
Gil-Pérez I, Rebollar R, Lidón I, Piqueras-Fiszman B, van Trijp HC. What do you mean by hot? Assessing the associations raised by the visual depiction of an image of fire on food packaging. Food Qual Prefer 2019. [DOI: 10.1016/j.foodqual.2018.08.015]
29
Brady TF, Störmer VS, Shafer-Skelton A, Williams JR, Chapman AF, Schill HM. Scaling up visual attention and visual working memory to the real world. Psychology of Learning and Motivation 2019. [DOI: 10.1016/bs.plm.2019.03.001]
30
Francis WS, Taylor RS, Gutiérrez M, Liaño MK, Manzanera DG, Penalver RM. The effects of bilingual language proficiency on recall accuracy and semantic clustering in free recall output: evidence for shared semantic associations across languages. Memory 2018; 26:1364-1378. [PMID: 29781375] [PMCID: PMC6179441] [DOI: 10.1080/09658211.2018.1476551]
Abstract
Two experiments investigated how well bilinguals utilise long-standing semantic associations to encode and retrieve semantic clusters in verbal episodic memory. In Experiment 1, Spanish-English bilinguals (N = 128) studied and recalled word and picture sets. Word recall was equivalent in L1 and L2, picture recall was better in L1 than in L2, and the picture superiority effect was stronger in L1 than in L2. Semantic clustering in word and picture recall was equivalent in L1 and L2. In Experiment 2, Spanish-English bilinguals (N = 128) and English-speaking monolinguals (N = 128) studied and recalled word sequences that contained semantically related pairs. Data were analyzed using a multinomial processing tree approach, the pair-clustering model. Cluster formation was more likely for semantically organised than for randomly ordered word sequences. Probabilities of cluster formation, cluster retrieval, and retrieval of unclustered items did not differ across languages or language groups. Language proficiency has little if any impact on the utilisation of long-standing semantic associations, which are language-general.
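Semantic clustering in free-recall output, as studied above, is commonly quantified with an index such as the Adjusted Ratio of Clustering (ARC; Roenker, Thompson & Brown, 1971). Whether this particular index matches the study's analysis is an assumption on our part; the paper's multinomial pair-clustering model is more elaborate. The sketch below implements the standard ARC formula on a recall sequence of category labels.

```python
# Hedged sketch of the Adjusted Ratio of Clustering (ARC), a standard
# semantic-clustering index for free-recall output. ARC = 1 for perfect
# clustering, 0 at chance, negative for below-chance clustering.
def arc_score(recall_categories):
    """recall_categories: category label of each recalled item, in output order."""
    n = len(recall_categories)
    # R: observed category repetitions (adjacent items from the same category)
    r = sum(a == b for a, b in zip(recall_categories, recall_categories[1:]))
    counts = {}
    for c in recall_categories:
        counts[c] = counts.get(c, 0) + 1
    e_r = sum(v * v for v in counts.values()) / n - 1  # chance-expected repetitions
    max_r = n - len(counts)                            # repetitions under perfect clustering
    return (r - e_r) / (max_r - e_r)

# A perfectly clustered recall sequence yields ARC = 1.0.
print(arc_score(["fruit", "fruit", "tool", "tool", "animal", "animal"]))
```

Applied per participant and per language, an index like this supports the kind of cross-language comparison of semantic clustering reported in the abstract.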
Affiliation(s)
- Wendy S Francis
- Department of Psychology, University of Texas at El Paso, El Paso, TX, USA
- Randolph S Taylor
- Department of Psychology, University of Texas at El Paso, El Paso, TX, USA
- Marisela Gutiérrez
- Department of Psychology, University of Texas at El Paso, El Paso, TX, USA
- Mary K Liaño
- Department of Psychology, University of Texas at El Paso, El Paso, TX, USA
- Diana G Manzanera
- Department of Psychology, University of Texas at El Paso, El Paso, TX, USA
- Renee M Penalver
- Department of Psychology, University of Texas at El Paso, El Paso, TX, USA
31
Meaningful inhibition: Exploring the role of meaning and modality in response inhibition. Neuroimage 2018; 181:108-119. [DOI: 10.1016/j.neuroimage.2018.06.074]
32
Does "a picture is worth 1000 words" apply to iconic Chinese words? Relationship of Chinese words and pictures. Sci Rep 2018; 8:8289. [PMID: 29844332] [PMCID: PMC5974396] [DOI: 10.1038/s41598-018-25885-9]
Abstract
The meaning of a picture can be extracted rapidly, but the form-to-meaning relationship is less obvious for printed words. In contrast to English words, which follow the grapheme-to-phoneme correspondence rule, the iconic nature of Chinese words might predispose them to activate their semantic representations more directly from their orthographies. By using the paradigm of repetition blindness (RB), which taps into the early level of word processing, we examined whether Chinese words activate their semantic representations as directly as pictures do. RB refers to the failure to detect the second occurrence of an item when it is presented twice in temporal proximity. Previous studies showed RB for semantically related pictures, suggesting that pictures activate their semantic representations directly from their shapes and thus two semantically related pictures are represented as repeated. However, this does not apply to English words, since no RB was found for English synonyms. In this study, we replicated the semantic RB effect for pictures, and further showed the absence of semantic RB for Chinese synonyms. Based on our findings, it is suggested that Chinese words are processed like English words, which do not activate their semantic representations as directly as pictures do.
33
Abstract
Two principal types of account of repetition priming postulate either facilitation of activation of perceptual representations used in stimulus recognition, or retrieval of specific processing episodes as possible mechanisms by which the effect occurs; these make different predictions concerning the priming of two stimuli presented simultaneously. In Experiments 1–3, subjects made same/different decisions about picture-word stimulus pairs. Recombining the pairings of a subset of items between training and test encounters did not significantly reduce the benefit in response time from repetition, as compared to pairs repeated intact. Subjects were able to remember the pairings (Experiment 4), but this did not influence repetition priming. Instead, the memory representations underlying the priming of each item in a pair were independent. No priming was found between pictures seen at training and words at test, and vice versa (Experiment 5), indicating that representations underlying the repetition effect were domain-specific. In Experiment 6, stimuli were all from within the domain of object pictures. Again, recombining the pairings of items between training and test did not significantly reduce the benefit in response time from repetition, as compared to pairs repeated intact. These results reveal an item-specific locus for repetition priming, consistent with priming occurring within pre-semantic perceptual representation systems involved in item recognition. The findings pose problems for theories that argue that repetition effects result only from retrieval of entire processing episodes.
Affiliation(s)
- Michael P. Dean
- University of Durham, Durham, U.K.; MRC Applied Psychology Unit, Cambridge, U.K.
34
Hanley JRJ, Pearson NA, Howard LA. The Effects of Different Types of Encoding Task on Memory for Famous Faces and Names. 2018. [DOI: 10.1080/14640749008401247]
Abstract
In this study, incidental memory for familiar faces following different types of encoding task was investigated. Subjects who had been asked to name faces of celebrities at presentation subsequently remembered them significantly better than subjects who had been asked to provide contextual information about the faces, and better than subjects who had been asked to distinguish them from unfamiliar faces. This effect persisted regardless of whether the tests required memory for names, faces, or biographical information. It is argued that these results can be explained in terms of the face-processing framework of Bruce and Young (1986) and the theory of episodic memory for faces put forward by Bruce (1982, 1988). However, the findings are consistent neither with levels of processing (Craik & Lockhart, 1972) nor with transfer-appropriate processing (Morris, Bransford, & Franks, 1977).
35
Abstract
American and Israeli bilinguals were presented with Hebrew and English words, as well as with drawings of concrete objects. They were timed as they named and classified these items in either their native or non-native languages. The pattern of response times suggests that, following reading, semantic processing is conducted in a language-independent mode. Specifically, the patterns exhibited by the two subject groups were almost identical, and neither a native-language advantage nor an advantage from a match between the language of input and the language of output was found.
Affiliation(s)
- Benny Shanon
- Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel
36
Barry C, Morrison CM, Ellis AW. Naming the Snodgrass and Vanderwart Pictures: Effects of Age of Acquisition, Frequency, and Name Agreement. ACTA ACUST UNITED AC 2018. [DOI: 10.1080/783663595] [Citation(s) in RCA: 140] [Impact Index Per Article: 23.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
Abstract
Independent measures of age of acquisition (AoA), name agreement, and rated object familiarity were obtained from groups of British subjects for all items in the Snodgrass and Vanderwart (1980) picture set with single names. Word frequency measures, both written and spoken, were taken from the Celex database (Centre for Lexical Information, 1993). The line drawings were presented to a separate group of participants in an object naming task, and vocal naming latencies were recorded. A subset of 195 items was selected for analysis after excluding items with, for example, low name agreement. The major determinants of picture naming speed were the frequency of the name, the interaction between AoA and frequency, and name agreement. (The main effect of the AoA of the name and the effect of the rated image agreement of the picture were also significant on one-tailed tests.) Spoken name frequency affects object naming times mainly for items with later-acquired names.
37
Stuck on semantics: Processing of irrelevant object-scene inconsistencies modulates ongoing gaze behavior. Atten Percept Psychophys 2017; 79:154-168. [PMID: 27645215 DOI: 10.3758/s13414-016-1203-7] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
People have an amazing ability to identify objects and scenes with only a glimpse. How automatic is this scene and object identification? Are scene and object semantics, let alone their semantic congruity, processed to a degree that modulates ongoing gaze behavior even if they are irrelevant to the task at hand? Objects that do not fit the semantics of the scene (e.g., a toothbrush in an office) are typically fixated longer and more often than objects that are congruent with the scene context. In this study, we overlaid a letter T onto photographs of indoor scenes and instructed participants to search for it. Some of these background images contained scene-incongruent objects. Despite their lack of relevance to the search, we found that participants spent more time in total looking at semantically incongruent objects than at congruent objects in the same position of the scene. Subsequent tests of explicit and implicit memory showed that participants did not remember many of the incongruent objects, and no more of the congruent ones. We argue that when we view natural environments, scene and object relationships are processed obligatorily, such that irrelevant semantic mismatches between scene and object identity can modulate ongoing eye-movement behavior.
38
Pancer E, Poole M. The popularity and virality of political social media: hashtags, mentions, and links predict likes and retweets of 2016 U.S. presidential nominees’ tweets. SOCIAL INFLUENCE 2016. [DOI: 10.1080/15534510.2016.1265582] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Affiliation(s)
- Ethan Pancer
- Department of Marketing, Sobey School of Business, Saint Mary’s University, Halifax, Canada
- Maxwell Poole
- Department of Marketing, Sobey School of Business, Saint Mary’s University, Halifax, Canada
39
Nakashima R, Komori Y, Maeda E, Yoshikawa T, Yokosawa K. Temporal Characteristics of Radiologists' and Novices' Lesion Detection in Viewing Medical Images Presented Rapidly and Sequentially. Front Psychol 2016; 7:1553. [PMID: 27774080 PMCID: PMC5054019 DOI: 10.3389/fpsyg.2016.01553] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2016] [Accepted: 09/22/2016] [Indexed: 11/13/2022] Open
Abstract
Although viewing multiple stacks of medical images presented on a display is a relatively new but useful medical task, little is known about it. In particular, it is unclear how radiologists search for lesions in this type of image reading. When viewing cluttered and dynamic displays, continuous motion itself does not capture attention. Target detection therefore benefits when observers' attention is captured by the onset signal of a target appearing suddenly among continuously moving distractors (i.e., a passive viewing strategy). This can be applied to stack viewing tasks, because lesions often show up as transient signals in medical images that are presented sequentially, simulating a dynamic, smoothly transforming progression of organ images. However, it is unclear whether observers can detect a target when it appears at the beginning of a sequential presentation, where the global apparent-motion onset signal (i.e., the signal of the initiation of apparent motion by sequential presentation) occurs. We investigated the ability of radiologists to detect lesions during such tasks by comparing the performance of radiologists and novices. Results show that the overall performance of radiologists is better than that of novices. Furthermore, the temporal location of lesions in CT image sequences, i.e., when a lesion appears in an image sequence, does not affect the performance of radiologists, whereas it does affect the performance of novices. Novices have greater difficulty detecting a lesion that appears early rather than late in the image sequence. We suggest that radiologists have mechanisms, which novices lack, for detecting lesions in medical images with little attention. This ability is critically important when viewing rapid sequential presentations of multiple CT images, such as in stack viewing tasks.
Affiliation(s)
- Yuya Komori
- Department of Psychology, The University of Tokyo, Tokyo, Japan
- Eriko Maeda
- The University of Tokyo Hospital, Tokyo, Japan
40
Endress AD, Siddique A. The cost of proactive interference is constant across presentation conditions. Acta Psychol (Amst) 2016; 170:186-94. [PMID: 27565246 DOI: 10.1016/j.actpsy.2016.08.001] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2015] [Revised: 06/23/2016] [Accepted: 08/01/2016] [Indexed: 11/19/2022] Open
Abstract
Proactive interference (PI) severely constrains how many items people can remember. For example, Endress and Potter (2014a) presented participants with sequences of everyday objects at 250 ms/picture, followed by a yes/no recognition test. They manipulated PI by either using new images on every trial in the unique condition (thus minimizing PI among items), or by re-using images from a limited pool for all trials in the repeated condition (thus maximizing PI among items). In the low-PI unique condition, the probability of remembering an item was essentially independent of the number of memory items, showing no clear memory limitations; more traditional working memory-like memory limitations appeared only in the high-PI repeated condition. Here, we ask whether the effects of PI are modulated by the availability of long-term memory (LTM) and verbal resources. Participants viewed sequences of 21 images, followed by a yes/no recognition test. Items were presented either quickly (250 ms/image) or sufficiently slowly (1500 ms/image) to produce LTM representations, either with or without verbal suppression. Across conditions, participants performed better in the unique than in the repeated condition, and better for slow than for fast presentations. In contrast, verbal suppression impaired performance only with slow presentations. The relative cost of PI was remarkably constant across conditions: relative to the unique condition, performance in the repeated condition was about 15% lower in all conditions. The cost of PI thus seems to be a function of the relative strength or recency of target items and interfering items, but relatively insensitive to other experimental manipulations.
41
Abstract
The existence of a central fovea, the small retinal region with high analytical performance, is arguably the most prominent design feature of the primate visual system. This centralization comes along with the corresponding capability to move the eyes to reposition the fovea continuously. Past research on visual perception was mainly concerned with foveal vision while the observers kept their eyes stationary. Research on the role of eye movements in visual perception emphasized their negative aspects, for example, the active suppression of vision before and during the execution of saccades. But is the only benefit of our precise eye movement system to provide high acuity of the small foveal region, at the cost of retinal blur during their execution? In this review, I will compare human visual perception with and without saccadic and smooth pursuit eye movements to emphasize different aspects and functions of eye movements. I will show that the interaction between eye movements and visual perception is optimized for the active sampling of information across the visual field and for the calibration of different parts of the visual field. The movements of our eyes and visual information uptake are intricately intertwined. The two processes interact to enable an optimal perception of the world, one that we cannot fully grasp by doing experiments where observers are fixating a small spot on a display.
42
Kobayashi S. Theoretical Issues concerning Superiority of Pictures over Words and Sentences in Memory. Percept Mot Skills 2016. [DOI: 10.2466/pms.1986.63.2.783] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Four recent theories concerning the superiority of pictures in memory were reviewed: the basic assumptions and problems of dual-coding theory, the levels-of-processing view, the sensory-semantic model, and propositional theories were discussed and evaluated. Problems and further tasks in memory for pictures were then discussed, and suggestions for theoretical development were offered.
43
A half-second glimpse often lets radiologists identify breast cancer cases even when viewing the mammogram of the opposite breast. Proc Natl Acad Sci U S A 2016; 113:10292-7. [PMID: 27573841 DOI: 10.1073/pnas.1606187113] [Citation(s) in RCA: 51] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Humans are very adept at extracting the "gist" of a scene in a fraction of a second. We have found that radiologists can discriminate normal from abnormal mammograms at above-chance levels after a half-second viewing (d' ∼ 1) but are at chance in localizing the abnormality. This pattern of results suggests that they are detecting a global signal of abnormality. What are the stimulus properties that might support this ability? We investigated the nature of the gist signal in four experiments by asking radiologists to make detection and localization responses about briefly presented mammograms in which the spatial frequency, symmetry, and/or size of the images was manipulated. We show that the signal is stronger in the higher spatial frequencies. Performance does not depend on detection of breaks in the normal symmetry of left and right breasts. Moreover, above-chance classification is possible using images from the normal breast of a patient with overt signs of cancer only in the other breast. Some signal is present in the portions of the parenchyma (breast tissue) that do not contain a lesion or that are in the contralateral breast. This signal does not appear to be a simple assessment of breast density but rather the detection of the abnormal gist may be based on a widely distributed image statistic, learned by experts. The finding that a global signal, related to disease, can be detected in parenchyma that does not contain a lesion has implications for improving breast cancer detection.
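The sensitivity index d' reported above is standard signal-detection arithmetic: the z-transform of the hit rate minus the z-transform of the false-alarm rate. As a minimal sketch (the rates below are illustrative, not values from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Illustrative rates: a hit rate of .69 against a false-alarm rate of .31
# yields d' close to 1, the above-chance level the authors report.
sensitivity = d_prime(0.69, 0.31)
```

When hit and false-alarm rates are equal (performance at chance), d' is 0; rates at floor or ceiling (0 or 1) are undefined under this transform and are conventionally adjusted before computing.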
44
Conceptual short-term memory (CSTM) supports core claims of Christiansen and Chater. Behav Brain Sci 2016; 39:e88. [PMID: 27561216 DOI: 10.1017/s0140525x15000928] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Rapid serial visual presentation (RSVP) of words or pictured scenes provides evidence for a large-capacity conceptual short-term memory (CSTM) that momentarily provides rich associated material from long-term memory, permitting rapid chunking (Potter 1993; 2009; 2012). In perception of scenes as well as language comprehension, we make use of knowledge that briefly exceeds the supposed limits of working memory.
45
Doctor EA, Ahmed R, Ainslee V, Cronje T, Klein D, Knight S. Cognitive Aspects of Bilingualism. Part 2. Internal Representation. SOUTH AFRICAN JOURNAL OF PSYCHOLOGY 2016. [DOI: 10.1177/008124638701700205] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
In this article we review and evaluate different explanations of the structure of the bilingual lexicon and of the internal representation of the two languages. We present evidence for and against shared or separate internal representation, and consider different conceptual models of bilingualism. We propose a theoretical model to explain the manner in which bilinguals process information. The theory extends the information-processing model developed in the context of unilingual performance and this model may be able to explain some of the discrepancies concerning the psychology of bilingualism, which have been reported in the literature.
Affiliation(s)
- Estelle A. Doctor
- School of Psychology, University of the Witwatersrand, 1 Jan Smuts Avenue, Johannesburg 2001, Republic of South Africa
- Rashid Ahmed
- School of Psychology, University of the Witwatersrand, 1 Jan Smuts Avenue, Johannesburg 2001, Republic of South Africa
- Vanessa Ainslee
- School of Psychology, University of the Witwatersrand, 1 Jan Smuts Avenue, Johannesburg 2001, Republic of South Africa
- Tessa Cronje
- School of Psychology, University of the Witwatersrand, 1 Jan Smuts Avenue, Johannesburg 2001, Republic of South Africa
- Denise Klein
- School of Psychology, University of the Witwatersrand, 1 Jan Smuts Avenue, Johannesburg 2001, Republic of South Africa
- Suzette Knight
- School of Psychology, University of the Witwatersrand, 1 Jan Smuts Avenue, Johannesburg 2001, Republic of South Africa
46
Abstract
Sensitivity to temporal change places fundamental limits on object processing in the visual system. An emerging consensus from the behavioral and neuroimaging literature suggests that temporal resolution differs substantially for stimuli of different complexity and for brain areas at different levels of the cortical hierarchy. Here, we used steady-state visually evoked potentials to directly measure three fundamental parameters that characterize the underlying neural response to text and face images: temporal resolution, peak temporal frequency, and response latency. We presented full-screen images of text or a human face, alternated with a scrambled image, at temporal frequencies between 1 and 12 Hz. These images elicited a robust response at the first harmonic that showed differential tuning, scalp topography, and delay for the text and face images. Face-selective responses were maximal at 4 Hz, but text-selective responses, by contrast, were maximal at 1 Hz. The topography of the text image response was strongly left-lateralized at higher stimulation rates, whereas the response to the face image was slightly right-lateralized but nearly bilateral at all frequencies. Both text and face images elicited steady-state activity at more than one apparent latency; we observed early (141-160 msec) and late (>250 msec) text- and face-selective responses. These differences in temporal tuning profiles are likely to reflect differences in the nature of the computations performed by word- and face-selective cortex. Despite the close proximity of word- and face-selective regions on the cortical surface, our measurements demonstrate substantial differences in the temporal dynamics of word- versus face-selective responses.
47
Abstract
Pictured objects and scenes can be understood in a brief glimpse, but there is a debate about whether they are first encoded at the basic level (e.g., banana), as proposed by Rosch et al. (1976, Cognitive Psychology), or at a superordinate level (e.g., fruit). The level at which we first categorize an object matters in everyday situations because it determines whether we approach, avoid, or ignore the object. In the present study, we limited stimulus duration in order to explore the earliest level of object understanding. Target objects were presented among five other pictures using rapid serial visual presentation (RSVP) at 80, 53, 27, or 13 ms/picture. On each trial, participants viewed or heard 1 of 28 superordinate names or a corresponding basic-level name of the target. The name appeared before or after the picture sequence. Detection (as d') improved as duration increased but was significantly above chance in all conditions and for all durations. When the name was given before the sequence, d' was higher for the basic than for the superordinate name, showing that specific advance information facilitated visual encoding. In the name-after group, performance on the two category levels did not differ significantly; this suggests that encoding had occurred at the basic level during presentation, allowing the superordinate category to be inferred. We interpret the results as consistent with the claim that the basic level is usually the entry level for object perception.
48
Bowers JS, Vigliocco G, Stadthagen-Gonzalez H, Vinson D. Distinguishing Language from Thought: Experimental Evidence That Syntax Is Lexically Rather Than Conceptually Represented. Psychol Sci 2016. [DOI: 10.1111/1467-9280.00160] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
It is generally assumed that syntax is represented linguistically rather than conceptually, consistent with the more general view that language and thought are coded separately. This claim is widely defended on logical grounds, but it has received little experimental support. In the present study, we asked Spanish and English speakers to make semantic and syntactic categorizations for pictures and their corresponding names. Consistent with past results, latencies to semantically categorize pictures and words were similar. The new finding is that participants were faster to make syntactic decisions for words compared with pictures, suggesting that syntactic features such as grammatical gender and the count-mass distinction are more closely linked to lexical than conceptual representations.
49
Chrysikou EG, Motyka K, Nigro C, Yang SI, Thompson-Schill SL. Functional Fixedness in Creative Thinking Tasks Depends on Stimulus Modality. ACTA ACUST UNITED AC 2016; 10:425-435. [PMID: 28344724 DOI: 10.1037/aca0000050] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Pictorial examples during creative thinking tasks can lead participants to fixate on these examples and reproduce their elements even when yielding suboptimal creative products. Semantic memory research may illuminate the cognitive processes underlying this effect. Here, we examined whether pictures and words differentially influence access to semantic knowledge for object concepts depending on whether the task is close- or open-ended. Participants viewed either names or pictures of everyday objects, or a combination of the two, and generated common, secondary, or ad hoc uses for them. Stimulus modality effects were assessed quantitatively through reaction times and qualitatively through a novel coding system, which classifies creative output on a continuum from top-down-driven to bottom-up-driven responses. Both analyses revealed differences across tasks. Importantly, for ad hoc uses, participants exposed to pictures generated more top-down-driven responses than those exposed to object names. These findings have implications for accounts of functional fixedness in creative thinking, as well as theories of semantic memory for object concepts.
50
Taikh A, Hargreaves IS, Yap MJ, Pexman PM. Semantic classification of pictures and words. Q J Exp Psychol (Hove) 2015; 68:1502-18. [DOI: 10.1080/17470218.2014.975728] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Affiliation(s)
- Alex Taikh
- Department of Psychology, University of Calgary, Calgary, AB, Canada
- Ian S. Hargreaves
- Department of Psychology, University of Calgary, Calgary, AB, Canada
- Melvin J. Yap
- Department of Psychology, National University of Singapore, Singapore
- Penny M. Pexman
- Department of Psychology, University of Calgary, Calgary, AB, Canada