1. Huang Z, Du Y, Xu F, Hu C. How Does the Horizontal Position of Pictures and Text Affect Product Evaluation? Based on Left and Right Position Effect. Front Psychol 2022; 13:841480. PMID: 35899009; PMCID: PMC9311378; DOI: 10.3389/fpsyg.2022.841480.
Abstract
Because products cannot be touched in online shopping environments, images and text descriptions, the two main forms of product information display, are important cues consumers use to evaluate products. However, few studies have examined the joint effects of image and text information on consumers. In the present study, drawing on the left-right position effect, we test the expectation that the horizontal placement of visual stimuli strongly influences consumers' product evaluation preferences. This assumption rests on consumers' unconscious psychological need for closure when processing information. The authors conducted three studies to investigate the relative effects of image information and text statements at different locations on online shopping pages on consumer product evaluations. The results show that: (1) when the evaluation object is a search product, a left image-right text layout influences consumer product evaluation more strongly than a left text-right image layout; for experiential products the pattern is reversed, with the left text-right image layout having the stronger effect on consumers' evaluation preferences (Study 1 and Study 3). (2) The difference in consumers' evaluations across presentation orders, contingent on product attributes, is driven by visual information processing fluency (Study 2). These preferences are robust; notably, the order of image-text presentation by itself has no significant influence on consumer product evaluation preference.
Affiliation(s)
- Zan Huang: School of Management, Jinan University, Guangzhou, China; Research Institute on Brand Innovation and Development of Guangzhou, Guangzhou, China
- Yingjue Du: School of Management, Jinan University, Guangzhou, China
- Feifei Xu: School of Management, Jinan University, Guangzhou, China
- Chuming Hu (corresponding author): School of Management, Jinan University, Guangzhou, China
2. Tillman KA, Fukuda E, Barner D. Children gradually construct spatial representations of temporal events. Child Dev 2022; 93:1380-1397. PMID: 35560030; DOI: 10.1111/cdev.13780.
Abstract
English-speaking adults often recruit a "mental timeline" to represent events from left-to-right (LR), but its developmental origins are debated. Here, we test whether preschoolers prefer ordered linear representations of events and whether they prefer culturally conventional directions. English-speaking adults (n = 85) and 3- to 5-year-olds (n = 513; 50% female; ~47% white, ~35% Latinx, ~18% other; tested 2016-2018) were told three-step stories and asked to choose which of two image sequences best illustrated them. We found that 3- and 4-year-olds chose ordered over unordered sequences, but preferences between directions did not emerge until at least age 5. Together, these results show that children conceptualize time linearly early in development but gradually acquire directional preferences (e.g., for LR).
Affiliation(s)
- Katharine A Tillman: Department of Psychology, The University of Texas at Austin, Austin, Texas, USA
- Eren Fukuda: Department of Psychology, University of Wisconsin-Madison, Madison, Wisconsin, USA
- David Barner: Department of Psychology and Department of Linguistics, University of California, San Diego, San Diego, California, USA
3. How head and visual movements affect evaluations of food products. Atten Percept Psychophys 2021; 84:583-598. PMID: 34881422; DOI: 10.3758/s13414-021-02399-7.
Abstract
Many studies suggest that specific movements or postures with shared social meaning can influence the evaluation of mainly verbal stimuli. On the other hand, several visuospatial biases can interact with this influence. We therefore tested whether both head movements and stimulus movements can influence individual attitudes towards food pictures. In two experiments, we used images of common foods with a weakly positive valence in association with two kinds of movement. In Experiment 1, head movement was induced by presenting food pictures moving continuously, vertically or horizontally, on a computer screen. Conversely, Experiment 2 tested the effects of participants' own head movements while the same food pictures were presented in a fixed position. In neither case did head movements influence product evaluation. However, Experiment 1 revealed that continuous left-right-left movement in the horizontal condition increased the desire to buy and eat the product shown, as well as the willingness to pay for it. Two further experiments, Experiments 3 and 4, demonstrated, respectively, that this effect disappears if the stimulus does not make the return movement, and that it does not depend on the starting or final position of the images on the screen. These findings are discussed in the context of embodied cognition and visuospatial bias theories.
4. von Hecker U, Arjmandi Lari Z, Fazilat-Pour M, Krumpholtz L. Attribution of feature magnitudes is influenced by trained reading-writing direction. Journal of Cognitive Psychology 2021. DOI: 10.1080/20445911.2021.1978472.
5. Castelain T, Van der Henst JB. The Influence of Language on Spatial Reasoning: Reading Habits Modulate the Formulation of Conclusions and the Integration of Premises. Front Psychol 2021; 12:654266. PMID: 34079496; PMCID: PMC8165199; DOI: 10.3389/fpsyg.2021.654266.
Abstract
In the present study, we explore how reading habits (e.g., reading from left to right in French or reading from right to left in Arabic) influence the scanning and the construction of mental models in spatial reasoning. For instance, when participants are given a problem like A is to the left of B; B is to the left of C, what is the relation between A and C? They are assumed to construct the model: A B C. If reading habits influence the scanning process, then readers of French should inspect models from left to right, whereas readers of Arabic should inspect them from right to left. The prediction following this analysis is that readers of French should be more inclined to produce "left" conclusions (i.e., A is to the left of C), whereas readers of Arabic should be more inclined to produce "right" conclusions (i.e., C is to the right of A). Furthermore, one may expect that readers of French show a greater ease in constructing models following a left-to-right direction than models following a right-to-left direction, whereas an opposite pattern might be expected for readers of Arabic. We tested these predictions in two experiments involving French and Yemeni participants. Experiment 1 investigated the formulation of conclusions from spatial premises, and Experiment 2, which was based on non-linguistic stimuli, examined the time required to construct mental models from left to right and from right to left. Our results show clear differences between the two groups. As expected, the French sample showed a strong left-to-right bias, but the Yemeni sample did not show the reverse bias. Results are discussed in terms of cultural influences and universal mechanisms.
Affiliation(s)
- Thomas Castelain: Center for Cognitive Sciences, University of Neuchâtel, Neuchâtel, Switzerland
- Jean-Baptiste Van der Henst: Trajectoires Team, Centre de Recherche en Neurosciences de Lyon, CNRS UMR 5292, Inserm UMR-S 1028, Université Lyon 1, Lyon, France
6. von Hecker U, Klauer KC. Are Rank Orders Mentally Represented by Spatial Arrays? Front Psychol 2021; 12:613186. PMID: 33959068; PMCID: PMC8093380; DOI: 10.3389/fpsyg.2021.613186.
Abstract
The present contribution argues that transitive reasoning, as exemplified in paradigms of linear order construction in mental space, is associated with spatial effects. Building on robust findings from the early 1970s, research has widely discussed the symbolic distance effect (SDE): after studying pairs of relations, e.g., "A > B," "B > C," and "D > E," participants respond more accurately, and more quickly when correct, the wider the "distance" between two elements within the chain A > B > C > D > E. The SDE has often been given spatial interpretations, but non-spatial models of the effect remain equally viable on the empirical evidence to date, so the question of spatial contributions to the construction of analog representations of rank orders is still open. We suggest here that laterality effects can supply the additional information needed to support the idea of spatial processes. We introduce anchoring effects, that is, response advantages for congruent over incongruent pairings of the presentation location on a screen with the hypothetical spatial arrangement of the order in mental space. We report pertinent findings and discuss anchoring paradigms with respect to their internal validity and their roots in basic mechanisms of trained reading/writing direction.
7. Talking with hands: body representation in British Sign Language users. Exp Brain Res 2021; 239:731-744. PMID: 33392694; DOI: 10.1007/s00221-020-06013-4.
Abstract
Body representation (BR) refers to the mental representation of motor, sensory, emotional and semantic information about the physical body. We use this cognitive representation continuously in everyday life, even though most of the time we do not appreciate it consciously. In some cases, BR is vital for communication. A crucial feature of signed languages (SLs), for instance, is that body parts such as the hands are used to communicate. Nevertheless, little is known about BR in SL users: does the communicative function of the body override its physical constraints? Here, we explored this question by comparing twelve British Sign Language (BSL) learners to seventeen tango dancers (body expertise, but not for communication) and fourteen control subjects (no special body expertise). We administered the Body Esteem Scale (BES), the Hand Laterality Task (HLT) and Mental Motor Chronometry (MMC), together with ad hoc control tasks to control for visual imagery. No parameter differentiated SL users from the other groups, whereas the more implicit parameters clearly distinguished tango dancers from controls. Importantly, neither the visual imagery tasks nor the BES revealed differences. Our findings offer initial evidence that linguistic use of the body does not necessarily influence the components of body representation we explored.
8. White PA. Body, head, and gaze orientation in portraits: Effects of artistic medium, date of execution, and gender. Laterality 2020; 25:292-324. DOI: 10.1080/1357650x.2019.1684935.
9. Boiteau TW, Smith C, Almor A. Rightward directional bias in art produced by cultures without a written language. Laterality 2020; 25:165-176. PMID: 31242803; DOI: 10.1080/1357650x.2019.1635613.
Abstract
In this study, we coded rock art from southern Africa, painted with a mixture of ochre, blood, and clay by the San, a Neolithic culture with no written language. These images depict humans and animals in a variety of contexts, including (but not limited to) hunts and dances. We calculated a laterality index for the available art from each region and found that, although the direction of the laterality scores varied across regions, most regions contained a majority of figures facing rightward. This stands in stark contrast with reports of artists drawing leftward-facing animal and human profiles (an effect influenced by native writing system direction, gender, and handedness), though interestingly our sample also contained regions with strong leftward biases. Our results accord, however, with studies reporting that people prefer images depicting left-to-right motion, as well as with the left-to-right bias in depicting transitive actions, an effect that seems to result from greater right-hemispheric activation in scene processing and interpretation. Thus, this study shows that in the absence of a writing system, right-lateralized neural architecture may guide the hands of artists.
Affiliation(s)
- Timothy W Boiteau: Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, US
- Cameron Smith: Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, US
- Amit Almor: Department of Psychology, Institute for Mind and Brain, and Linguistics Program, University of South Carolina, Columbia, US
10. Burns P, McCormack T, Jaroslawska AJ, O'Connor PA, Caruso EM. Time Points: A Gestural Study of the Development of Space-Time Mappings. Cogn Sci 2019; 43:e12801. PMID: 31858631; PMCID: PMC6916177; DOI: 10.1111/cogs.12801.
Abstract
Human languages typically employ a variety of spatial metaphors for time (e.g., “I'm looking forward to the weekend”). The metaphorical grounding of time in space is also evident in gesture. The gestures that are performed when talking about time bolster the view that people sometimes think about regions of time as if they were locations in space. However, almost nothing is known about the development of metaphorical gestures for time, despite keen interest in the origins of space–time metaphors. In this study, we examined the gestures that English‐speaking 6‐to‐7‐year‐olds, 9‐to‐11‐year‐olds, 13‐to‐15‐year‐olds, and adults produced when talking about time. Participants were asked to explain the difference between pairs of temporal adverbs (e.g., “tomorrow” versus “yesterday”) and to use their hands while doing so. There was a gradual increase across age groups in the propensity to produce spatial metaphorical gestures when talking about time. However, even a substantial majority of 6‐to‐7‐year‐old children produced a spatial gesture on at least one occasion. Overall, participants produced fewer gestures in the sagittal (front‐back) axis than in the lateral (left‐right) axis, and this was particularly true for the youngest children and adolescents. Gestures that were incongruent with the prevailing norms of space–time mappings among English speakers (leftward and backward for past; rightward and forward for future) gradually decreased with increasing age. This was true for both the lateral and sagittal axis. This study highlights the importance of metaphoricity in children's understanding of time. It also suggests that, by 6 to 7 years of age, culturally determined representations of time have a strong influence on children's spatial metaphorical gestures.
Affiliation(s)
- Eugene M Caruso: Anderson School of Management, University of California, Los Angeles
11. Autry KS, Jordan TM, Girgis H, Falcon RG. The Development of Young Children's Mental Timeline in Relation to Emergent Literacy Skills. Journal of Cognition and Development 2019. DOI: 10.1080/15248372.2019.1664550.
12. Where words meet numbers: Comprehension of measurement unit terms in posterior cortical atrophy. Neuropsychologia 2019; 131:216-222. PMID: 31095931; DOI: 10.1016/j.neuropsychologia.2019.05.004.
Abstract
Units of measurement (e.g., metre, week, gram) are critically important concepts in everyday life. Little is known about how knowledge of units is represented in the brain or how this relates to other forms of semantic knowledge. As unit terms are intimately connected with numerical quantity, we might expect knowledge for these concepts to be supported by parietally-mediated representations of space, time and magnitude. We investigated knowledge for measurement units in patients with posterior cortical atrophy (PCA), who display profound impairments of spatial and numerical cognition associated with occipital and parietal lobe atrophy. Relative to healthy controls, PCA patients displayed impairments for a range of unit-based knowledge, including the ability to specify the dimension which a unit refers to (e.g., grams measure mass), to select the appropriate units to measure everyday quantities (e.g., grams for sugar) and to determine the relative magnitudes of different unit terms (e.g., gram is smaller than kilogram). In most cases, their performance was also significantly poorer than a patient control group diagnosed with typical Alzheimer's disease. Our results suggest that impairment to systems that code numerical and spatial magnitudes has an effect on non-numerical verbal knowledge for measurement units. Units of measurement appear to lie at the intersection of the brain's verbal and numerical semantic systems, making them a critical class of concepts in which to investigate how magnitude-based codes contribute to verbal semantic representation.
13. Kranjec A, Lehet M, Woods AJ, Chatterjee A. Time Is Not More Abstract Than Space in Sound. Front Psychol 2019; 10:48. PMID: 30774606; PMCID: PMC6367220; DOI: 10.3389/fpsyg.2019.00048.
Abstract
Time is talked about in terms of space more frequently than the other way around. Some have suggested that this asymmetry runs deeper than language. The idea that we think about abstract domains (like time) in terms of relatively more concrete domains (like space) but not vice versa can be traced to Conceptual Metaphor Theory. This theoretical account has some empirical support. Previous experiments suggest an embodied basis for space-time asymmetries that runs deeper than language. However, these studies frequently involve verbal and/or visual stimuli. Because vision makes a privileged contribution to spatial processing it is unclear whether these results speak to a general asymmetry between time and space based on each domain's general level of relative abstractness, or reflect modality-specific effects. The present study was motivated by this uncertainty and what appears to be audition's privileged contribution to temporal processing. In Experiment 1, using an auditory perceptual task, temporal duration and spatial displacement were shown to be mutually contagious. Irrelevant temporal information influenced spatial judgments and vice versa with a larger effect of time on space. Experiment 2 examined the mutual effects of space, time, and pitch. Pitch was investigated because it is a fundamental characteristic of sound perception. It was reasoned that if space is indeed less relevant to audition than time, then spatial distance judgments should be more easily contaminated by variations in auditory frequency, while variations in distance should be less effective in contaminating pitch perception. While time and pitch were shown to be mutually contagious in Experiment 2, irrelevant variation in auditory frequency affected estimates of spatial distance while variations in spatial distance did not affect pitch judgments. Results overall suggest that the perceptual asymmetry between spatial and temporal domains does not necessarily generalize across modalities, and that time is not generally more abstract than space.
Affiliation(s)
- Alexander Kranjec: Department of Psychology, Duquesne University, Pittsburgh, PA, United States; Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, United States
- Matthew Lehet: Center for the Neural Basis of Cognition and Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, United States
- Adam J Woods: Cognitive Aging and Memory Clinical Translational Research Program, Institute on Aging, and Department of Aging and Geriatric Research, University of Florida, Gainesville, FL, United States
- Anjan Chatterjee: Department of Neurology, University of Pennsylvania, Philadelphia, PA, United States
14. Tillman KA, Tulagan N, Fukuda E, Barner D. The mental timeline is gradually constructed in childhood. Dev Sci 2018; 21:e12679. PMID: 29749676; DOI: 10.1111/desc.12679.
Abstract
When reasoning about time, English-speaking adults often invoke a "mental timeline" stretching from left to right. Although the direction of the timeline varies across cultures, the tendency to represent time as a line has been argued to be ubiquitous and primitive. On this hypothesis, we might predict that children also spontaneously invoke a spatial timeline when reasoning about time. However, little is known about how and when the mental timeline develops, or to what extent it is variable and malleable in childhood. Here, we used a sticker placement task to test whether preschoolers and kindergarteners spontaneously map temporal events (breakfast, lunch, and dinner) and deictic time words (yesterday, today, tomorrow) onto lines, and to what degree their representations of time are adult-like. We found that, at age 4, preschoolers were able to arrange temporal items in lines with minimal spatial priming. However, unlike kindergarteners and adults, most preschoolers did not represent time as a line spontaneously, in the absence of priming, and did not prefer left-to-right over right-to-left lines. Furthermore, unlike most adults, children of all ages could be easily primed to adopt an unconventional vertical timeline. Our findings suggest that mappings between time and space in children are initially flexible, and become increasingly automatic and conventionalized in the early school years.
Affiliation(s)
- Katharine A Tillman: Department of Psychology, University of California, San Diego, California, USA; Department of Psychology, The University of Texas at Austin, Austin, Texas, USA
- Nestor Tulagan: Department of Psychology, University of California, San Diego, California, USA; School of Education, University of California, Irvine, California, USA
- Eren Fukuda: Department of Psychology, University of California, San Diego, California, USA
- David Barner: Department of Psychology and Department of Linguistics, University of California, San Diego, California, USA
15. Kljajevic V, Vranes-Grujicic M, Raskovic K. Comprehension of Spatial Metaphors After Right Hemisphere Stroke: A Case Report. Serbian Journal of Experimental and Clinical Research 2018. DOI: 10.1515/sjecr-2017-0027.
Abstract
Studying how spatial information interacts with figurative language processing in right-hemisphere (RH) stroke patients is a relatively neglected area of research. The goal of the present case study was to establish whether an ischemic lesion in the right temporo-parietal region causing spatial neglect would affect comprehension of sentence-level spatial metaphors, since some evidence indicates the crucial role of the RH in metaphor processing. The patient under study showed some degree of cognitive impairment (e.g., in spatial and verbal working memory, executive control, visuo-spatial matching skills). However, his comprehension of spatial metaphors was preserved. This case illustrates that RH damage does not necessarily affect comprehension of sentence-level spatial metaphors.
Affiliation(s)
- Vanja Kljajevic: University of the Basque Country (UPV/EHU), Vitoria, Spain; IKERBASQUE, Basque Foundation for Science, Bilbao, Spain
16. Quandt LC, Lee YS, Chatterjee A. Neural bases of action abstraction. Biol Psychol 2017; 129:314-323. PMID: 28964789; DOI: 10.1016/j.biopsycho.2017.09.015.
Abstract
There has been recent debate over whether actions are processed primarily by means of motor simulation or cognitive semantics. The current study investigated how abstract action concepts are processed in the brain, independent of the format in which they are presented. Eighteen healthy adult participants viewed different actions (e.g., diving, boxing) in the form of verbs and schematic action pictograms while functional magnetic resonance imaging (fMRI) was collected. We predicted that sensorimotor and semantic brain regions would show similar patterns of neural activity for different instances of the same action (e.g., diving pictogram and the word 'diving'). A representational similarity analysis revealed posterior temporal and sensorimotor regions where specific action concepts were encoded, independent of the format of presentation. These results reveal the neural instantiations of abstract action concepts, and demonstrate that both sensorimotor and semantic systems are involved in processing actions.
Affiliation(s)
- Lorna C Quandt: Ph.D. in Educational Neuroscience Program, Gallaudet University, 800 Florida Ave NE, Washington, DC 20002, United States
- Yune-Sang Lee: Department of Speech and Hearing Science, Center for Brain Injury, The Ohio State University, 1070 Carmack Rd., Columbus, OH 43210, United States
- Anjan Chatterjee: Center for Cognitive Neuroscience, Department of Neurology, University of Pennsylvania, 3701 Hamilton Walk, Philadelphia, PA 19104, United States
17. Syntax response-space biases for hands, not feet. Atten Percept Psychophys 2017; 79:989-999. PMID: 28078554; DOI: 10.3758/s13414-016-1271-8.
Abstract
A number of studies have shown a relationship between comprehending transitive sentences and spatial processing (e.g., Chatterjee, Trends in Cognitive Sciences, 5(2), 55-61, 2001), in which there is an advantage for responding to images that depict the agent of an action to the left of the patient. Boiteau and Almor (Cognitive Science, 2016) demonstrated that a similar effect is found for pure linguistic information, such that after reading a sentence, identifying a word that had appeared earlier as the agent is faster on the left than on the right, but only for left-hand responses. In this study, we examined the role of lateralized manual motor processes in this effect and found that such spatial effects occur even when only the responses, but not the stimuli, have a spatial dimension. In support of the specific role of manual motor processes, we found a response-space effect with manual but not with pedal responses. Our results support an effector-specific (as opposed to an effector-general) hypothesis: Manual responses showed spatial effects compatible with those in previous research, whereas pedal responses did not. This is consistent with theoretical and empirical work arguing that the hands are generally involved with, and perhaps more sensitive to, linguistic information.
18. Conder J, Fridriksson J, Baylis GC, Smith CM, Boiteau TW, Almor A. Bilateral parietal contributions to spatial language. Brain and Language 2017; 164:16-24. PMID: 27690125; PMCID: PMC5179296; DOI: 10.1016/j.bandl.2016.09.007.
Abstract
It is commonly held that language is largely lateralized to the left hemisphere in most individuals, whereas spatial processing is associated with right hemisphere regions. In recent years, a number of neuroimaging studies have yielded conflicting results regarding the role of language and spatial processing areas in processing language about space (e.g., Carpenter, Just, Keller, Eddy, & Thulborn, 1999; Damasio et al., 2001). In the present study, we used sparse-scanning event-related functional magnetic resonance imaging (fMRI) to investigate the neural correlates of spatial language, that is, language used to communicate the spatial relationship of one object to another. During scanning, participants listened to sentences about object relationships that were either spatial or non-spatial (color or size relationships) in nature. Sentences describing spatial relationships elicited more activation in the superior parietal lobule (SPL) and precuneus bilaterally than sentences describing size or color relationships. Activation of the precuneus suggests that spatial sentences elicit spatial mental imagery, while activation of the SPL suggests that sentences containing spatial language involve the integration of two distinct sets of information: linguistic and spatial.
Affiliation(s)
- Julie Conder
- McMaster University, Department of Psychology, Neuroscience & Behaviour, Canada
- Julius Fridriksson
- University of South Carolina, Department of Communication Sciences and Disorders, United States
- Gordon C Baylis
- Western Kentucky University, Department of Psychological Sciences, United States
- Cameron M Smith
- Department of Psychology, University of South Carolina, United States; Institute for Mind and Brain, University of South Carolina, United States
- Timothy W Boiteau
- Department of Psychology, University of South Carolina, United States; Institute for Mind and Brain, University of South Carolina, United States
- Amit Almor
- Department of Psychology, University of South Carolina, United States; Institute for Mind and Brain, University of South Carolina, United States; Linguistics Program, University of South Carolina, United States.
19
van Dam WO, Desai RH. Embodied Simulations Are Modulated by Sentential Perspective. Cogn Sci 2016; 41:1613-1628. [PMID: 27859508 DOI: 10.1111/cogs.12449] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Received: 01/13/2016] [Revised: 07/13/2016] [Accepted: 07/19/2016] [Indexed: 11/28/2022]
Abstract
There is considerable evidence that language comprehenders derive lexical-semantic meaning by mentally simulating perceptual and motor attributes of described events. However, the nature of these simulations, including the level of detail that is incorporated and the contexts under which simulations occur, is not well understood. Here, we examine the effects of first- versus third-person perspective on mental simulations during sentence comprehension. First-person sentences describing physical transfer towards or away from the body (e.g., "You threw the microphone," "You caught the microphone") modulated response latencies when responses were made along a front-back axis, consistent with the action-sentence compatibility effect (ACE). This effect was not observed for third-person sentences ("He threw the microphone," "He caught the microphone"). The ACE was observed when making responses along a left-right axis for third-person, but not first-person sentences. Abstract sentences (e.g., "He heard the message") did not show an ACE along either axis. These results show that perspective is a detail that is simulated during action sentence comprehension, and that motoric activations are flexible and affected by the pronominal perspective used in the sentence.
20
Dobel C, Diesendruck G, Bölte J. How Writing System and Age Influence Spatial Representations of Actions. Psychol Sci 2007; 18:487-91. [PMID: 17576259 DOI: 10.1111/j.1467-9280.2007.01926.x] [Citation(s) in RCA: 68] [Impact Index Per Article: 8.5] [Indexed: 11/30/2022]
Abstract
Recently, researchers reported a bias for placing agents predominantly on the left side of pictures. Both hemispheric specialization and cultural preferences have been hypothesized to be the origin of this bias. To evaluate these hypotheses, we conducted a study with participants exposed to different reading and writing systems: Germans, who use a left-to-right system, and Israelis, who use a right-to-left system. In addition, we manipulated the degree of exposure to the writing systems by testing preschoolers and adults. Participants heard agent-first or recipient-first sentences and were asked to draw the content of the sentences or to arrange transparencies of protagonists and objects such that their arrangement depicted the sentences. Although preschool-age children in both countries showed no directional bias, adults manifested a bias that was consistent with the writing system of their language. These results support the cultural hypothesis regarding the origin of spatial-representational biases.
21
Boiteau TW, Almor A. Transitivity, Space, and Hand: The Spatial Grounding of Syntax. Cogn Sci 2016; 41:848-891. [DOI: 10.1111/cogs.12355] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Received: 02/24/2015] [Revised: 11/27/2015] [Accepted: 12/09/2015] [Indexed: 11/28/2022]
Affiliation(s)
- Amit Almor
- Department of Psychology, University of South Carolina
- Linguistics Program, University of South Carolina
22
Baltaretu A, Krahmer EJ, van Wijk C, Maes A. Talking about Relations: Factors Influencing the Production of Relational Descriptions. Front Psychol 2016; 7:103. [PMID: 26903911 PMCID: PMC4746286 DOI: 10.3389/fpsyg.2016.00103] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Received: 07/14/2015] [Accepted: 01/19/2016] [Indexed: 11/13/2022]
Abstract
In a production experiment (Experiment 1) and an acceptability rating experiment (Experiment 2), we assessed two factors, spatial position and salience, which may influence the production of relational descriptions (such as "the ball between the man and the drawer"). In Experiment 1, speakers were asked to refer unambiguously to a target object (a ball). In Experiment 1a, we addressed the role of spatial position, more specifically whether speakers mention the entity positioned leftmost in the scene as (first) relatum. The results showed a small preference to start with the left entity, which leaves room for other factors that could influence spatial reference. Thus, in the following studies, we varied salience systematically, by making one of the relatum candidates animate (Experiment 1b), and by adding attention capture cues, first subliminally by priming one relatum candidate with a flash (Experiment 1c), then explicitly by using salient colors for objects (Experiment 1d). Results indicate that spatial position played a dominant role. Entities on the left were mentioned more often as (first) relatum than those on the right (Experiments 1a-d). Animacy affected reference production in one out of three studies (in Experiment 1d). When salience was manipulated by priming visual attention or by using salient colors, there were no significant effects (Experiments 1c, d). In the acceptability rating study (Experiment 2), participants expressed their preference for specific relata by ranking descriptions on the basis of how well they thought the descriptions fitted the scene. Results show that participants most preferred the description that had an animate entity as the first mentioned relatum. The relevance of these results for models of reference production is discussed.
Affiliation(s)
- Adriana Baltaretu
- Tilburg Center for Cognition and Communication, Tilburg University, Tilburg, Netherlands
- Emiel J Krahmer
- Tilburg Center for Cognition and Communication, Tilburg University, Tilburg, Netherlands
- Carel van Wijk
- Tilburg Center for Cognition and Communication, Tilburg University, Tilburg, Netherlands
- Alfons Maes
- Tilburg Center for Cognition and Communication, Tilburg University, Tilburg, Netherlands
23
Quandt LC, Cardillo ER, Kranjec A, Chatterjee A. Fronto-temporal regions encode the manner of motion in spatial language. Neurosci Lett 2015; 609:171-5. [PMID: 26493606 DOI: 10.1016/j.neulet.2015.10.041] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Received: 06/23/2015] [Revised: 09/09/2015] [Accepted: 10/14/2015] [Indexed: 02/01/2023]
Abstract
When describing spatial events, dynamic actions can be decomposed into the path of motion (where the object moves) and the manner of motion (how the object moves). These components may be instantiated in two processing streams in the human brain, wherein dorsal parietal areas process path-related information, while ventral temporal regions process manner information. Previous research showed this pattern during the observation of videos showing animate characters in motion [15]. It is unknown whether reading language describing path and manner information, a level of abstraction beyond the perception of visual motion, relies on similar mechanisms. Here, we use functional neuroimaging to show that the left posterior middle temporal gyrus (pMTG) processes the manner of motion during reading. We also demonstrate the involvement of other ventral fronto-temporal regions in the understanding of manner of motion in spatial language.
Affiliation(s)
- Lorna C Quandt
- Center for Cognitive Neuroscience, University of Pennsylvania, 3710 Hamilton Walk, Philadelphia, PA, USA.
- Eileen R Cardillo
- Center for Cognitive Neuroscience, University of Pennsylvania, 3710 Hamilton Walk, Philadelphia, PA, USA
- Alexander Kranjec
- Psychology Department, Duquesne University, 211 Rockwell Hall, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Carnegie Mellon University, 4400 Fifth Ave., Suite 115, Pittsburgh, PA, USA
- Anjan Chatterjee
- Center for Cognitive Neuroscience, University of Pennsylvania, 3710 Hamilton Walk, Philadelphia, PA, USA
24
Walker P. Depicting Visual Motion in Still Images: Forward Leaning and a Left to Right Bias for Lateral Movement. Perception 2015; 44:111-28. [PMID: 26561966 DOI: 10.1068/p7897] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Indexed: 10/23/2022]
Abstract
What artistic conventions are used to convey the motion of animate and inanimate items in still images, such as drawings and photographs? One graphic convention involves depicting items leaning forward into their movement, with greater leaning conveying greater speed. Though this convention could derive from the natural leaning forward of people and animals as they run, it is also applied to depictions of inanimate objects (e.g., cars and trains). It is proposed that it is this convention that allows the italicization of text to convey notions of motion and speed. Evidence for this is obtained from three sources: the use of italicization on book covers (in book titles); judgments of typeface connotations; and performance measures during the semantic classification of words appearing in italicized and non-italicized fonts. Inspection of the availability of italic fonts in Hebrew indicates an additional artistic convention for conveying motion, based on a fundamental bias, yet to be confirmed, for people to expect to see, or prefer to see, lateral movement (real or implied) in a left to right direction, rather than a right to left direction. Evidence for such a bias is found in photographs of a range of animate and inanimate items archived on Google Images. Whereas a rightward bias is found for photographs of animate and inanimate items in motion (the more so, the faster the motion being conveyed), either no bias or a leftward bias is found for the same items in static pose. Possible origins of a fundamental left to right bias for visual motion, and future lines of research able to evaluate them, are identified.
Affiliation(s)
- Peter Walker
- Department of Psychology, Lancaster University, Lancaster LA1 4YF, UK
25
Göksun T, Lehet M, Malykhina K, Chatterjee A. Spontaneous gesture and spatial language: Evidence from focal brain injury. Brain Lang 2015; 150:1-13. [PMID: 26283001 PMCID: PMC4663137 DOI: 10.1016/j.bandl.2015.07.012] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Received: 03/12/2015] [Revised: 07/27/2015] [Accepted: 07/30/2015] [Indexed: 05/26/2023]
Abstract
People often use spontaneous gestures when communicating spatial information. We investigated focal brain-injured individuals to test the hypotheses that (1) naming of motion event components of manner and path (represented by verbs and prepositions in English) is selectively impaired, and (2) gestures compensate for impaired naming. Patients with left or right hemisphere damage (LHD or RHD) and elderly control participants were asked to describe motion events (e.g., running across) depicted in brief videos. Damage to the left posterior middle frontal gyrus, left inferior frontal gyrus, and left anterior superior temporal gyrus (aSTG) produced impairments in naming paths of motion; lesions to the left caudate and adjacent white matter produced impairments in naming manners of motion. While the frequency of spontaneous gestures was low, lesions to the left aSTG significantly correlated with greater production of path gestures. These findings suggest that producing prepositions and verbs can be separately impaired, and that gesture production compensates for naming impairments when damage involves the left aSTG.
Affiliation(s)
- Tilbe Göksun
- Department of Psychology, Koç University, Turkey.
- Matthew Lehet
- Department of Neurology, University of Pennsylvania School of Medicine, United States; Center for Cognitive Neuroscience, University of Pennsylvania, United States; Department of Psychology, Carnegie Mellon University, United States
- Anjan Chatterjee
- Department of Neurology, University of Pennsylvania School of Medicine, United States; Center for Cognitive Neuroscience, University of Pennsylvania, United States
26
Abstract
In American football, pass interference calls can be difficult to make, especially when the timing of contact between players is ambiguous. American football history contains many examples of controversial pass interference decisions, often with fans, players, and officials interpreting the same event differently. The current study sought to evaluate the influence of experience with concepts important for officiating decisions in American football on the probability (i.e., response criteria) of pass interference calls. We further investigated the extent to which such experience modulates perceptual biases that might influence the interpretation of such events. We hypothesized that observers with less experience with the American football concepts important for pass interference would make progressively more pass interference calls than more experienced observers, even when given an explicit description of the necessary criteria for a pass interference call. In a go/no-go experiment using photographs from American football games, three groups of participants with different levels of experience with American football (Football Naïve, Football Player, and Football Official) made pass interference calls for pictures depicting left-moving and right-moving events. More experience was associated with progressively and significantly fewer pass interference calls [F (2,48) = 10.4, p < 0.001], with Football Naïve participants making the most pass interference calls, and Football Officials the least. In addition, our data replicated a prior finding of spatial biases for interpreting left-moving images more harshly than identical right-moving images, but only in Football Players. These data suggest that experience with the concepts important for making a decision may influence the rate of decision-making, and may also play a role in susceptibility to spatial biases.
Affiliation(s)
- Adam J Woods
- Center for Cognitive Aging and Memory, Institute on Aging, Department of Aging and Geriatric Research, University of Florida, Gainesville, FL, USA; Department of Neurology, Center for Cognitive Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
- Alexander Kranjec
- Department of Psychology, Duquesne University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA
- Matt Lehet
- Department of Neurology, Center for Cognitive Neuroscience, University of Pennsylvania, Philadelphia, PA, USA; Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA
- Anjan Chatterjee
- Department of Neurology, Center for Cognitive Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
27
Thorne S, Hegarty P, Catmur C. Is the left hemisphere androcentric? Evidence of the learned categorical perception of gender. Laterality 2015; 20:571-84. [PMID: 25739413 PMCID: PMC4566876 DOI: 10.1080/1357650x.2015.1016529] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Received: 05/01/2014] [Accepted: 02/02/2015] [Indexed: 11/22/2022]
Abstract
Effects of language learning on categorical perception have been detected in multiple domains. We extended the methods of these studies to gender and pitted the predictions of androcentrism theory and the spatial agency bias against each other. Androcentrism is the tendency to take men as the default gender and is socialized through language learning. The spatial agency bias is a tendency to imagine men before women in the left-right axis in the direction of one's written language. We examined how gender-ambiguous faces were categorized as female or male when presented in the left visual field (LVF) and right visual field (RVF) to 42 native speakers of English. When stimuli were presented in the RVF rather than the LVF, participants (1) applied a lower threshold to categorize stimuli as male and (2) categorized clearly male faces as male more quickly. Both findings support androcentrism theory, suggesting that the left hemisphere, which is specialized for language, processes face stimuli as male-by-default more readily than the right hemisphere. Neither finding provides evidence for the effect of writing direction on the categorization of gender-ambiguous faces predicted by the spatial agency bias.
Affiliation(s)
- Peter Hegarty
- School of Psychology, University of Surrey, Guildford, UK
28
Troyer M, Curley LB, Miller LE, Saygin AP, Bergen BK. Action verbs are processed differently in metaphorical and literal sentences depending on the semantic match of visual primes. Front Hum Neurosci 2014; 8:982. [PMID: 25538604 PMCID: PMC4255517 DOI: 10.3389/fnhum.2014.00982] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Received: 07/18/2014] [Accepted: 11/17/2014] [Indexed: 11/13/2022]
Abstract
Language comprehension requires rapid and flexible access to information stored in long-term memory, likely influenced by activation of rich world knowledge and by brain systems that support the processing of sensorimotor content. We hypothesized that while literal language about biological motion might rely on neurocognitive representations of biological motion specific to the details of the actions described, metaphors rely on more generic representations of motion. In a priming and self-paced reading paradigm, participants saw video clips or images of (a) an intact point-light walker or (b) a scrambled control and read sentences containing literal or metaphoric uses of biological motion verbs either closely or distantly related to the depicted action (walking). We predicted that reading times for literal and metaphorical sentences would show differential sensitivity to the match between the verb and the visual prime. In Experiment 1, we observed interactions between the prime type (walker or scrambled video) and the verb type (close or distant match) for both literal and metaphorical sentences, but with strikingly different patterns. We found no difference in the verb region of literal sentences for Close-Match verbs after walker or scrambled motion primes, but Distant-Match verbs were read more quickly following walker primes. For metaphorical sentences, the results were roughly reversed, with Distant-Match verbs being read more slowly following a walker compared to scrambled motion. In Experiment 2, we observed a similar pattern following still image primes, though critical interactions emerged later in the sentence. We interpret these findings as evidence for shared recruitment of cognitive and neural mechanisms for processing visual and verbal biological motion information. Metaphoric language using biological motion verbs may recruit neurocognitive mechanisms similar to those used in processing literal language but be represented in a less-specific way.
Affiliation(s)
- Melissa Troyer
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA, USA
- Lauren B Curley
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA, USA
- Luke E Miller
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA, USA
- Ayse P Saygin
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA, USA
- Benjamin K Bergen
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA, USA
29
Kranjec A, Lupyan G, Chatterjee A. Categorical biases in perceiving spatial relations. PLoS One 2014; 9:e98604. [PMID: 24870560 PMCID: PMC4037194 DOI: 10.1371/journal.pone.0098604] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Received: 10/24/2013] [Accepted: 05/06/2014] [Indexed: 11/25/2022]
Abstract
We investigate the effect of spatial categories on visual perception. In three experiments, participants made same/different judgments on pairs of simultaneously presented dot-cross configurations. For different trials, the position of the dot within each cross could differ with respect to either categorical spatial relations (the dots occupied different quadrants) or coordinate spatial relations (the dots occupied different positions within the same quadrant). The dot-cross configurations also varied in how readily the dot position could be lexicalized. In harder-to-name trials, crosses formed a “+” shape such that each quadrant was associated with two discrete lexicalized spatial categories (e.g., “above” and “left”). In easier-to-name trials, both crosses were rotated 45° to form an “×” shape such that quadrants were unambiguously associated with a single lexicalized spatial category (e.g., “above” or “left”). In Experiment 1, participants were more accurate when discriminating categorical information between easier-to-name categories and more accurate at discriminating coordinate spatial information within harder-to-name categories. Subsequent experiments attempted to down-regulate or up-regulate the involvement of language in task performance. Results from Experiment 2 (verbal interference) and Experiment 3 (verbal training) suggest that the observed spatial relation type-by-nameability interaction is resistant to online language manipulations previously shown to affect color and object-based perceptual processing. The results across all three experiments suggest that robust biases in the visual perception of spatial relations correlate with patterns of lexicalization, but do not appear to be modulated by language online.
Affiliation(s)
- Alexander Kranjec
- Psychology Department, Duquesne University, Pittsburgh, Pennsylvania, United States of America
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Gary Lupyan
- Department of Psychology, University of Wisconsin, Madison, Wisconsin, United States of America
- Anjan Chatterjee
- Center for Cognitive Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
30
Watson CE, Cardillo ER, Bromberger B, Chatterjee A. The specificity of action knowledge in sensory and motor systems. Front Psychol 2014; 5:494. [PMID: 24904506 PMCID: PMC4033265 DOI: 10.3389/fpsyg.2014.00494] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.2] [Received: 01/28/2014] [Accepted: 05/06/2014] [Indexed: 11/24/2022]
Abstract
Neuroimaging studies have found that sensorimotor systems are engaged when participants observe actions or comprehend action language. However, most of these studies have asked the binary question of whether action concepts are embodied or not, rather than whether sensory and motor areas of the brain contain graded amounts of information during putative action simulations. To address this question, we used repetition suppression (RS) functional magnetic resonance imaging to determine if functionally-localized motor movement and visual motion regions-of-interest (ROIs) and two anatomical ROIs (inferior frontal gyrus, IFG; left posterior middle temporal gyrus, pMTG) were sensitive to changes in the exemplar (e.g., two different people "kicking") or representational format (e.g., photograph or schematic drawing of someone "kicking") within pairs of action images. We also investigated whether concrete versus more symbolic depictions of actions (i.e., photographs or schematic drawings) yielded different patterns of activation throughout the brain. We found that during a conceptual task, sensory and motor systems represent actions at different levels of specificity. While the visual motion ROI did not exhibit RS to different exemplars of the same action or to the same action depicted by different formats, the motor movement ROI did. These effects are consistent with "person-specific" action simulations: if the motor system is recruited for action understanding, it does so by activating one's own motor program for an action. We also observed significant repetition enhancement within the IFG ROI to different exemplars or formats of the same action, a result that may indicate additional cognitive processing on these trials. Finally, we found that the recruitment of posterior brain regions by action concepts depends on the format of the input: left lateral occipital cortex and right supramarginal gyrus responded more strongly to symbolic depictions of actions than concrete ones.
Affiliation(s)
- Christine E. Watson
- Moss Rehabilitation Research Institute, Einstein Healthcare Network, Elkins Park, PA, USA
- Department of Neurology and Center for Cognitive Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
- Eileen R. Cardillo
- Department of Neurology and Center for Cognitive Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
- Bianca Bromberger
- Department of Neurology and Center for Cognitive Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
- Anjan Chatterjee
- Department of Neurology and Center for Cognitive Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
31
Dobel C, Enriquez-Geppert S, Zwitserlood P, Bölte J. Literacy shapes thought: the case of event representation in different cultures. Front Psychol 2014; 5:290. [PMID: 24795665 PMCID: PMC3997043 DOI: 10.3389/fpsyg.2014.00290] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Received: 04/30/2013] [Accepted: 03/20/2014] [Indexed: 11/16/2022]
Abstract
There has been a lively debate whether conceptual representations of actions or scenes follow a left-to-right spatial transient when participants depict such events or scenes. It was even suggested that conceptualizing the agent on the left side represents a universal. We review the current literature with an emphasis on event representation and on cross-cultural studies. While there is quite some evidence for spatial bias for representations of events and scenes in diverse cultures, their extent and direction depend on task demands, one's native language, and importantly, on reading and writing direction. Whether transients arise only in subject-verb-object languages, due to their linear sentential position of event participants, is still an open issue. We investigated a group of illiterate speakers of Yucatec Maya, a language with a predominant verb-object-subject structure. They were compared to illiterate native speakers of Spanish. Neither group displayed a spatial transient. Given the current literature, we argue that learning to read and write has a strong impact on representations of actions and scenes. Thus, while it is still under debate whether language shapes thought, there is firm evidence that literacy does.
Affiliation(s)
- Christian Dobel
- Institute for Biomagnetism and Biosignalanalysis, Westfälische Wilhelms-Universität Münster, Münster, Germany
- Stefanie Enriquez-Geppert
- Institute for Biomagnetism and Biosignalanalysis, Westfälische Wilhelms-Universität Münster, Münster, Germany; Department of Experimental Psychology, European Medical School, Carl von Ossietzky University, Oldenburg, Germany
- Pienie Zwitserlood
- Institute for Psychology, Westfälische Wilhelms-Universität Münster, Münster, Germany
- Jens Bölte
- Institute for Psychology, Westfälische Wilhelms-Universität Münster, Münster, Germany
32

33
Cohn N, Paczynski M. Prediction, events, and the advantage of agents: the processing of semantic roles in visual narrative. Cogn Psychol 2013; 67:73-97. [PMID: 23959023 DOI: 10.1016/j.cogpsych.2013.07.002] [Citation(s) in RCA: 45] [Impact Index Per Article: 4.1] [Received: 12/19/2012] [Revised: 07/13/2013] [Accepted: 07/13/2013] [Indexed: 10/26/2022]
Abstract
Agents consistently appear prior to Patients in sentences, manual signs, and drawings, and Agents are responded to faster when presented in visual depictions of events. We hypothesized that this "Agent advantage" reflects Agents' role in event structure. We investigated this question by manipulating the depictions of Agents and Patients in preparatory actions in wordless visual narratives. We found that Agents elicited a greater degree of predictions regarding upcoming events than Patients, that Agents are viewed longer than Patients, independent of serial order, and that visual depictions of actions are processed more quickly following the presentation of an Agent vs. a Patient. Taken together these findings support the notion that Agents initiate the building of event representation. We suggest that Agent First orders facilitate the interpretation of events as they unfold and that the saliency of Agents within visual representations of events is driven by anticipation of upcoming events.
Affiliation(s)
- Neil Cohn
- Center for Research in Language, University of California, San Diego, La Jolla, CA, USA.
34
Göksun T, Lehet M, Malykhina K, Chatterjee A. Naming and gesturing spatial relations: evidence from focal brain-injured individuals. Neuropsychologia 2013; 51:1518-27. [PMID: 23685196 DOI: 10.1016/j.neuropsychologia.2013.05.006] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5] [Received: 09/24/2012] [Revised: 02/21/2013] [Accepted: 05/07/2013] [Indexed: 11/26/2022]
Abstract
Spatial language helps us to encode relations between objects and organize our thinking. Little is known about the neural instantiations of spatial language. Using voxel-lesion symptom mapping (VLSM), we tested the hypothesis that focal brain injured patients who had damage to left frontal-parietal peri-Sylvian regions would have difficulty in naming spatial relations between objects. We also investigated the relationship between impaired verbalization of spatial relations and spontaneous gesture production. Patients with left or right hemisphere damage and elderly control participants were asked to name static (e.g., an apple on a book) and dynamic (e.g., a pen moves over a box) locative relations depicted in brief video clips. The correct use of prepositions in each task and gestures that represent the spatial relations were coded. Damage to the left posterior middle frontal gyrus, the left inferior frontal gyrus, and the left anterior superior temporal gyrus was related to impairment in naming spatial relations. Production of spatial gestures negatively correlated with naming accuracy, suggesting that gestures might help or compensate for difficulty with lexical access. Additional analyses suggested that left hemisphere patients who had damage to the left posterior middle frontal gyrus and the left inferior frontal gyrus gestured less than expected, if gestures are used to compensate for impairments in retrieving prepositions.
Affiliation(s)
- Tilbe Göksun
- Department of Neurology and Center for Cognitive Neuroscience, University of Pennsylvania, Philadelphia, PA 19104, USA.
35
Kranjec A, Ianni G, Chatterjee A. Schemas reveal spatial relations to a patient with simultanagnosia. Cortex 2013; 49:1983-8. [PMID: 23643246 DOI: 10.1016/j.cortex.2013.03.005] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2012] [Revised: 12/20/2012] [Accepted: 03/17/2013] [Indexed: 11/17/2022]
Abstract
Maps, graphs, and diagrams use simplified graphic forms, like lines and blobs, to represent basic spatial relations, like boundaries and enclosures. A schema is an iconic representation where perceptual detail has been abstracted away from reality in order to provide a more flexible structure for cognition. Unlike truly symbolic representations of spatial relations (i.e., prepositions) a schema preserves some analog spatial qualities of the relation it stands in for. We tested the efficacy of schemas in facilitating the perception and comprehension of spatial relations in a patient with bilateral occipitoparietal damage and resulting simultanagnosia. Patient E.E. performed six matching tasks involving WORDS (in, on, above, below), photographic PICTURES of objects, and/or SCHEMAS depicting the same spatial relations. E.E. was instructed to match a single spatial relation to a corresponding image from an array of four choices. On the two tasks that did not include matching to or from schemas, E.E. performed at chance levels. On tasks with schemas, performance was significantly better, indicating that schematic representations make spatial relations visible in a manner that symbols and complex images do not. The results provide general insight as to how schemas facilitate spatial reasoning when used in graphic depictions, and how such theoretically intermediate representational structures could serve to link perceptual and verbal representations of spatial relations in the brain.
Affiliation(s)
- Alexander Kranjec
- Psychology Department, Duquesne University, PA, USA; Center for the Neural Basis of Cognition, Carnegie Mellon University, PA 15282, USA.
36
Nickerson JV, Corter JE, Tversky B, Rho YJ, Zahner D, Yu L. Cognitive tools shape thought: diagrams in design. Cogn Process 2013; 14:255-72. [DOI: 10.1007/s10339-013-0547-3] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2012] [Accepted: 01/25/2013] [Indexed: 10/27/2022]
37
Tranel D, Kemmerer D, Adolphs R, Damasio H, Damasio AR. Neural correlates of conceptual knowledge for actions. Cogn Neuropsychol 2012; 20:409-32. [PMID: 20957578 DOI: 10.1080/02643290244000248] [Citation(s) in RCA: 185] [Impact Index Per Article: 15.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Abstract
The neural correlates of conceptual knowledge for actions are not well understood. To begin to address this knowledge gap, we tested the hypothesis that the retrieval of conceptual knowledge for actions depends on neural systems located in higher-order association cortices of left premotor/prefrontal, parietal, and posterior middle temporal regions. The investigation used the lesion method and involved 90 subjects with damage to various regions of the left or right hemisphere. The experimental tasks measured retrieval of knowledge for actions, in a nonverbal format: Subjects evaluated attributes of pictured actions, and compared and matched pictures of actions. In support of our hypothesis, we found that the regions of highest lesion overlap in subjects with impaired retrieval of conceptual knowledge for actions were in the left premotor/prefrontal sector, the left parietal region, and in the white matter underneath the left posterior middle temporal region. These sites are partially distinct from those identified previously as being important for the retrieval of words for actions. We propose that a key function of the sites is to operate as two-way intermediaries between perception and concept retrieval, to promote the retrieval of the multidimensional aspects of knowledge that are necessary and sufficient for the mental representation of a concept of a given action.
Affiliation(s)
- Daniel Tranel
- University of Iowa College of Medicine, Iowa City, USA
38
Amorapanth P, Kranjec A, Bromberger B, Lehet M, Widick P, Woods AJ, Kimberg DY, Chatterjee A. Language, perception, and the schematic representation of spatial relations. BRAIN AND LANGUAGE 2012; 120:226-236. [PMID: 22070948 PMCID: PMC3299879 DOI: 10.1016/j.bandl.2011.09.007] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/18/2010] [Revised: 09/23/2011] [Accepted: 09/25/2011] [Indexed: 05/31/2023]
Abstract
Schemas are abstract nonverbal representations that parsimoniously depict spatial relations. Despite their ubiquitous use in maps and diagrams, little is known about their neural instantiation. We sought to determine the extent to which schematic representations are neurally distinguished from language on the one hand, and from rich perceptual representations on the other. In patients with either left hemisphere damage or right hemisphere damage, a battery of matching tasks depicting categorical spatial relations was used to probe for the comprehension of basic spatial concepts across distinct representational formats (words, pictures, and schemas). Left hemisphere patients underperformed right hemisphere patients across all tasks. However, focused residual analyses using voxel-based lesion-symptom mapping (VLSM) suggest that (1) left hemisphere deficits in the representation of categorical spatial relations are difficult to distinguish from deficits in naming these relations and (2) the right hemisphere plays a special role in extracting schematic representations from richly textured pictures.
Affiliation(s)
- Prin Amorapanth
- Neurology Department and the Center for Cognitive Neuroscience, University of Pennsylvania, Philadelphia, USA
- Alexander Kranjec
- Psychology Department, Duquesne University, Pittsburgh, USA
- Neurology Department and the Center for Cognitive Neuroscience, University of Pennsylvania, Philadelphia, USA
- Bianca Bromberger
- Neurology Department and the Center for Cognitive Neuroscience, University of Pennsylvania, Philadelphia, USA
- Matthew Lehet
- Neurology Department and the Center for Cognitive Neuroscience, University of Pennsylvania, Philadelphia, USA
- Page Widick
- Neurology Department and the Center for Cognitive Neuroscience, University of Pennsylvania, Philadelphia, USA
- Adam J. Woods
- Neurology Department and the Center for Cognitive Neuroscience, University of Pennsylvania, Philadelphia, USA
- Daniel Y. Kimberg
- Neurology Department and the Center for Cognitive Neuroscience, University of Pennsylvania, Philadelphia, USA
- Anjan Chatterjee
- Neurology Department and the Center for Cognitive Neuroscience, University of Pennsylvania, Philadelphia, USA
39
Krumnack A, Bucher L, Nejasmic J, Nebel B, Knauff M. A model for relational reasoning as verbal reasoning. COGN SYST RES 2011. [DOI: 10.1016/j.cogsys.2010.11.001] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
40
Kranjec A, Cardillo ER, Schmidt GL, Lehet M, Chatterjee A. Deconstructing events: the neural bases for space, time, and causality. J Cogn Neurosci 2011; 24:1-16. [PMID: 21861674 DOI: 10.1162/jocn_a_00124] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Space, time, and causality provide a natural structure for organizing our experience. These abstract categories allow us to think relationally in the most basic sense; understanding simple events requires one to represent the spatial relations among objects, the relative durations of actions or movements, and the links between causes and effects. The present fMRI study investigates the extent to which the brain distinguishes between these fundamental conceptual domains. Participants performed a 1-back task with three conditions of interest (space, time, and causality). Each condition required comparing relations between events in a simple verbal narrative. Depending on the condition, participants were instructed to attend to either the spatial, temporal, or causal characteristics of events, but between participants each particular event relation appeared in all three conditions. Contrasts compared neural activity during each condition against the remaining two and revealed how thinking about events is deconstructed neurally. Space trials recruited neural areas traditionally associated with visuospatial processing, primarily bilateral frontal and occipitoparietal networks. Causality trials activated areas previously found to underlie causal thinking and thematic role assignment, such as left medial frontal and left middle temporal gyri, respectively. Causality trials also produced activations in SMA, caudate, and cerebellum: cortical and subcortical regions associated with the perception of time at different timescales. The time contrast, however, produced no significant effects. This pattern, indicating negative results for time trials but positive effects for causality trials in areas important for time perception, motivated additional overlap analyses to further probe relations between domains. The results of these analyses suggest a closer correspondence between time and causality than between time and space.
Affiliation(s)
- Alexander Kranjec
- Psychology Department, Duquesne University, Pittsburgh, PA 15282, USA.
41
Troadec B, Zarhbouch B. Flèche du temps, compétences linguistiques et routines culturelles : une étude de la diversité chez des enfants de 10-11 ans en France et au Maroc [The arrow of time, linguistic skills, and cultural routines: a study of diversity among 10- to 11-year-old children in France and Morocco]. ANNEE PSYCHOLOGIQUE 2011. [DOI: 10.4074/s0003503311002016] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
42
Troadec B, Zarhbouch B. Flèche du temps, compétences linguistiques et routines culturelles : une étude de la diversité chez des enfants de 10-11 ans en France et au Maroc [The arrow of time, linguistic skills, and cultural routines: a study of diversity among 10- to 11-year-old children in France and Morocco]. ANNEE PSYCHOLOGIQUE 2011. [DOI: 10.3917/anpsy.112.0227] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
43
Dobel C, Enriquez-Geppert S, Hummert M, Zwitserlood P, Bölte J. Conceptual representation of actions in sign language. JOURNAL OF DEAF STUDIES AND DEAF EDUCATION 2011; 16:392-400. [PMID: 21339342 DOI: 10.1093/deafed/enq070] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
The idea that knowledge of events entails a universal spatial component, that is, conceiving of agents left of patients, was put to the test by investigating native users of German sign language and native users of spoken German. Participants heard or saw event descriptions and had to illustrate the meaning of these events by means of drawing or arranging toys. Two types of verbs were tested, differing in the way they are signed. Verbs with a horizontal transient are typically signed with a left-to-right directionality, from the addressee's point of view. In contrast, verbs with sagittal transients display transitions moving toward or away from the speaker. Signers showed a direct mapping preference for verbs with horizontal transients, by putting agents at the same position in space as in the signed message (i.e., mirroring signing space). No such effect was found for verbs with sagittal transients. In all, the data fit with the idea that interpretations of signed or spoken languages are modulated by task and culture as well as language-related factors and constraints.
Affiliation(s)
- Christian Dobel
- Institute for Biomagnetism and Biosignal Analysis, Westfälische Wilhelms-Universität Münster, Malmedyweg 15, 48149 Münster, Germany.
44
Kazandjian S, Gaash E, Love IY, Zivotofsky AZ, Chokron S. Spatial Representation of Action Phrases Among Bidirectional Readers. SOCIAL PSYCHOLOGY 2011. [DOI: 10.1027/1864-9335/a000069] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
Perceptual bias in simple visuospatial tasks, such as line bisection seen among healthy dextrals, has often been attributed to the hemispheric activation hypothesis. The often reported leftward perceptual bias was explained by an activation of the right hemisphere during visuospatial tasks. However, imposed scanning direction and stimuli saliency have also been used to explain these spatial asymmetries. One example of scanning direction is the well-trained one resulting from reading direction. Here, we present studies that target the role of reading direction on nonverbal tasks: line bisection, esthetic preference, and straight-ahead pointing by comparing left-to-right and right-to-left readers. The findings are discussed regarding the interaction between cultural factors, such as reading habits, and biological factors, such as cerebral lateralization.
Affiliation(s)
- Seta Kazandjian
- ERT TREAT Vision, Laboratoire de Psychologie et NeuroCognition, UMR 5105 CNRS-Université Pierre Mendès France, France
- Sylvie Chokron
- ERT TREAT Vision, Laboratoire de Psychologie et NeuroCognition, UMR 5105 CNRS-Université Pierre Mendès France, France
- Service de Neurologie, Fondation Ophtalmologique Adolphe de Rothschild, France
45
Kranjec A, Chatterjee A. Are temporal concepts embodied? A challenge for cognitive neuroscience. Front Psychol 2010; 1:240. [PMID: 21833293 PMCID: PMC3153844 DOI: 10.3389/fpsyg.2010.00240] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2010] [Accepted: 12/20/2010] [Indexed: 11/29/2022] Open
Abstract
Is time an embodied concept? People often talk and think about temporal concepts in terms of space. This observation, along with linguistic and experimental behavioral data documenting a close conceptual relation between space and time, is often interpreted as evidence that temporal concepts are embodied. However, there is little neural data supporting the idea that our temporal concepts are grounded in sensorimotor representations. This lack of evidence may be because it is still unclear how an embodied concept of time should be expressed in the brain. The present paper sets out to characterize the kinds of evidence that would support or challenge embodied accounts of time. Of main interest are theoretical issues concerning (1) whether space, as a mediating concept for time, is itself best understood as embodied and (2) whether embodied theories should attempt to bypass space by investigating temporal conceptual grounding in neural systems that instantiate time perception.
Affiliation(s)
- Alexander Kranjec
- Department of Neurology, Center for Cognitive Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
46
Khetrapal N. Interactions of space and language: Insights from the neglect syndrome. AUSTRALIAN JOURNAL OF PSYCHOLOGY 2010. [DOI: 10.1080/00049530903567211] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
- Neha Khetrapal
- Center of Excellence Cognitive Interaction Technology and Department of Psychology and Sport Sciences, University of Bielefeld, Bielefeld, Germany
47
Ulrich R, Maienborn C. Left-right coding of past and future in language: the mental timeline during sentence processing. Cognition 2010; 117:126-38. [PMID: 20850112 DOI: 10.1016/j.cognition.2010.08.001] [Citation(s) in RCA: 63] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2008] [Revised: 07/18/2010] [Accepted: 08/07/2010] [Indexed: 10/19/2022]
Abstract
The metaphoric mapping theory suggests that abstract concepts, like time, are represented in terms of concrete dimensions such as space. This theory receives support from several lines of research ranging from psychophysics to linguistics and cultural studies; especially strong support comes from recent response time studies. These studies have reported congruency effects between the dimensions of time and space indicating that time evokes spatial representations that may facilitate or impede responses to words with a temporal connotation. The present paper reports the results of three linguistic experiments that examined this congruency effect when participants processed past- and future-related sentences. Response time was shorter when past-related sentences required a left-hand response and future-related sentences a right-hand response than when this mapping of time onto response hand was reversed (Experiment 1). This result suggests that participants can form time-space associations during the processing of sentences and thus this result is consistent with the view that time is mentally represented from left to right. The activation of these time-space associations, however, appears to be non-automatic as shown by the results of Experiments 2 and 3 when participants were asked to perform a non-temporal meaning discrimination task.
Affiliation(s)
- Rolf Ulrich
- Cognitive and Biological Psychology, University of Tübingen, Friedrichstrasse 21, 72072 Tübingen, Germany.
48
Abstract
Depictive expressions of thought predate written language by thousands of years. They have evolved in communities through a kind of informal user testing that has refined them. Analyzing common visual communications reveals consistencies that illuminate how people think as well as guide design; the process can be brought into the laboratory and accelerated. Like language, visual communications abstract and schematize; unlike language, they use properties of the page (e.g., proximity and place: center, vertical/up-down, horizontal/left-right) and the marks on it (e.g., dots, lines, arrows, boxes, blobs, likenesses, symbols) to convey meanings. The visual expressions of these meanings (e.g., individual, category, order, relation, correspondence, continuum, hierarchy) have analogs in language, gesture, and especially in the patterns that are created when people design the world around them, arranging things into piles and rows and hierarchies and arrays, spatial-abstraction-action interconnections termed spractions. The designed world is a diagram.
49
Abstract
Distinguishing between a fair and unfair tackle in soccer can be difficult. For referees, choosing to call a foul often requires a decision despite some level of ambiguity. We were interested in whether a well documented perceptual-motor bias associated with reading direction influenced foul judgments. Prior studies have shown that readers of left-to-right languages tend to think of prototypical events as unfolding concordantly, from left-to-right in space. It follows that events moving from right-to-left should be perceived as atypical and relatively debased. In an experiment using a go/no-go task and photographs taken from real games, participants made more foul calls for pictures depicting left-moving events compared to pictures depicting right-moving events. These data suggest that two referees watching the same play from distinct vantage points may be differentially predisposed to call a foul.
50
Abstract
The idea that concepts are embodied by our motor and sensory systems is popular in current theorizing about cognition. Embodied cognition accounts come in different versions and are often contrasted with a purely symbolic, amodal view of cognition. Simulation, or the hypothesis that concepts simulate the sensory and motor experience of real-world encounters with instances of those concepts, has been prominent in psychology and cognitive neuroscience. Here, with a focus on spatial thought and language, I review some of the evidence cited in support of simulation versions of embodied cognition accounts. While these data are extremely interesting and many of the experiments are elegant, knowing how best to interpret the results is often far from clear. I point out that a quick acceptance of embodied accounts runs the danger of ignoring alternate hypotheses and not scrutinizing neuroscience data critically. I also review recent work from my lab that raises questions about the nature of sensory-motor grounding in spatial thought and language. In my view, the question of whether or not cognition is grounded is more fruitfully replaced by questions about gradations in this grounding. A focus on disembodying cognition, or on graded grounding, opens the way to think about how humans abstract. Within neuroscience, I propose that three functional anatomic axes help frame questions about the graded nature of grounded cognition. First are questions of laterality differences: do association cortices in both hemispheres instantiate the same kind of sensory or motor information? Second are questions about ventral-dorsal axes: do neuronal ensembles along this axis shift from conceptual representations of objects to the relationships between objects? Third are questions about gradients running centripetally from sensory and motor cortices towards and within perisylvian cortices: how does sensory and perceptual information become more language-like and then get transformed into language proper?
Affiliation(s)
- Anjan Chatterjee
- Correspondence address: Anjan Chatterjee, Department of Neurology and the Center for Cognitive Neuroscience, University of Pennsylvania, 3 West Gates, 3400 Spruce Street, Philadelphia, PA 19104, USA.