1
Sekine K, Ikuta M. Grammatical structures of emoji in Japanese-language text conversations. Cogn Res Princ Implic 2024; 9:49. PMID: 39073677; PMCID: PMC11286883; DOI: 10.1186/s41235-024-00571-9.
Abstract
Emojis have become a ubiquitous part of everyday text communication worldwide. Cohn et al. (Cognit Res Princ Implic 4(1):1-18, 2019) studied the grammatical structure of emoji usage among English speakers and found a correlation between the sequence of emojis used and English word order, tending towards a subject-verb-object (SVO) sequence. However, it remains unclear whether emoji usage follows a universal grammar or is influenced by native language grammar. Therefore, this study explored the potential influence of Japanese grammar on emoji usage by Japanese speakers. Twenty adults, all native Japanese speakers, participated in pairs. In Experiment 1, participants engaged in conversations through Google Hangouts on iPads. The experiment consisted of four conversation rounds of approximately 8 min each. The first two rounds involved one participant using only written Japanese and the other using only emojis and punctuation, with roles reversed in the second round. The third round required both participants to use only emojis and punctuation. The results indicated that participants preferred subject-object-verb (SOV) or object-verb (OV) sequences, with OV patterns being more common. This pattern reflects a distinctive attribute of Japanese grammatical structure, marked by the frequent omission of the subject. Experiment 2 substituted emojis for words, showing that nouns were more commonly replaced than verbs due to the difficulty of conveying complex meanings. Reduced subject replacements again emphasised Japanese grammatical structure. In essence, emoji usage reflects native language structures, but complexities are challenging to convey, resulting in simplified sequences. This study offers insights for enhancing emoji-based communication and interface design, with implications for translation and broader communication.
Affiliation(s)
- Kazuki Sekine
- Faculty of Human Sciences, Waseda University, 2-579-15 Mikajima, Tokorozawa, Saitama, 359-1164, Japan.
- Manaka Ikuta
- Faculty of Human Sciences, Waseda University, 2-579-15 Mikajima, Tokorozawa, Saitama, 359-1164, Japan
2
Weissman B, Cohn N, Tanner D. The electrophysiology of lexical prediction of emoji and text. Neuropsychologia 2024; 198:108881. PMID: 38579906; DOI: 10.1016/j.neuropsychologia.2024.108881.
Abstract
As emoji often appear naturally alongside text in utterances, they provide a way to study how prediction unfolds in multimodal sentences in direct comparison to unimodal sentences. In this experiment, participants (N = 40) read sentences in which the sentence-final noun appeared in either word form or emoji form, a between-subjects manipulation. The experiment featured both high-constraint and low-constraint sentences to examine how the lexical processing of emoji interacts with prediction processes in sentence comprehension. Two well-established ERP components linked to lexical processing and prediction, the N400 and the Late Frontal Positivity (LFP), are investigated for sentence-final words and emoji to assess whether, to what extent, and in what linguistic contexts emoji are processed like words. Results indicate that the expected effects, namely an N400 effect to an implausible lexical item compared to a plausible one and an LFP effect to an unexpected lexical item compared to an expected one, emerged for both words and emoji. This paper discusses the similarities and differences between the stimulus types and constraint conditions, contextualized within theories of linguistic prediction, ERP components, and a multimodal lexicon.
Affiliation(s)
- Benjamin Weissman
- Department of Cognitive Science Rensselaer Polytechnic Institute 110 8th Street, Troy, NY, 12180, USA; Department of Linguistics University of Illinois at Urbana-Champaign 707 S Mathews Ave, Urbana, IL, 61801, USA.
- Neil Cohn
- Department of Communication and Cognition Tilburg University PO Box 90153, 5000, LE Tilburg, the Netherlands
- Darren Tanner
- Department of Linguistics University of Illinois at Urbana-Champaign 707 S Mathews Ave, Urbana, IL, 61801, USA; AI For Good Lab Microsoft 1 Microsoft Way, Redmond, WA, USA
3
Hagoort P, Özyürek A. Extending the Architecture of Language From a Multimodal Perspective. Top Cogn Sci 2024. PMID: 38493475; DOI: 10.1111/tops.12728.
Abstract
Language is inherently multimodal. In spoken languages, combined spoken and visual signals (e.g., co-speech gestures) are an integral part of linguistic structure and language representation. This requires an extension of the parallel architecture, which needs to include the visual signals concomitant to speech. We present the evidence for the multimodality of language. In addition, we propose that distributional semantics might provide a format for integrating speech and co-speech gestures in a common semantic representation.
Affiliation(s)
- Peter Hagoort
- Max Planck Institute for Psycholinguistics, Nijmegen
- Donders Institute for Brain, Cognition and Behaviour, Nijmegen
- Aslı Özyürek
- Max Planck Institute for Psycholinguistics, Nijmegen
- Donders Institute for Brain, Cognition and Behaviour, Nijmegen
4
Consorti F, Fiorucci S, Martucci G, Lai S. Graphic Novels and Comics in Undergraduate and Graduate Medical Students Education: A Scoping Review. Eur J Investig Health Psychol Educ 2023; 13:2262-2275. PMID: 37887161; PMCID: PMC10606189; DOI: 10.3390/ejihpe13100160.
Abstract
There is an increasing use of graphic novels and comics (GnCs) in medical education, especially, but not only, to provide students with a vicarious learning experience in some areas of clinical medicine (palliative care, difficult communication, and rare diseases). This scoping review aimed to answer questions about how GnCs are used, the theories underlying their use, and the learning outcomes. Twenty-nine articles were selected from bibliographic databases and analyzed. A thematic analysis revealed four main themes: learning outcomes, students' reactions, theories and methods, and use of GnCs as vicarious learning. GnCs can support the achievement of cognitive outcomes, as well as soft skills and professionalism. The reactions were engagement and amusement, but drawing comics was also perceived as a protected space to express concerns. GnCs proved to be a possible way to provide a vicarious experience for learning. We found two classes of theories on the use of GnCs: psychological theories based on the dual concurrent coding of text and images, and semiotic theories on the interpretation of signs. All the studies but two were single-arm and observational, whether quantitative, qualitative, or mixed. These results suggest that further high-quality research on the use of GnCs in medical training is worthwhile.
Affiliation(s)
- Fabrizio Consorti
- Department of General Surgery, University Sapienza of Rome, 00185 Rome, Italy
- Silvia Lai
- Department of Translational and Precision Medicine, University Sapienza of Rome, 00185 Rome, Italy
5
Weissman B, Engelen J, Baas E, Cohn N. The Lexicon of Emoji? Conventionality Modulates Processing of Emoji. Cogn Sci 2023; 47:e13275. PMID: 37002916; DOI: 10.1111/cogs.13275.
Abstract
Emoji have been ubiquitous in communication for over a decade, yet how they derive meaning remains underexplored. Here, we examine an aspect fundamental to linguistic meaning-making: the degree to which emoji have conventional lexicalized meanings and whether that conventionalization affects processing in real time. Experiment 1 establishes a range of meaning agreement levels across emoji within a population; Experiment 2 measures accuracy and response times to word-emoji pairings in a match/mismatch task. In this experiment, we found that accuracy and response time both correlated significantly with the level of population-wide meaning agreement from Experiment 1, suggesting that lexical access of single emoji may be comparable to that of words, even out of context. This is consistent with theories of a multimodal lexicon that stores links between meaning, structure, and modality in long-term memory. Altogether, these findings suggest that emoji can take on a range of entrenched, lexicalized representations.
Affiliation(s)
- Jan Engelen
- Department of Communication and Cognition, Tilburg University
- Elise Baas
- Department of Communication and Cognition, Tilburg University
- Neil Cohn
- Department of Communication and Cognition, Tilburg University
6
Steciuch CC, Millis K, Kopatich RD. Is viewing a painting like reading a story? Trans-symbolic comprehension processes and aesthetic responses across two media. Discourse Processes 2023. DOI: 10.1080/0163853x.2023.2172299.
Affiliation(s)
- Christian C. Steciuch
- Department of Psychology, Rockford University
- Center for the Interdisciplinary Study of Language and Literacy
- Keith Millis
- Center for the Interdisciplinary Study of Language and Literacy
- Department of Psychology, Northern Illinois University
- Ryan D. Kopatich
- Center for the Interdisciplinary Study of Language and Literacy
- Department of Psychology and Neuroscience, Augustana College
7
Cohn N, Schilperoord J. Remarks on Multimodality: Grammatical Interactions in the Parallel Architecture. Front Artif Intell 2022; 4:778060. PMID: 35059636; PMCID: PMC8764459; DOI: 10.3389/frai.2021.778060.
Abstract
Language is typically embedded in multimodal communication, yet models of linguistic competence do not often incorporate this complexity. Meanwhile, speech, gesture, and/or pictures are each treated as indivisible components of multimodal messages. Here, we argue that multimodality should not be characterized by whole interacting behaviors, but by interactions of similar substructures which permeate across expressive behaviors. These structures comprise a unified architecture and align within Jackendoff's Parallel Architecture: a modality, meaning, and grammar. Because this tripartite architecture persists across modalities, interactions can manifest within each of these substructures. Interactions between modalities alone create correspondences in time (e.g., speech with gesture) or space (e.g., writing with pictures) of the sensory signals, while multimodal meaning-making balances how modalities carry "semantic weight" for the gist of the whole expression. Here we focus primarily on interactions between grammars, which contrast across two variables: symmetry, related to the complexity of the grammars, and allocation, related to the relative independence of interacting grammars. While independent allocations keep grammars separate, substitutive allocation inserts expressions from one grammar into those of another. We show that substitution operates in interactions between all three natural modalities (vocal, bodily, graphic), and also in unimodal contexts within and between languages, as in codeswitching. Altogether, we argue that unimodal and multimodal expressions arise as emergent interactive states from a unified cognitive architecture, heralding a reconsideration of the "language faculty" itself.
Affiliation(s)
- Neil Cohn
- Department of Communication and Cognition, Tilburg School of Humanities and Digital Sciences, Tilburg University, Tilburg, Netherlands
8
Wannagat W, Waizenegger G, Nieding G. Coherence formation during narrative text processing: a comparison between auditory and audiovisual text presentation in 9- to 12-year-old children. Cogn Process 2021; 22:299-310. PMID: 33404902; PMCID: PMC8179903; DOI: 10.1007/s10339-020-01008-z.
Abstract
In an experiment with 114 children aged 9-12 years, we compared the ability to establish local and global coherence of narrative texts between auditory and audiovisual (auditory text and pictures) presentation. The participants listened to a series of short narrative texts, in each of which a protagonist pursued a goal. Following each text, we collected the response time to a query word associated with either a near or a distant causal antecedent of the final sentence. Analysis of these response times indicated that audiovisual presentation has advantages over auditory presentation for accessing information relevant to establishing both local and global coherence, but there are indications that this effect may be slightly more pronounced for global coherence.
Affiliation(s)
- Wienke Wannagat
- Department of Psychology, Developmental Psychology, University of Würzburg, Röntgenring 10, 97070 Würzburg, Germany
- Gesine Waizenegger
- Department of Psychology, Developmental Psychology, University of Würzburg, Röntgenring 10, 97070 Würzburg, Germany
- Gerhild Nieding
- Department of Psychology, Developmental Psychology, University of Würzburg, Röntgenring 10, 97070 Würzburg, Germany
9
Are emojis processed like words? Eye movements reveal the time course of semantic processing for emojified text. Psychon Bull Rev 2021; 28:978-991. PMID: 33511541; DOI: 10.3758/s13423-020-01864-y.
Abstract
Emojis have many functions that support reading. Most obviously, they convey semantic information and support reading comprehension (Lo, CyberPsychology & Behavior, 11[5], 595-597, 2008; Riordan, Computers in Human Behavior, 76, 75-86, 2017b). However, it is undetermined whether emojis recruit the same perceptual and cognitive processes for identification and integration during reading as do words. To investigate whether emojis are processed like words, we used eye tracking to examine the time course of semantic processing of emojis during reading. Materials consisted of sentences containing a target word (e.g., coffee in the sentence "My tall coffee is just the right temperature") when there was no emoji present and when there was a semantically congruent (i.e., synonymous) emoji (e.g., the cup-of-coffee emoji) or an incongruent emoji (e.g., the beer-mug emoji) present at the end of the sentence. Similar to congruency effects with words, congruent emojis were fixated for shorter periods and were less likely to be refixated than were incongruent emojis. In addition, congruent emojis were more frequently skipped than incongruent emojis, which suggests that semantic aspects of emoji processing begin in the parafovea. Finally, the presence of an emoji, relative to its absence, increased target-word skipping rates and reduced total time on target words. We discuss the implications of our findings for models of eye-movement control during reading.
10
Texts and pictures serve different functions in conjoint mental model construction and adaptation. Mem Cognit 2020; 48:69-82. PMID: 31372846; DOI: 10.3758/s13421-019-00962-0.
Abstract
In this study we examined the different functions of text and pictures during text-picture integration in multimedia learning. In Study 1, 144 secondary school students (age = 11 to 14 years; 72 females, 72 males) received six text-picture units under two conditions. In the delayed-question condition, students first read the units without a specific question (no-question phase), to stimulate initial coherence-oriented mental model construction. Afterward the question was presented (question-answering phase), to stimulate task-adaptive mental model specification. In the preposed-question condition, students received a specific question from the beginning, stimulating both kinds of processing. Analyses of the participants' eye movement patterns confirmed the assumption that students allocated a higher percentage of available resources to text processing during the initial mental model construction than during adaptive model specification. Conversely, students allocated a higher percentage of available resources to picture processing during adaptive mental model specification than during the initial mental model construction. In Study 2 (N = 12, age = 12 to 16; seven females, five males), we ruled out that these findings were due to the effect of rereading, by implementing a no-question phase either once or twice. To sum up, texts seem to provide more explicit conceptual guidance in mental model construction than pictures do, whereas pictures support mental model adaptation more than text does, by providing flexible access to specific information for task-oriented updates.
11
Children's comprehension of narrative texts: Protagonists' goals and mental representation of coherence relations. Cognitive Development 2020. DOI: 10.1016/j.cogdev.2020.100966.
12
Rossi S, Rossi A, Dautenhahn K. The Secret Life of Robots: Perspectives and Challenges for Robot's Behaviours During Non-interactive Tasks. Int J Soc Robot 2020. DOI: 10.1007/s12369-020-00650-z.
13
Petrova TE, Riekhakaynen EI, Bratash VS. An Eye-Tracking Study of Sketch Processing: Evidence From Russian. Front Psychol 2020; 11:297. PMID: 32194475; PMCID: PMC7061926; DOI: 10.3389/fpsyg.2020.00297.
Abstract
This study investigates the online process of reading and analyzing sketchnotes (visual notes containing handwritten text and drawings) using Russian-language material. Using the eye-tracking method, we compared the processing of three types of sketchnotes ["path" (trajectory), linear, and radial] with the processing of a verbal text. Biographies of Russian writers were used as the material. In a preliminary experiment, we asked 89 college students to read the biographies and to evaluate each text or sketch using five scales (from −2 to +2). The best example of each of the three sketchnote formats and of a verbal text was chosen. In the main experiment, 21 secondary school students examined four different biographies in four different formats (three sketchnotes and a verbal text), answered factual and analytical questions about these texts, and rated the difficulty of each text. We measured the total dwell time, the total fixation count, and the average fixation duration for each stimulus, as well as for separate zones inside the sketches containing verbal and non-verbal information. Our results show that readers process the information better and faster when reading sketchnotes than when reading a verbal text. In the trajectory sketchnotes, the readers followed the order of elements intended by the author better than in the radial and linear sketchnotes. The analysis of participants' eye movements while processing the stimuli made it possible to propose several recommendations for creating effective sketchnotes.
Affiliation(s)
- Tatiana E Petrova
- Laboratory for Cognitive Studies, Saint-Petersburg State University, Saint Petersburg, Russia
- Elena I Riekhakaynen
- Department of General Linguistics, Saint-Petersburg State University, Saint Petersburg, Russia
- Valentina S Bratash
- Department of Education, Saint-Petersburg State University, Saint Petersburg, Russia
14
Cohn N, Magliano JP. Editors' Introduction and Review: Visual Narrative Research: An Emerging Field in Cognitive Science. Top Cogn Sci 2019; 12:197-223. PMID: 31865641; PMCID: PMC9328199; DOI: 10.1111/tops.12473.
Abstract
Drawn sequences of images are among our oldest records of human intelligence, appearing on cave paintings, wall carvings, and ancient pottery, and they pervade across cultures from instruction manuals to comics. They also appear prevalently as stimuli across Cognitive Science, for studies of temporal cognition, event structure, social cognition, discourse, and basic intelligence. Yet, despite this fundamental place in human expression and research on cognition, the study of visual narratives themselves has only recently gained traction in Cognitive Science. This work has suggested that visual narrative comprehension requires cultural exposure across a developmental trajectory and engages with domain‐general processing mechanisms shared by visual perception, attention, event cognition, and language, among others. Here, we review the relevance of such research for the broader Cognitive Science community, and make the case for why researchers should join the scholarship of this ubiquitous but understudied aspect of human expression.
Affiliation(s)
- Neil Cohn
- Department of Communication and Cognition, Tilburg School of Humanities and Digital Sciences, Tilburg Center for Cognition and Communication, Tilburg University
- Joseph P. Magliano
- Department of Learning Sciences at the College of Education & Human Development, Georgia State University
15
Laubrock J, Dunst A. Computational Approaches to Comics Analysis. Top Cogn Sci 2019; 12:274-310. PMID: 31705626; DOI: 10.1111/tops.12476.
Abstract
Comics are complex documents whose reception engages cognitive processes such as scene perception, language processing, and narrative understanding. Possibly because of their complexity, they have rarely been studied in cognitive science. Modeling the stimulus ideally requires a formal description, which can be provided by feature descriptors from computer vision and computational linguistics. With a focus on document analysis, here we review work on the computational modeling of comics. We argue that the development of modern feature descriptors based on deep learning techniques has made sufficient progress to allow the investigation of complex material such as comics for reception studies, including experimentation and computational modeling of cognitive processes.
Affiliation(s)
- Alexander Dunst
- Department of English and American Studies, University of Paderborn
16
Loschky LC, Larson AM, Smith TJ, Magliano JP. The Scene Perception & Event Comprehension Theory (SPECT) Applied to Visual Narratives. Top Cogn Sci 2019; 12:311-351. PMID: 31486277; PMCID: PMC9328418; DOI: 10.1111/tops.12455.
Abstract
Understanding how people comprehend visual narratives (including picture stories, comics, and film) requires the combination of traditionally separate theories that span the initial sensory and perceptual processing of complex visual scenes, the perception of events over time, and comprehension of narratives. Existing piecemeal approaches fail to capture the interplay between these levels of processing. Here, we propose the Scene Perception & Event Comprehension Theory (SPECT), as applied to visual narratives, which distinguishes between front‐end and back‐end cognitive processes. Front‐end processes occur during single eye fixations and are comprised of attentional selection and information extraction. Back‐end processes occur across multiple fixations and support the construction of event models, which reflect understanding of what is happening now in a narrative (stored in working memory) and over the course of the entire narrative (stored in long‐term episodic memory). We describe relationships between front‐ and back‐end processes, and medium‐specific differences that likely produce variation in front‐end and back‐end processes across media (e.g., picture stories vs. film). We describe several novel research questions derived from SPECT that we have explored. By addressing these questions, we provide greater insight into how attention, information extraction, and event model processes are dynamically coordinated to perceive and understand complex naturalistic visual events in narratives and the real world.
Affiliation(s)
- Tim J Smith
- Department of Psychological Sciences, Birkbeck, University of London
17
Cohn N, Engelen J, Schilperoord J. The grammar of emoji? Constraints on communicative pictorial sequencing. Cogn Res Princ Implic 2019; 4:33. PMID: 31471857; PMCID: PMC6717234; DOI: 10.1186/s41235-019-0177-0.
Abstract
Emoji have become a prominent part of interactive digital communication. Here, we ask two questions: does a grammatical system govern the way people use emoji, and how do emoji interact with the grammar of written text? We conducted two experiments that asked participants to have a digital conversation with each other using only emoji (Experiment 1) or to substitute at least one emoji for a word in their sentences (Experiment 2). First, we found that the emoji-only utterances of participants remained at simplistic levels of patterning, primarily appearing as one-unit utterances (as formulaic expressions or responsive emotions) or as linear sequencing (for example, repeating the same emoji or providing an unordered list of semantically related emoji). Emoji playing grammatical roles (i.e., 'parts of speech') were minimal and showed little consistency in 'word order'. Second, emoji were substituted more for nouns and adjectives than verbs, while also typically conveying nonredundant information to the sentences. These findings suggest that, while emoji may follow tendencies in their interactions with grammatical structure in multimodal text-emoji productions, they lack grammatical structure on their own.
Affiliation(s)
- Neil Cohn
- Department of Communication and Cognition, Tilburg University, P.O. Box 90153, 5000, LE, Tilburg, The Netherlands.
- Jan Engelen
- Department of Communication and Cognition, Tilburg University, P.O. Box 90153, 5000, LE, Tilburg, The Netherlands
- Joost Schilperoord
- Department of Communication and Cognition, Tilburg University, P.O. Box 90153, 5000, LE, Tilburg, The Netherlands
18
Kopatich RD, Feller DP, Kurby CA, Magliano JP. The role of character goals and changes in body position in the processing of events in visual narratives. Cogn Res Princ Implic 2019; 4:22. PMID: 31286278; PMCID: PMC6614232; DOI: 10.1186/s41235-019-0176-1.
Abstract
Background: A growing body of research is beginning to understand how people comprehend sequential visual narratives. However, previous work has used materials that primarily rely on visual information (i.e., they contain minimal language information). The current work seeks to address how visual and linguistic information streams are coordinated in sequential image comprehension. In Experiment 1, participants viewed picture stories and engaged in an event segmentation task. The extent to which critical points in the narrative depicted situational continuity of character goals and continuity in bodily position was manipulated. The likelihood of perceiving an event boundary and viewing latencies at critical locations were measured. Experiment 1 was replicated in the second experiment, without the segmentation task. That is, participants read the picture stories without deciding where the event boundaries occurred.

Results: Experiment 1 indicated that changes in character goals were associated with an increased likelihood of segmenting at the critical point, but changes in bodily position were not. A follow-up analysis, however, revealed that over the course of the entire story, changes in body position were a significant predictor of event segmentation. Viewing time, however, was affected by both goal and body position shifts. Experiment 2 corroborated the finding that viewing time was affected by changes in goals and body positions.

Conclusion: The current study shows that changes in body position influence a viewer's perception of event structure and event processing. This fits into a growing body of research that attempts to understand how consumers of multimodal media coordinate multiple information streams. The current study underscores the need for the systematic study of the visual, perceptual, and comprehension processes that occur during visual narrative understanding.
|
19
|
Cohn N. Your Brain on Comics: A Cognitive Model of Visual Narrative Comprehension. Top Cogn Sci 2019; 12:352-386. [PMID: 30963724 PMCID: PMC9328425 DOI: 10.1111/tops.12421] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2018] [Revised: 01/21/2019] [Accepted: 03/18/2019] [Indexed: 11/30/2022]
Abstract
The past decade has seen a rapid growth of cognitive and brain research focused on visual narratives like comics and picture stories. This paper will summarize and integrate this emerging literature into the Parallel Interfacing Narrative‐Semantics Model (PINS Model)—a theory of sequential image processing characterized by an interaction between two representational levels: semantics and narrative structure. Ongoing semantic processes build meaning into an evolving mental model of a visual discourse. Updating of spatial, referential, and event information then incurs costs when they are discontinuous with the growing context. In parallel, a narrative structure organizes semantic information into coherent sequences by assigning images to categorical roles, which are then embedded within a hierarchic constituent structure. Narrative constructional schemas allow for specific predictions of structural sequencing, independent of semantics. Together, these interacting levels of representation engage in an iterative process of retrieval of semantic and narrative information, prediction of upcoming information based on those assessments, and subsequent updating based on discontinuity. These core mechanisms are argued to be domain‐general—spanning across expressive systems—as suggested by similar electrophysiological brain responses (N400, P600, anterior negativities) generated in response to manipulation of sequential images, music, and language. Such similarities between visual narratives and other domains thus pose fundamental questions for the linguistic and cognitive sciences. Visual narratives like comics involve a range of complex cognitive operations in order to be understood. The Parallel Interfacing Narrative‐Semantics (PINS) Model integrates an emerging literature showing that comprehension of wordless image sequences balances two representational levels of semantic and narrative structure. The neurocognitive mechanisms that guide these processes are argued to overlap with other domains, such as language and music.
Affiliation(s)
- Neil Cohn
- Department of Communication and Cognition, Tilburg University
|
20
|
Cohn N. Visual narratives and the mind: Comprehension, cognition, and learning. Psychology of Learning and Motivation 2019. [DOI: 10.1016/bs.plm.2019.02.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
21
|
Weissman B, Tanner D. A strong wink between verbal and emoji-based irony: How the brain processes ironic emojis during language comprehension. PLoS One 2018; 13:e0201727. [PMID: 30110375 PMCID: PMC6093662 DOI: 10.1371/journal.pone.0201727] [Citation(s) in RCA: 46] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2017] [Accepted: 07/20/2018] [Indexed: 11/19/2022] Open
Abstract
Emojis are ideograms that are becoming ubiquitous in digital communication. However, no research has yet investigated how humans process the semantic and pragmatic content of emojis in real time. We investigated neural responses to irony-producing emojis, the question being whether emoji-generated irony is processed similarly to word-generated irony. Previous ERP studies have routinely found P600 effects to verbal irony. Our research sought to identify whether the same neural responses could also be elicited by emoji-induced irony. In three experiments, participants read sentences that ended in either a congruent, incongruent, or ironic (wink) emoji. Results across all three experiments demonstrated clear P600 effects, the amplitudes of which were correlated with participants' tendency to treat the emoji as a marker of irony, as indicated by behavioral comprehension question responses. These ironic wink emojis also elicited a strong P200 effect, also found in studies of verbal irony processing. Moreover, unexpected emojis (both mismatched and ironic emojis) also elicited late frontal positivities, which have been implicated in the processing of unpredicted words in context. These results are the first to identify how linguistically relevant ideograms are processed in real time at the neural level, and specifically draw parallels between the processing of word- and emoji-induced irony.
Affiliation(s)
- Benjamin Weissman
- Department of Linguistics, University of Illinois, Urbana, Illinois, United States of America
- Darren Tanner
- Department of Linguistics, University of Illinois, Urbana, Illinois, United States of America
|
22
|
Tanner D, Goldshtein M, Weissman B. Individual Differences in the Real-Time Neural Dynamics of Language Comprehension. Psychology of Learning and Motivation 2018. [DOI: 10.1016/bs.plm.2018.08.007] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
23
|
Manfredi M, Cohn N, Kutas M. When a hit sounds like a kiss: An electrophysiological exploration of semantic processing in visual narrative. Brain Lang 2017; 169:28-38. [PMID: 28242517 PMCID: PMC5465314 DOI: 10.1016/j.bandl.2017.02.001] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/26/2016] [Revised: 02/02/2017] [Accepted: 02/07/2017] [Indexed: 06/06/2023]
Abstract
Researchers have long questioned whether information presented through different sensory modalities involves distinct or shared semantic systems. We investigated uni-sensory cross-modal processing by recording event-related brain potentials to words replacing the climactic event in a visual narrative sequence (comics). We compared Onomatopoeic words, which phonetically imitate action sounds (Pow!), with Descriptive words, which describe an action (Punch!), that were (in)congruent within their sequence contexts. Across two experiments, larger N400s appeared to Anomalous Onomatopoeic or Descriptive critical panels than to their congruent counterparts, reflecting a difficulty in semantic access/retrieval. Also, Descriptive words evinced a greater late frontal positivity compared to Onomatopoeic words, suggesting that, though plausible, they may be less predictable/expected in visual narratives. Our results indicate that uni-sensory cross-modal integration of word/letter-symbol strings within visual narratives elicits ERP patterns typically observed for written sentence processing, thereby suggesting the engagement of similar domain-independent integration/interpretation mechanisms.
Affiliation(s)
- Mirella Manfredi
- Department of Psychology, University of Milano-Bicocca, Milan, Italy; Social and Cognitive Neuroscience Laboratory, Center for Biological Science and Health, Mackenzie Presbyterian University, São Paulo, Brazil.
- Neil Cohn
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA, USA; Tilburg Center for Cognition and Communication, Tilburg University, Tilburg, Netherlands
- Marta Kutas
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA, USA
|
24
|
Affiliation(s)
- Paul A. Aleixo
- Department of Psychology, Sociology and Politics, Sheffield Hallam University, Sheffield, UK
- Krystina Sumner
- Department of Psychology, Sociology and Politics, Sheffield Hallam University, Sheffield, UK
|