1
Andrade MÂ, Cipriano M, Raposo A. ObScene database: Semantic congruency norms for 898 pairs of object-scene pictures. Behav Res Methods 2024; 56:3058-3071. [PMID: 37488464] [PMCID: PMC11133025] [DOI: 10.3758/s13428-023-02181-7]
Abstract
Research on the interaction between object and scene processing has a long history in the fields of perception and visual memory. Most databases have established norms for pictures where the object is embedded in the scene. In this study, we provide a diverse and controlled stimulus set comprising real-world pictures of 375 objects (e.g., suitcase), 245 scenes (e.g., airport), and 898 object-scene pairs (e.g., suitcase-airport), with object and scene presented separately. Our goal was twofold. First, to create a database of object and scene pictures, normed for the same variables to have comparable measures for both types of pictures. Second, to acquire normative data for the semantic relationships between objects and scenes presented separately, which offers more flexibility in the use of the pictures and allows disentangling the processing of the object and its context (the scene). Across three experiments, participants evaluated each object or scene picture on name agreement, familiarity, and visual complexity, and rated object-scene pairs on semantic congruency. A total of 125 septuplets of one scene and six objects (three congruent, three incongruent), and 120 triplets of one object and two scenes (in congruent and incongruent pairings) were built. In future studies, these objects and scenes can be used separately or combined, while controlling for their key features. Additionally, as object-scene pairs received semantic congruency ratings along the entire scale, researchers may select among a wide range of congruency values. ObScene is a comprehensive and ecologically valid database, useful for psychology and neuroscience studies of visual object and scene processing.
Affiliation(s)
- Miguel Ângelo Andrade
- Research Center for Psychological Science, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal.
- Margarida Cipriano
- Research Center for Psychological Science, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
- Ana Raposo
- Research Center for Psychological Science, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
2
Cipriano M, Carneiro P, Albuquerque PB, Pinheiro AP, Lindner I. Stimuli in 3 Acts: A normative study on action-statements, action videos and object photos. Behav Res Methods 2023; 55:3504-3512. [PMID: 36131196] [DOI: 10.3758/s13428-022-01972-8]
Abstract
The study of action observation and imagery, separately and combined, is expanding in diverse research areas (e.g., sports psychology, neurosciences), making clear the need for action-related stimuli (i.e., action statements, videos, and pictures). Although several databases of object and action pictures are available, norms on action videos are scarce. In this study, we validated a set of 60 object-related everyday actions in three different formats: action-statements, and corresponding dynamic (action videos) and static (object photos) stimuli. In Study 1, ratings of imageability, image agreement, action familiarity, action frequency, and action valence were collected from 161 participants. In Study 2, a different sample of 115 participants rated object familiarity, object valence, and object-action prototypicality. Most actions were rated as easy to imagine, familiar, and neutral or positive in valence. However, there was variation in the frequency with which participants perform these actions on a daily basis. High agreement between participants' mental image and action videos was also found, showing that the videos depict a conventional way of performing the actions. Objects were considered familiar and positive in valence. High ratings on object-action prototypicality indicate that the actions correspond to prototypical actions for most objects. 3ActStimuli is a comprehensive set of stimuli that can be useful in several research areas, allowing the combined study of action observation and imagery.
Affiliation(s)
- Margarida Cipriano
- Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal.
- Paula Carneiro
- Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
- Ana P Pinheiro
- Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
- Isabel Lindner
- Universität Kassel, Institut für Psychologie, Kassel, Germany
3
You won't believe what this guy is doing with the potato: The ObjAct stimulus-set depicting human actions on congruent and incongruent objects. Behav Res Methods 2021; 53:1895-1909. [PMID: 33634424] [PMCID: PMC8516756] [DOI: 10.3758/s13428-021-01540-6]
Abstract
Perception famously involves both bottom-up and top-down processes. The latter are influenced by our previous knowledge and expectations about the world. In recent years, many studies have focused on the role of expectations in perception in general, and in object processing in particular. Yet studying this question is not an easy feat, requiring, among other things, the creation and validation of appropriate stimuli. Here, we introduce the ObjAct stimulus-set of free-to-use, highly controlled real-life scenes, on which critical objects are pasted. All scenes depict human agents performing an action with an object that is either congruent or incongruent with the action. The focus on human actions yields highly constraining contexts, strengthening congruency effects. The stimuli were analyzed for low-level properties, using the SHINE toolbox to control for luminance and contrast, and using a deep convolutional neural network to mimic V1 processing and potentially discover other low-level factors that might differ between congruent and incongruent scenes. Two online validation studies (N = 500) were also conducted to assess the congruency manipulation and collect additional ratings of our images (e.g., arousal, likeability, visual complexity). We also provide full descriptions of the online sources from which all images were taken, as well as verbal descriptions of their content. Taken together, this extensive validation and characterization procedure makes the ObjAct stimulus-set highly informative and easy to use for future researchers in multiple fields, from object and scene processing, through top-down contextual effects, to the study of actions.
5
Time-Frequency Analysis of Mu Rhythm Activity during Picture and Video Action Naming Tasks. Brain Sci 2017; 7(9):114. [PMID: 28878193] [PMCID: PMC5615255] [DOI: 10.3390/brainsci7090114]
Abstract
This study used whole-head 64-channel electroencephalography to measure changes in sensorimotor activity, as indexed by the mu rhythm, in neurologically healthy adults during subvocal confrontation naming tasks. Independent component analyses revealed sensorimotor mu component clusters in the right and left hemispheres. Event-related spectral perturbation analyses indicated significantly stronger patterns of mu rhythm activity (pFDR < 0.05) during the video condition as compared to the picture condition, specifically in the left hemisphere. Mu activity is hypothesized to reflect typical patterns of sensorimotor activation during action verb naming tasks. These results support further investigation into sensorimotor cortical activity during action verb naming in clinical populations.
6
The Novel Object and Unusual Name (NOUN) Database: A collection of novel images for use in experimental research. Behav Res Methods 2015; 48:1393-1409. [DOI: 10.3758/s13428-015-0647-3]
7
Buffat S, Chastres V, Bichot A, Rider D, Benmussa F, Lorenceau J. OB3D, a new set of 3D objects available for research: a web-based study. Front Psychol 2014; 5:1062. [PMID: 25339920] [PMCID: PMC4186308] [DOI: 10.3389/fpsyg.2014.01062]
Abstract
Studying object recognition is central to fundamental and clinical research on cognitive functions but suffers from the limitations of the available sets, which cannot always be modified and adapted to meet the specific goals of each study. We here present a new set of 3D scans of real objects available on-line as ASCII files, OB3D. These files are lists of dots, each defined by a triplet of spatial coordinates and its normal, which allows simple and highly versatile transformations and adaptations. We performed a web-based experiment to evaluate the minimal number of dots required for the denomination and categorization of these objects, thus providing a reference threshold. We further analyze several other variables derived from this data set, such as the correlations with object complexity. This new stimulus set, which was found to activate the Lateral Occipital Complex (LOC) in another study, may be of interest for studies of cognitive functions in healthy participants and patients with cognitive impairments, including visual perception, language, memory, etc.
Affiliation(s)
- Stéphane Buffat
- Département Action et Cognition en Situation Opérationnelle, Institut de Recherche Biomédicale des Armées, Brétigny, France; Cognition and Action Group, Cognac G, Service de Santé des Armées, Centre National de la Recherche Scientifique, Université Paris Descartes, Unités Mixtes de Recherche-MD 4 - 8257, Paris, France
- Véronique Chastres
- Département Action et Cognition en Situation Opérationnelle, Institut de Recherche Biomédicale des Armées, Brétigny, France
- Alain Bichot
- Département Action et Cognition en Situation Opérationnelle, Institut de Recherche Biomédicale des Armées, Brétigny, France
- Delphine Rider
- Centre National de la Recherche Scientifique, Unités Mixtes de Service Relais d'Information sur les Sciences de la Cognition 3332, Paris, France
- Frédéric Benmussa
- Laboratoire des Systèmes Perceptifs, Département d'études Cognitives, Unités Mixtes de Recherche 8248, Centre National de la Recherche Scientifique, École Normale Supérieure, Paris, France
- Jean Lorenceau
- Centre National de la Recherche Scientifique, Unités Mixtes de Service Relais d'Information sur les Sciences de la Cognition 3332, Paris, France; Laboratoire des Systèmes Perceptifs, Département d'études Cognitives, Unités Mixtes de Recherche 8248, Centre National de la Recherche Scientifique, École Normale Supérieure, Paris, France
8
Abstract
We present a database of high-definition (HD) videos for the study of traits inferred from whole-body actions. Twenty-nine actors (19 female) were filmed performing different actions (walking, picking up a box, putting down a box, jumping, sitting down, and standing and acting) while conveying different traits, including four emotions (anger, fear, happiness, sadness), untrustworthiness, and neutral, where no specific trait was conveyed. For the actions conveying the four emotions and untrustworthiness, the actions were filmed multiple times, with the actor conveying the traits with different levels of intensity. In total, we made 2,783 action videos (in both two-dimensional and three-dimensional format), each lasting 7 s with a frame rate of 50 fps. All videos were filmed in a green-screen studio in order to isolate the action information from all contextual detail and to provide a flexible stimulus set for future use. In order to validate the traits conveyed by each action, we asked participants to rate each of the actions corresponding to the trait that the actor portrayed in the two-dimensional videos. To provide a useful database of stimuli of multiple actions conveying multiple traits, each video name contains information on the gender of the actor, the action executed, the trait conveyed, and the rating of its perceived intensity. All videos can be downloaded free at the following address: http://www-users.york.ac.uk/~neb506/databases.html. We discuss potential uses for the database in the analysis of the perception of whole-body actions.
9
Brielmann AA, Stolarova M. A new standardized stimulus set for studying need-of-help recognition (NeoHelp). PLoS One 2014; 9:e84373. [PMID: 24409294] [PMCID: PMC3883661] [DOI: 10.1371/journal.pone.0084373]
Abstract
This article presents the NeoHelp visual stimulus set created to facilitate investigation of need-of-help recognition with clinical and normative populations of different ages, including children. Need-of-help recognition is one aspect of socioemotional development and a necessary precondition for active helping. The NeoHelp consists of picture pairs showing everyday situations: The first item in a pair depicts a child needing help to achieve a goal; the second one shows the child achieving the goal. Pictures of birds in analogue situations are also included. These control stimuli enable implementation of a human-animal categorization task which serves to separate behavioral correlates specific to need-of-help recognition from general differentiation processes. It is a concern in experimental research to ensure that results do not relate to systematic perceptual differences when comparing responses to categories of different content. Therefore, we not only derived the NeoHelp-pictures within a pair from one another by altering as little as possible, but also assessed their perceptual similarity empirically. We show that NeoHelp-picture pairs are very similar regarding low-level perceptual properties across content categories. We obtained data from 60 children in a broad age range (4 to 13 years) for three different paradigms, in order to assess whether the intended categorization and differentiation could be observed reliably in a normative population. Our results demonstrate that children can differentiate the pictures' content regarding both need-of-help category and species, as intended, in spite of the high perceptual similarities. We provide standard response characteristics (hit rates and response times) that are useful for future selection of stimuli and comparison of results across studies. We show that task requirements coherently determine which aspects of the pictures influence response characteristics. Thus, we present NeoHelp, the first open-access standardized visual stimulus set for investigation of need-of-help recognition, and invite researchers to use and extend it.
Affiliation(s)
- Aenne A. Brielmann
- Department of Psychology and Zukunftskolleg, University of Konstanz, Konstanz, Germany
- Margarita Stolarova
- Department of Psychology and Zukunftskolleg, University of Konstanz, Konstanz, Germany
- Faculty of Society and Economics, Rhine-Waal University of Applied Sciences, Kleve, Germany
10
Umla-Runge K, Fu X, Wang L, Zimmer HD. Culture-specific familiarity equally mediates action representations across cultures. Cogn Neurosci 2013; 5:26-35. [PMID: 24168196] [DOI: 10.1080/17588928.2013.834318]
Abstract
Previous studies have shown that we need to distinguish between means and end information about actions. It is unclear how these two subtypes of action information relate to each other, with some theoretical accounts postulating the superiority of end over means information and others linking separate means and end routes of processing to actions of differential meaningfulness. Action meaningfulness or familiarity differs between cultures. In a cross-cultural setting, we investigated how action familiarity influences recognition memory for means and end information. Object-directed actions of differential familiarity were presented to Chinese and German participants. Action familiarity modulated the representation of means and end information in both cultures in the same way, although the effects were based on different stimulus sets. Our results suggest that, in the representation of actions in memory, end information is superordinate to means information. This effect is independent of culture, whereas action familiarity is not.
Affiliation(s)
- Katja Umla-Runge
- Brain & Cognition Unit, Department of Psychology, Saarland University, Saarbruecken, Germany