1
Kamohara C, Nakajima M, Nozaki Y, Ieda T, Kawamura K, Horikoshi K, Miyahara R, Akiba C, Ogino I, Karagiozov KL, Miyajima M, Kondo A, Sakamoto M. A new test for evaluation of marginal cognitive function deficits in idiopathic normal pressure hydrocephalus through expressing texture recognition by sound symbolic words. Front Aging Neurosci 2024; 16:1456242. PMID: 39360232; PMCID: PMC11445636; DOI: 10.3389/fnagi.2024.1456242
Abstract
Introduction: The number of patients with dementia is increasing as populations age, and preclinical detection is essential for access to adequate treatment. In previous studies, patients with dementia showed difficulties in texture recognition. Onomatopoeia, or sound symbolic words (SSW), are intuitively associated with texture impressions, are less likely to be affected by aphasia, and make descriptions of material perception easy to obtain. In this study, we aimed to create a test of texture recognition ability expressed through SSW to detect mild cognitive disorders.
Methods: The sound symbolic word texture recognition test (SSWTRT) consists of 12 close-up photographs of various materials; for each image, participants chose the SSW, out of 8 choices in Japanese, that best described the surface texture. All 102 participants, seen at Juntendo University Hospital from January to August 2023, had a diagnosis of possible iNPH (mean age 77.9, SD 6.7). Answers were scored on a comprehensive scale of 0 to 1. Neuropsychological assessments included the MMSE, FAB, Rey Auditory Verbal Learning Test (RAVLT), and the Pegboard and Stroop Tests from the EU-iNPH Grading Scale (GS). In study 1, the correlations between the SSWTRT and the neuropsychological tests were analyzed. In study 2, participants were divided into a Normal Cognition group (Group A, n = 37; MMSE 28 or above) and a Mild Cognitive Impairment group (Group B, n = 50; MMSE 22 to 27), and the test's ability to discriminate between the groups was analyzed.
Results: In study 1, the total SSWTRT score correlated moderately with the neuropsychological test results. In study 2, SSWTRT scores differed significantly between groups A and B, and ROC analysis showed that the SSWTRT could distinguish the normal from the mildly impaired cognition group.
Conclusion: The developed SSWTRT reflects the results of neuropsychological assessments of cognitive deterioration and was able to detect early cognitive deficits. The test relates not only to visual perception but is also likely associated with verbal fluency and memory, which are frontal lobe functions.
Affiliation(s)
- Chihiro Kamohara
- Research Institute for Diseases of Old Age, Juntendo University School of Medicine, Tokyo, Japan
- Department of Neurosurgery, Juntendo University School of Medicine, Tokyo, Japan
- Madoka Nakajima
- Department of Neurosurgery, Juntendo University School of Medicine, Tokyo, Japan
- Yuji Nozaki
- Department of Informatics, Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Taiki Ieda
- Department of Informatics, Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Kaito Kawamura
- Department of Neurosurgery, Juntendo University School of Medicine, Tokyo, Japan
- Department of Neurosurgery, Saiseikai Kawaguchi General Hospital, Saitama, Japan
- Kou Horikoshi
- Department of Neurosurgery, Juntendo University School of Medicine, Tokyo, Japan
- Ryo Miyahara
- Department of Neurosurgery, Juntendo University School of Medicine, Tokyo, Japan
- Chihiro Akiba
- Department of Neurosurgery, Juntendo Koto Geriatric Medical Center, Tokyo, Japan
- Ikuko Ogino
- Department of Neurosurgery, Juntendo University School of Medicine, Tokyo, Japan
- Masakazu Miyajima
- Department of Neurosurgery, Juntendo Koto Geriatric Medical Center, Tokyo, Japan
- Akihide Kondo
- Department of Neurosurgery, Juntendo University School of Medicine, Tokyo, Japan
- Maki Sakamoto
- Department of Informatics, Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
2
Tanaka M, Aketagawa K, Horiuchi T. Impact of Display Sub-Pixel Arrays on Perceived Gloss and Transparency. J Imaging 2024; 10:221. PMID: 39330441; PMCID: PMC11432855; DOI: 10.3390/jimaging10090221
Abstract
In recent years, improvements in display image quality have made it easier to perceive rich object information from images, such as gloss and transparency, known in Japanese as shitsukan. Do the widely varying specifications of displays affect this appearance? Clarifying how differences in pixel structure affect shitsukan perception, a question that has not been fully addressed, is necessary to realize shitsukan management across displays with different hardware structures. In this study, we experimentally investigated the effects of display pixel arrays on the perception of glossiness and transparency. In a visual evaluation experiment, we investigated the effects of three types of sub-pixel arrays (RGB, RGBW, and PenTile) on the perception of glossiness and transparency using natural images. The results confirmed that sub-pixel arrays affect the apparent glossiness and transparency, with a general relationship of RGB > PenTile > RGBW for glossiness and RGB > RGBW > PenTile for transparency; however, detailed analysis, such as cluster analysis, confirmed that the relative superiority of these sub-pixel arrays may vary with the observer and the image content.
Affiliation(s)
- Midori Tanaka
- Graduate School of Informatics, Chiba University, Yayoi-cho 1-33, Inage-ku, Chiba 263-8522, Japan
- Graduate School of Science and Engineering, Chiba University, Yayoi-cho 1-33, Inage-ku, Chiba 263-8522, Japan
- Kosei Aketagawa
- Graduate School of Science and Engineering, Chiba University, Yayoi-cho 1-33, Inage-ku, Chiba 263-8522, Japan
- Takahiko Horiuchi
- Graduate School of Informatics, Chiba University, Yayoi-cho 1-33, Inage-ku, Chiba 263-8522, Japan
- Graduate School of Science and Engineering, Chiba University, Yayoi-cho 1-33, Inage-ku, Chiba 263-8522, Japan
3
Filip J, Lukavský J, Dechterenko F, Schmidt F, Fleming RW. Perceptual dimensions of wood materials. J Vis 2024; 24:12. PMID: 38787569; PMCID: PMC11129719; DOI: 10.1167/jov.24.5.12
Abstract
Materials exhibit an extraordinary range of visual appearances. Characterizing and quantifying appearance is important not only for basic research on perceptual mechanisms but also for computer graphics and a wide range of industrial applications. Although methods exist for capturing and representing the optical properties of materials and how they vary across surfaces (Haindl & Filip, 2013), the representations are typically very high-dimensional, and how they relate to subjective perceptual impressions of material appearance remains poorly understood. Here, we used a data-driven approach to characterize the perceived appearance of 30 samples of wood veneer using a "visual fingerprint" that describes each sample as a multidimensional feature vector, with each dimension capturing a different aspect of the appearance. Fifty-six crowd-sourced participants viewed triplets of movies depicting different wood samples as the samples rotated. Their task was to report which of the two match samples was subjectively most similar to the test sample. In another online experiment, 45 participants rated 10 wood-related appearance characteristics for each of the samples. The results reveal a consistent embedding of the samples across both experiments and a set of nine perceptual dimensions capturing aspects including the roughness, directionality, and spatial scale of the surface patterns. We also showed that a weighted linear combination of 11 image statistics, inspired by the rated characteristics, predicts the perceptual dimensions well.
Affiliation(s)
- Jiří Filip
- The Czech Academy of Sciences, Institute of Information Theory and Automation, Prague, Czech Republic
- Jiří Lukavský
- The Czech Academy of Sciences, Institute of Psychology, Prague, Czech Republic
- Filip Dechterenko
- The Czech Academy of Sciences, Institute of Psychology, Prague, Czech Republic
- Filipp Schmidt
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior, Universities of Marburg, Giessen, and Darmstadt, Germany
- Roland W Fleming
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior, Universities of Marburg, Giessen, and Darmstadt, Germany
4
Tominaga S, Horiuchi T. High Dynamic Range Image Reconstruction from Saturated Images of Metallic Objects. J Imaging 2024; 10:92. PMID: 38667990; PMCID: PMC11051178; DOI: 10.3390/jimaging10040092
Abstract
This study considers a method for reconstructing a high dynamic range (HDR) original image from a single saturated low dynamic range (LDR) image of metallic objects. A deep neural network approach was adopted for the direct mapping of an 8-bit LDR image to HDR. An HDR image database was first constructed using a large number of various metallic objects with different shapes. Each captured HDR image was clipped to create a set of 8-bit LDR images. All pairs of HDR and LDR images were used to train and test the network. Subsequently, a convolutional neural network (CNN) was designed in the form of a deep U-Net-like architecture. The network consisted of an encoder, a decoder, and a skip connection to maintain high image resolution. The CNN algorithm was constructed using the learning functions in MATLAB. The entire network consisted of 32 layers and 85,900 learnable parameters. The performance of the proposed method was examined in experiments using a test image set. The proposed method was also compared with other methods and confirmed to be significantly superior in terms of reconstruction accuracy, histogram fitting, and psychological evaluation.
Affiliation(s)
- Shoji Tominaga
- Department of Computer Science, Norwegian University of Science and Technology, 2815 Gjøvik, Norway
- Department of Business and Informatics, Nagano University, Ueda 386-0032, Japan
- Takahiko Horiuchi
- Graduate School of Engineering, Chiba University, Chiba 263-8522, Japan
5
Keshvari S, Wijntjes MWA. Peripheral material perception. J Vis 2024; 24:13. PMID: 38625088; PMCID: PMC11033595; DOI: 10.1167/jov.24.4.13
Abstract
Humans can rapidly identify materials, such as wood or leather, even within a complex visual scene. Given a single image, one can easily identify the underlying "stuff," even though a given material can have highly variable appearance; fabric comes in unlimited variations of shape, pattern, color, and smoothness, yet we have little trouble categorizing it as fabric. What visual cues do we use to determine material identity? Prior research suggests that simple "texture" features of an image, such as the power spectrum, capture information about material properties and identity. Few studies, however, have tested richer and biologically motivated models of texture. We compared baseline material classification performance to performance with synthetic textures generated from the Portilla-Simoncelli model and several common image degradations. The textures retain statistical information but are otherwise random. We found that performance with textures and most degradations was well below baseline, suggesting insufficient information to support foveal material perception. Interestingly, modern research suggests that peripheral vision might use a statistical, texture-like representation. In a second set of experiments, we found that peripheral performance is more closely predicted by texture and other image degradations. These findings delineate the nature of peripheral material classification.
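The abstract above contrasts baseline material classification with classification from textures that preserve only summary statistics. As a minimal illustrative sketch of that idea (simple marginal moments of pixel intensities, a toy stand-in and not the much richer joint wavelet statistics of the Portilla-Simoncelli model), two images with very different appearance can be separated by their statistics alone:

```python
# Toy "texture summary statistics": marginal moments of pixel intensities.
# These are a minimal stand-in used only to illustrate describing a texture
# by statistics rather than by its pixels; the Portilla-Simoncelli model
# uses a far richer set of joint wavelet statistics.
import random

def summary_stats(pixels):
    """Return (mean, variance, skewness, kurtosis) of a flat pixel list."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    sd = var ** 0.5
    skew = sum((p - mean) ** 3 for p in pixels) / (n * sd ** 3)
    kurt = sum((p - mean) ** 4 for p in pixels) / (n * var ** 2)
    return mean, var, skew, kurt

# Two synthetic "textures" with the same mean intensity but very different
# structure: a smooth Gaussian-noise patch and a binary speckle patch.
random.seed(1)
smooth = [random.gauss(0.5, 0.05) for _ in range(10_000)]
speckled = [1.0 if random.random() < 0.5 else 0.0 for _ in range(10_000)]

s_smooth = summary_stats(smooth)
s_speckled = summary_stats(speckled)
```

Although both patches share the same mean, the variance and kurtosis separate them cleanly, which is the sense in which summary statistics carry material-relevant information even after spatial structure is discarded.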
Affiliation(s)
- Maarten W A Wijntjes
- Perceptual Intelligence Lab, Industrial Design Engineering, Delft University of Technology, Delft, Netherlands
6
Tsuda H, Kawabata H. materialmodifier: An R package of photo editing effects for material perception research. Behav Res Methods 2024; 56:2657-2674. PMID: 37162649; PMCID: PMC10991072; DOI: 10.3758/s13428-023-02116-2
Abstract
In this paper, we introduce an R package that performs automated photo editing effects. Specifically, it is an R implementation of an image-processing algorithm proposed by Boyadzhiev et al. (2015). The software allows the user to manipulate the appearance of objects in photographs, such as emphasizing facial blemishes and wrinkles, smoothing the skin, or enhancing the gloss of fruit. It provides a reproducible method to quantitatively control specific surface properties of objects (e.g., gloss and roughness), which is useful for researchers interested in topics related to material perception, from basic mechanisms of perception to the aesthetic evaluation of faces and objects. We describe the functionality, usage, and algorithm of the method, report on the findings of a behavioral evaluation experiment, and discuss its usefulness and limitations for psychological research. The package can be installed via CRAN, and documentation and source code are available at https://github.com/tsuda16k/materialmodifier.
Affiliation(s)
- Hiroyuki Tsuda
- Faculty of Psychology, Doshisha University, Kyoto, Japan
- Hideaki Kawabata
- Department of Psychology, Faculty of Letters, Keio University, Tokyo, Japan
7
Strappini F, Fagioli S, Mastandrea S, Scorolli C. Sustainable materials: a linking bridge between material perception, affordance, and aesthetics. Front Psychol 2024; 14:1307467. PMID: 38259544; PMCID: PMC10800687; DOI: 10.3389/fpsyg.2023.1307467
Abstract
The perception of material properties, which refers to the way in which individuals perceive and interpret materials through their sensory experiences, plays a crucial role in our interaction with the environment. Affordance, on the other hand, refers to the potential actions and uses that materials offer to users. In turn, the perception of affordances is modulated by the aesthetic appreciation that individuals experience when interacting with the environment. Although material perception, affordances, and aesthetic appreciation are recognized as essential to fostering sustainability in society, few studies have systematically investigated these factors and their reciprocal influences. This scarcity is partly due to the complexity of combining topics that span disciplines such as psychophysics, neurophysiology, affective science, aesthetics, and the social and environmental sciences. Outlining the main findings across disciplines, this review highlights the pivotal role of material perception in shaping sustainable behaviors. It establishes connections between material perception, affordance, aesthetics, and sustainability, emphasizing the need for interdisciplinary research and integrated approaches in environmental psychology. This integration is essential because it can provide insight into how to foster sustainable and durable change.
Affiliation(s)
- Francesca Strappini
- Department of Philosophy and Communication, University of Bologna, Bologna, Italy
- Claudia Scorolli
- Department of Philosophy and Communication, University of Bologna, Bologna, Italy
8
Balas B, Greene MR. The role of texture summary statistics in material recognition from drawings and photographs. J Vis 2023; 23:3. PMID: 38064227; PMCID: PMC10709799; DOI: 10.1167/jov.23.14.3
Abstract
Material depictions in artwork are useful tools for revealing image features that support material categorization. For example, artistic recipes for drawing specific materials make explicit the critical information leading to recognizable material properties (Di Cicco, Wijntjes, & Pont, 2020), and investigating the recognizability of material renderings as a function of their visual features supports conclusions about the vocabulary of material perception. Here, we examined how the recognition of materials from photographs and drawings was affected by the application of the Portilla-Simoncelli texture synthesis model. This manipulation allowed us to examine how categorization may be affected differently across materials and image formats when only summary statistic information about appearance was retained. Further, we compared human performance to the categorization accuracy of a pretrained deep convolutional neural network to determine whether observers' performance was reflected in the network. Although we found some similarities between human and network performance for photographic images, the results obtained from drawings differed substantially. Our results demonstrate that texture statistics play a variable role in material categorization across rendering formats and material categories, and that human perception of material drawings is not effectively captured by deep convolutional neural networks trained for object recognition.
Affiliation(s)
- Benjamin Balas
- Psychology Department, North Dakota State University, Fargo, ND, USA
- Michelle R Greene
- Psychology Department, Barnard College, Columbia University, New York, NY, USA
9
Cheng FL, Horikawa T, Majima K, Tanaka M, Abdelhack M, Aoki SC, Hirano J, Kamitani Y. Reconstructing visual illusory experiences from human brain activity. Sci Adv 2023; 9:eadj3906. PMID: 37967184; PMCID: PMC10651116; DOI: 10.1126/sciadv.adj3906
Abstract
Visual illusions provide valuable insights into the brain's interpretation of the world given sensory inputs. However, the precise manner in which brain activity translates into illusory experiences remains largely unknown. Here, we leverage a brain decoding technique combined with deep neural network (DNN) representations to reconstruct illusory percepts as images from brain activity. The reconstruction model was trained on natural images to establish a link between brain activity and perceptual features and then tested on two types of illusions: illusory lines and neon color spreading. Reconstructions revealed lines and colors consistent with illusory experiences, which varied across the source visual cortical areas. This framework offers a way to materialize subjective experiences, shedding light on the brain's internal representations of the world.
Affiliation(s)
- Fan L. Cheng
- Graduate School of Informatics, Kyoto University, Sakyo-ku, Kyoto 606-8501, Japan
- ATR Computational Neuroscience Laboratories, Soraku, Kyoto 619-0288, Japan
- Tomoyasu Horikawa
- ATR Computational Neuroscience Laboratories, Soraku, Kyoto 619-0288, Japan
- Kei Majima
- Graduate School of Informatics, Kyoto University, Sakyo-ku, Kyoto 606-8501, Japan
- Misato Tanaka
- Graduate School of Informatics, Kyoto University, Sakyo-ku, Kyoto 606-8501, Japan
- Mohamed Abdelhack
- Graduate School of Informatics, Kyoto University, Sakyo-ku, Kyoto 606-8501, Japan
- Shuntaro C. Aoki
- Graduate School of Informatics, Kyoto University, Sakyo-ku, Kyoto 606-8501, Japan
- Jin Hirano
- Graduate School of Informatics, Kyoto University, Sakyo-ku, Kyoto 606-8501, Japan
- Yukiyasu Kamitani
- Graduate School of Informatics, Kyoto University, Sakyo-ku, Kyoto 606-8501, Japan
- ATR Computational Neuroscience Laboratories, Soraku, Kyoto 619-0288, Japan
10
Schmid AC, Barla P, Doerschner K. Material category of visual objects computed from specular image structure. Nat Hum Behav 2023. PMID: 37386108; PMCID: PMC10365995; DOI: 10.1038/s41562-023-01601-0
Abstract
Recognizing materials and their properties visually is vital for successful interactions with our environment, from avoiding slippery floors to handling fragile objects. Yet there is no simple mapping of retinal image intensities to physical properties. Here, we investigated what image information drives material perception by collecting human psychophysical judgements about complex glossy objects. Variations in specular image structure-produced either by manipulating reflectance properties or visual features directly-caused categorical shifts in material appearance, suggesting that specular reflections provide diagnostic information about a wide range of material classes. Perceived material category appeared to mediate cues for surface gloss, providing evidence against a purely feedforward view of neural processing. Our results suggest that the image structure that triggers our perception of surface gloss plays a direct role in visual categorization, and that the perception and neural processing of stimulus properties should be studied in the context of recognition, not in isolation.
Affiliation(s)
- Alexandra C Schmid
- Department of Psychology, Justus Liebig University Giessen, Giessen, Germany
- Katja Doerschner
- Department of Psychology, Justus Liebig University Giessen, Giessen, Germany
11
Kushida T, Tahara K, Chiba H, Kagawa Y, Tanaka K, Funatomi T, Mukaigawa Y. Descattering for transmissive inspection in production line using slanted linear image sensors. Opt Express 2022; 30:38016-38026. PMID: 36258376; DOI: 10.1364/oe.469424
Abstract
We propose a descattering method that can be easily applied to food production lines. The system consists of several sets of linear image sensors and linear light sources slanted at different angles. The images captured by these sensors are partially clear along the direction perpendicular to the sensors. We computationally integrate these images in the frequency domain into a single clear image. The effectiveness of the proposed method is assessed in simulation and real-world experiments. The results show that our method recovers clear images, and we demonstrate its applicability to a real production line using a prototype system.
12
Functional recursion of orientation cues in figure-ground separation. Vision Res 2022; 197:108047. PMID: 35691090; PMCID: PMC9262819; DOI: 10.1016/j.visres.2022.108047
Abstract
Visual texture is an important cue to figure-ground organization. While processing of texture differences is a prerequisite for using this cue to extract figure-ground organization, the two stages are distinct processes. One potential indicator of this distinction is the possibility that texture statistics play a different role in the figure than in the ground. To determine whether this is the case, we probed figure-ground processing with a family of local image statistics that specified textures varying in the strength and spatial scale of structure, and in the extent to which features are oriented. For image statistics that generated approximately isotropic textures, the threshold for identification of figure-ground structure was determined by the difference in correlation strength between figure and ground, independent of whether the correlations were present in the figure, the ground, or both. However, for image statistics with strong orientation content, thresholds were up to two times higher for correlations in the ground than in the figure. This held equally for texture-defined objects with convex or concave boundaries, indicating that these threshold differences are driven by border ownership, not boundary shape. Similar threshold differences were found for presentation times ranging from 125 to 500 ms. These findings identify a qualitative difference in how texture is used for figure-ground analysis versus texture discrimination. Additionally, they reveal a functional recursion: texture differences are needed to identify tentative boundaries and the consequent organization of the scene into figure and ground, but scene organization then modifies sensitivity to texture differences according to the figure-ground assignment.
13
Chung YM, Day S, Hu CS. A multi-parameter persistence framework for mathematical morphology. Sci Rep 2022; 12:6427. PMID: 35440703; PMCID: PMC9019063; DOI: 10.1038/s41598-022-09464-7
Abstract
The field of mathematical morphology offers well-studied techniques for image processing and is applicable to studies ranging from materials science to ecological pattern formation. In this work, we view morphological operations through the lens of persistent homology, a tool at the heart of topological data analysis. We demonstrate that morphological operations naturally form a multiparameter filtration and that persistent homology can then be used to extract information about both topology and geometry in images, as well as to automate methods for optimizing the study and rendering of structure in images. For illustration, we develop an automated approach that utilizes this framework to denoise binary, grayscale, and color images corrupted with salt-and-pepper and larger spatial scale noise. We compare our example unsupervised denoising approach with state-of-the-art supervised deep learning methods and show that our results are comparable.
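The abstract above frames denoising in terms of morphological operations. As a minimal illustrative sketch of the classical baseline such work builds on (plain opening-then-closing with a 4-connected structuring element, not the authors' multiparameter persistence framework), the following pure-Python example removes salt-and-pepper noise from a binary image:

```python
# Sketch of binary morphological denoising: opening (erode, then dilate)
# removes isolated "salt" pixels, and closing (dilate, then erode) fills
# isolated "pepper" holes. This is a classical baseline for illustration,
# not the paper's multi-parameter persistence method.
import random

CROSS = ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))  # 4-connected element

def erode(img):
    h, w = len(img), len(img[0])
    return [[all(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                 for dy, dx in CROSS)
             for x in range(w)] for y in range(h)]

def dilate(img):
    h, w = len(img), len(img[0])
    return [[any(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                 for dy, dx in CROSS)
             for x in range(w)] for y in range(h)]

def denoise(img):
    opened = dilate(erode(img))   # opening removes isolated salt
    return erode(dilate(opened))  # closing fills isolated pepper

# Binary test image: a filled square, with 5% of pixels flipped at random.
random.seed(0)
size = 64
clean = [[16 <= y < 48 and 16 <= x < 48 for x in range(size)] for y in range(size)]
noisy = [[(not p) if random.random() < 0.05 else p for p in row] for row in clean]
restored = denoise(noisy)

def errors(a, b):
    """Count pixels where two binary images disagree."""
    return sum(pa != pb for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))
```

The trade-off visible even in this toy version (opening also clips fine true structure, such as the square's corners) is one motivation for the more principled, topology-aware filtration the paper proposes.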
Affiliation(s)
- Yu-Min Chung
- Eli Lilly and Company, Indianapolis, IN, 46225, USA
- Sarah Day
- Department of Mathematics, William & Mary, Williamsburg, VA, 23185, USA
- Chuan-Shen Hu
- Department of Mathematics, National Taiwan Normal University, Taipei City, 106, Taiwan
14
Cheeseman JR, Fleming RW, Schmidt F. Scale ambiguities in material recognition. iScience 2022; 25:103970. PMID: 35281732; PMCID: PMC8914553; DOI: 10.1016/j.isci.2022.103970
Abstract
Many natural materials have complex, multi-scale structures. Consequently, the inferred identity of a surface can vary with the assumed spatial scale of the scene: a plowed field seen from afar can resemble corduroy seen up close. We investigated this 'material-scale ambiguity' using 87 photographs of diverse materials (e.g., water, sand, stone, metal, and wood). Across two experiments, separate groups of participants (N = 72 adults) provided judgements of the material category depicted in each image, either with or without manipulations of apparent distance (by verbal instructions, or adding objects of familiar size). Our results demonstrate that these manipulations can cause identical images to be assigned to completely different material categories, depending on the assumed scale. Under challenging conditions, therefore, the categorization of materials is susceptible to simple manipulations of apparent distance, revealing a striking example of top-down effects in the interpretation of image features.
Affiliation(s)
- Jacob R. Cheeseman
- Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Str. 10F, 35394 Giessen, Germany
- Roland W. Fleming
- Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Str. 10F, 35394 Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), Hans-Meerwein-Strasse 6, 35032 Marburg, Germany
- Filipp Schmidt
- Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Str. 10F, 35394 Giessen, Germany
15
Tamura H, Prokott KE, Fleming RW. Distinguishing mirror from glass: A "big data" approach to material perception. J Vis 2022; 22:4. PMID: 35266961; PMCID: PMC8934559; DOI: 10.1167/jov.22.4.4
Abstract
Distinguishing mirror from glass is a challenging visual inference, because both materials derive their appearance from their surroundings, yet we rarely experience difficulties in telling them apart. Very few studies have investigated how the visual system distinguishes reflections from refractions and to date, there is no image-computable model that emulates human judgments. Here we sought to develop a deep neural network that reproduces the patterns of visual judgments human observers make. To do this, we trained thousands of convolutional neural networks on more than 750,000 simulated mirror and glass objects, and compared their performance with human judgments, as well as alternative classifiers based on "hand-engineered" image features. For randomly chosen images, all classifiers and humans performed with high accuracy, and therefore correlated highly with one another. However, to assess how similar models are to humans, it is not sufficient to compare accuracy or correlation on random images. A good model should also predict the characteristic errors that humans make. We, therefore, painstakingly assembled a diagnostic image set for which humans make systematic errors, allowing us to isolate signatures of human-like performance. A large-scale, systematic search through feedforward neural architectures revealed that relatively shallow (three-layer) networks predicted human judgments better than any other models we tested. This is the first image-computable model that emulates human errors and succeeds in distinguishing mirror from glass, and hints that mid-level visual processing might be particularly important for the task.
Affiliation(s)
- Hideki Tamura
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Aichi, Japan
- Konrad Eugen Prokott
- Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Roland W Fleming
- Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
16
Liao C, Sawayama M, Xiao B. Crystal or jelly? Effect of color on the perception of translucent materials with photographs of real-world objects. J Vis 2022; 22:6. [PMID: 35138326 PMCID: PMC8842421 DOI: 10.1167/jov.22.2.6]
Abstract
Translucent materials are ubiquitous in nature (e.g. teeth, food, and wax), but our understanding of translucency perception is limited. Previous work in translucency perception has mainly used monochromatic rendered images as stimuli, which are restricted by their diversity and realism. Here, we measure translucency perception with photographs of real-world objects. Specifically, we use three behavior tasks: binary classification of "translucent" versus "opaque," semantic attribute rating of perceptual qualities (see-throughness, glossiness, softness, glow, and density), and material categorization. Two different groups of observers finish the three tasks with color or grayscale images. We find that observers' agreements depend on the physical material properties of the objects such that translucent materials generate more interobserver disagreements. Further, there are more disagreements among observers in the grayscale condition than in the color condition. We also discover that converting images to grayscale substantially affects the distributions of attribute ratings for some images. Furthermore, ratings of see-throughness, glossiness, and glow could predict individual observers' binary classification of images in both grayscale and color conditions. Last, converting images to grayscale alters the perceived material categories for some images such that observers tend to misjudge images of food as non-food and vice versa. Our results demonstrate that color is informative about material property estimation and recognition. Meanwhile, our analysis shows that mid-level semantic estimation of material attributes might be closely related to high-level material recognition. We also discuss individual differences in our results and highlight the importance of such consideration in material perception.
Affiliation(s)
- Chenxi Liao
- Department of Neuroscience, American University, Washington, DC, USA
- Bei Xiao
- Department of Computer Science, American University, Washington, DC, USA
17
Dudschig C, Kaup B, Leuthold H, Mackenzie IG. Conceptual representation of real-world surface material: Early integration with linguistic-labels indicated in the N400-component. Psychophysiology 2021; 58:e13916. [PMID: 34536024 DOI: 10.1111/psyp.13916]
Abstract
Research in perception in the visual and auditory domains has traditionally focused on investigating highly controlled artificial stimulus material. However, a key feature of our perceptual system is the ease with which the input of a wide set of naturalistic co-occurring information is dealt with. This study investigated whether, during perception of real-world surface material, a conceptual representation is built that has the potential to interact with a linguistic description of the material directly. Short sentences were presented (e.g., This surface is smooth) followed by a matching or mismatching picture of a real-world surface material. The results showed early cross-modal integration effects during material surface perception in an N400-like potential, originating approximately 280 ms after stimulus presentation. Overall, these findings suggest a rather early influence of linguistic information on material perception, suggesting that in line with object representation, real-world materials are represented in the brain in a format that allows interaction with non-visual information.
Affiliation(s)
- Carolin Dudschig
- Fachbereich Psychologie, University of Tübingen, Tübingen, Germany
- Barbara Kaup
- Fachbereich Psychologie, University of Tübingen, Tübingen, Germany
- Hartmut Leuthold
- Fachbereich Psychologie, University of Tübingen, Tübingen, Germany
18
Van Zuijlen MJP, Lin H, Bala K, Pont SC, Wijntjes MWA. Materials In Paintings (MIP): An interdisciplinary dataset for perception, art history, and computer vision. PLoS One 2021; 16:e0255109. [PMID: 34437544 PMCID: PMC8389402 DOI: 10.1371/journal.pone.0255109]
Abstract
In this paper, we capture and explore the painterly depictions of materials to enable the study of depiction and perception of materials through the artists' eye. We annotated a dataset of 19k paintings with 200k+ bounding boxes from which polygon segments were automatically extracted. Each bounding box was assigned a coarse material label (e.g., fabric) and half was also assigned a fine-grained label (e.g., velvety, silky). The dataset in its entirety is available for browsing and downloading at materialsinpaintings.tudelft.nl. We demonstrate the cross-disciplinary utility of our dataset by presenting novel findings across human perception, art history, and computer vision. Our experiments include a demonstration of how painters create convincing depictions using a stylized approach. We further provide an analysis of the spatial and probabilistic distributions of materials depicted in paintings, in which we for example show that strong patterns exist for material presence and location. Furthermore, we demonstrate how paintings could be used to build more robust computer vision classifiers by learning a more perceptually relevant feature representation. Additionally, we demonstrate that training classifiers on paintings could be used to uncover hidden perceptual cues by visualizing the features used by the classifiers. We conclude that our dataset of painterly material depictions is a rich source for gaining insights into the depiction and perception of materials across multiple disciplines and hope that the release of this dataset will drive multidisciplinary research.
Affiliation(s)
- Hubert Lin
- Computer Science Department, Cornell University, Ithaca, New York, United States of America
- Kavita Bala
- Computer Science Department, Cornell University, Ithaca, New York, United States of America
- Sylvia C Pont
- Perceptual Intelligence Lab, Delft University of Technology, Delft, The Netherlands
- Maarten W A Wijntjes
- Perceptual Intelligence Lab, Delft University of Technology, Delft, The Netherlands
19
Data Augmentation Using Background Replacement for Automated Sorting of Littered Waste. J Imaging 2021; 7:jimaging7080144. [PMID: 34460780 PMCID: PMC8404942 DOI: 10.3390/jimaging7080144]
Abstract
The introduction of sophisticated waste treatment plants is making the process of trash sorting and recycling more and more effective and eco-friendly. Studies on Automated Waste Sorting (AWS) are greatly contributing to making the whole recycling process more efficient. However, a relevant issue, which remains unsolved, is how to deal with the large amount of waste that is littered in the environment instead of being collected properly. In this paper, we introduce BackRep: a method for building waste recognizers that can be used for identifying and sorting littered waste directly where it is found. BackRep consists of a data-augmentation procedure, which expands existing datasets by cropping solid waste in images taken on a uniform (white) background and superimposing it on more realistic backgrounds. For our purpose, realistic backgrounds are those representing places where solid waste is usually littered. To experiment with our data-augmentation procedure, we produced a new dataset in realistic settings. We observed that waste recognizers trained on augmented data actually outperform those trained on existing datasets. Hence, our data-augmentation procedure seems a viable approach to support the development of waste recognizers for urban and wild environments.
20
Harvey JS, Smithson HE. Low level visual features support robust material perception in the judgement of metallicity. Sci Rep 2021; 11:16396. [PMID: 34385496 PMCID: PMC8361131 DOI: 10.1038/s41598-021-95416-6]
Abstract
The human visual system is able to rapidly and accurately infer the material properties of objects and surfaces in the world. Yet an inverse optics approach—estimating the bi-directional reflectance distribution function of a surface, given its geometry and environment, and relating this to the optical properties of materials—is both intractable and computationally unaffordable. Rather, previous studies have found that the visual system may exploit low-level spatio-chromatic statistics as heuristics for material judgment. Here, we present results from psychophysics and modeling that support the use of image statistics heuristics in the judgement of metallicity—the quality of appearance that suggests an object is made from metal. Using computer graphics, we generated stimuli that varied along two physical dimensions: the smoothness of a metal object, and the evenness of its transparent coating. This allowed for the exploration of low-level image statistics, whilst ensuring that each stimulus was a naturalistic, physically plausible image. A conjoint-measurement task decoupled the contributions of these dimensions to the perception of metallicity. Low-level image features, as represented in the activations of oriented linear filters at different spatial scales, were found to correlate with the dimensions of the stimulus space, and decision-making models using these activations replicated observer performance in perceiving differences in metal smoothness and coating bumpiness, and judging metallicity. Importantly, the performance of these models did not deteriorate when objects were rotated within their simulated scene, with corresponding changes in image properties. We therefore conclude that low-level image features may provide reliable cues for the robust perception of metallicity.
Affiliation(s)
- Joshua S Harvey
- Neuroscience Institute, NYU Langone Health, New York, NY, 10016, USA; Department of Engineering Science, Oxford University, Oxford, OX1 3PJ, UK; Department of Experimental Psychology, Oxford University, Oxford, OX2 6GG, UK
- Hannah E Smithson
- Department of Experimental Psychology, Oxford University, Oxford, OX2 6GG, UK
21
A Systematic Survey of ML Datasets for Prime CV Research Areas—Media and Metadata. DATA 2021. [DOI: 10.3390/data6020012]
Abstract
The ever-growing capabilities of computers have enabled pursuing Computer Vision through Machine Learning (i.e., MLCV). ML tools require large amounts of information to learn from (ML datasets). These are costly to produce but have received reduced attention regarding standardization. This prevents the cooperative production and exploitation of these resources, impedes countless synergies, and hinders ML research. No global view exists of the MLCV dataset tissue. Acquiring it is fundamental to enable standardization. We provide an extensive survey of the evolution and current state of MLCV datasets (1994 to 2019) for a set of specific CV areas as well as a quantitative and qualitative analysis of the results. Data were gathered from online scientific databases (e.g., Google Scholar, CiteSeerX). We reveal the heterogeneous plethora that comprises the MLCV dataset tissue; their continuous growth in volume and complexity; the specificities of the evolution of their media and metadata components regarding a range of aspects; and that MLCV progress requires the construction of a global standardized (structuring, manipulating, and sharing) MLCV “library”. Accordingly, we formulate a novel interpretation of this dataset collective as a global tissue of synthetic cognitive visual memories and define the immediately necessary steps to advance its standardization and integration.
22
Abstract
Painters are masters of depiction and have learned to evoke a clear perception of materials and material attributes in a natural, three-dimensional setting, with complex lighting conditions. Furthermore, painters are not constrained by reality, meaning that they could paint materials without exactly following the laws of nature, while still evoking the perception of materials. Paintings have to our knowledge not been studied on a big scale from a material perception perspective. In this article, we studied the perception of painted materials and their attributes by using human annotations to find instances of 15 materials, such as wood, stone, fabric, etc. Participants made perceptual judgments about 30 unique segments of these materials for 10 material attributes, such as glossiness, roughness, hardness, etc. We found that participants were able to perform this task well while being highly consistent. Participants, however, did not consistently agree with each other, and the measure of consistency depended on the material attribute being perceived. Additionally, we found that material perception appears to function independently of the medium of depiction—the results of our principal component analysis agreed well with findings in former studies for photographs and computer renderings.
23
Balas B, Saville A. Neural sensitivity to natural image statistics changes during middle childhood. Dev Psychobiol 2020; 63:1061-1070. [PMID: 33233018 DOI: 10.1002/dev.22062]
Abstract
Natural images have properties that adults' behavioral and neural responses are sensitive to, but the development of this sensitivity is not clear. Behaviorally, children acquire adult-like sensitivity to natural image statistics during middle childhood (Ellemberg et al., 2012), but infants exhibit sensitivity to deviations of natural image structure (Balas & Woods, 2014). We used event-related potentials (ERPs) to examine sensitivity to natural image statistics during childhood at distinct processing stages (the P1 and N1 components). We presented children (5-10 years old) and adults with natural images varying in positive/negative contrast, and natural/synthetic texture appearance to compare electrophysiological responses to images that did or did not violate natural statistics. We hypothesized that children would acquire sensitivity to these deviations late in middle childhood. Instead, we observed significant responses to unnatural contrast and texture statistics at the N1 in all age groups. At the P1, however, only young children exhibited sensitivity to contrast polarity. The latter effect suggests greater sensitivity earlier in development to some violations of natural image statistics. We discuss these results in terms of changing patterns of invariant texture processing during middle childhood and ongoing refinement of the representations supporting natural image perception.
Affiliation(s)
- Benjamin Balas
- Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, USA
- Alyson Saville
- Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, USA
24
Abstract
Many objects that we encounter have typical material qualities: spoons are hard, pillows are soft, and Jell-O dessert is wobbly. Over a lifetime of experiences, strong associations between an object and its typical material properties may be formed, and these associations not only include how glossy, rough, or pink an object is, but also how it behaves under force: we expect knocked over vases to shatter, popped bike tires to deflate, and gooey grilled cheese to hang between two slices of bread when pulled apart. Here we ask how such rich visual priors affect the visual perception of material qualities and present a particularly striking example of expectation violation. In a cue conflict design, we pair computer-rendered familiar objects with surprising material behaviors (a linen curtain shattering, a porcelain teacup wrinkling, etc.) and find that material qualities are not solely estimated from the object's kinematics (i.e., its physical [atypical] motion while shattering, wrinkling, wobbling etc.); rather, material appearance is sometimes “pulled” toward the “native” motion, shape, and optical properties that are associated with this object. Our results, in addition to patterns we find in response time data, suggest that visual priors about materials can set up high-level expectations about complex future states of an object and show how these priors modulate material appearance.
Affiliation(s)
- Katja Doerschner
- Justus Liebig University, Giessen, Germany; Bilkent University, Ankara, Turkey
25
Abstract
In visual search tasks, observers look for targets among distractors. In the lab, this often takes the form of multiple searches for a simple shape that may or may not be present among other items scattered at random on a computer screen (e.g., Find a red T among other letters that are either black or red.). In the real world, observers may search for multiple classes of target in complex scenes that occur only once (e.g., As I emerge from the subway, can I find lunch, my friend, and a street sign in the scene before me?). This article reviews work on how search is guided intelligently. I ask how serial and parallel processes collaborate in visual search, describe the distinction between search templates in working memory and target templates in long-term memory, and consider how searches are terminated.
Affiliation(s)
- Jeremy M. Wolfe
- Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts 02115, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115, USA
- Visual Attention Lab, Brigham & Women's Hospital, Cambridge, Massachusetts 02139, USA
26
Balas B, Auen A, Thrash J, Lammers S. Children's use of local and global visual features for material perception. J Vis 2020; 20:10. [PMID: 32097486 PMCID: PMC7343528 DOI: 10.1167/jov.20.2.10]
Abstract
Adults can rapidly recognize material properties in natural images, and children's performance in material categorization tasks suggests that this ability develops slowly during childhood. In the current study, we further examined the information children use to recognize materials during development by asking how the use of local versus global visual features for material perception changes in middle childhood. We recruited adults and 5- to 10-year-old children for three experiments that required participants to distinguish between shape-matched images of real and artificial food. Accurate performance in this task requires participants to distinguish between a wide range of material properties characteristic of each category, thus testing material perception abilities broadly. In two tasks, we applied distinct methods of image scrambling (block scrambling and diffeomorphic scrambling) to parametrically disrupt global appearance while preserving features in small spatial neighborhoods. In the third task, we used image blurring to parametrically disrupt local feature visibility. Our key question was whether or not participant age affected performance differently when local versus global appearance was disrupted. We found that although image blur led to disproportionately poorer performance in young children, this effect was reduced or absent when diffeomorphic scrambling was used. We interpret this outcome as evidence that the ability to recruit large-scale visual features for material perception may develop slowly during middle childhood.
27
Motoyoshi I. Adaptive comparison matrix: An efficient method for psychological scaling of large stimulus sets. PLoS One 2020; 15:e0233568. [PMID: 32470031 PMCID: PMC7259756 DOI: 10.1371/journal.pone.0233568]
Abstract
Studies on natural and social vision often need to quantify subjective intensity along a particular dimension for a large number of stimuli whose perceptual ordering is unknown. Here, we introduce an easy experimental protocol of comparative judgments that can rank and scale subjective stimulus intensity using a comparatively small number of trials. On each trial in our protocol, the observer initially views M stimuli sampled from a space of N stimuli and selects the stimulus that elicits maximum subjective response along a given dimension (e.g., the most attractive). The selected stimulus is subsequently discarded, the observer then performs a judgment on the remaining stimuli, and the process is iterated until the last stimulus remains and a new trial begins. The method relies on sorting perceived stimulus order in the N x N comparison matrix via logistic regression and sampling the next set of M stimuli such that responses will be collected only for stimulus pairs whose expected response ratio is most informative. Numerical simulations demonstrate that this method can estimate psychological scale with a small number of responses. Psychophysical experiments confirm that the method can quickly estimate the contrast response function for gratings and the perceived glossiness of naturalistic objects. This protocol would be useful for characterizing human judgments along various dimensions, especially those with no physical image correlates such as emotional and social attributes.
Affiliation(s)
- Isamu Motoyoshi
- Department of Life Sciences, The University of Tokyo, Tokyo, Japan
28
Norman JF, Todd JT, Phillips F. Effects of illumination on the categorization of shiny materials. J Vis 2020; 20:2. [PMID: 32392285 PMCID: PMC7409589 DOI: 10.1167/jov.20.5.2]
Abstract
The present research was designed to examine how patterns of illumination influence the perceptual categorization of metal, shiny black, and shiny white materials. The stimuli depicted three possible objects that were illuminated by five possible high-dynamic-range imaging light maps, which varied in their overall distributions of illuminant directions and intensities. The surfaces included a low roughness chrome material, a shiny black material, and a shiny white material with both diffuse and specular components. Observers rated each stimulus by adjusting four sliders to indicate their confidence that the depicted material was metal, shiny black, shiny white, or something else, and these adjustments were constrained so that the sum of all four settings was always 100%. The results revealed that the metal and shiny black categories are easily confused. For example, metal materials with low intensity light maps or a narrow range of illuminant directions are often judged as shiny black, whereas shiny black materials with high intensity light maps or a wide range of illuminant directions are often judged as metal. To discover the visual information on which these judgements are based, we measured several possible image statistics, and we found two that were highly correlated with the observers' confidence ratings in appropriate contexts. We also performed a spherical harmonic analysis on the different light maps to quantitatively predict how they would bias observers' judgments of metal and shiny black surfaces.
Affiliation(s)
- J. Farley Norman
- Department of Psychological Sciences, Western Kentucky University, Bowling Green, KY, USA
- James T. Todd
- Department of Psychology, Ohio State University, Columbus, OH, USA
- Flip Phillips
- Department of Motion Picture Science, Rochester Institute of Technology, Rochester, NY, USA
29
Ingvarsdóttir KÓ, Balkenius C. The Visual Perception of Material Properties Affects Motor Planning in Prehension: An Analysis of Temporal and Spatial Components of Lifting Cups. Front Psychol 2020; 11:215. [PMID: 32132955 PMCID: PMC7040203 DOI: 10.3389/fpsyg.2020.00215]
Abstract
The current study examined the role of visually perceived material properties in motor planning, where we analyzed the temporal and spatial components of motor movements during a seated reaching task. We recorded hand movements of 14 participants in three dimensions while they lifted and transported paper cups that differed in weight and glossiness. Kinematic- and spatial analysis revealed speed-accuracy trade-offs to depend on visual material properties of the objects, in which participants reached slower and grabbed closer to the center of mass for stimuli that required to be handled with greater precision. We found grasp-preparation during the first encounters with the cups was not only governed by the anticipated weight of the cups, but also by their visual material properties, namely glossiness. After a series of object lifting, the execution of reaching, the grip position, and the transportation of the cups from one location to another were preeminently guided by the object weight. We also found the planning phase in reaching to be guided by the expectation of hardness and surface gloss. The findings promote the role of general knowledge of material properties in reach-to-grasp movements, in which visual material properties are incorporated in the spatio-temporal components.
30
Affiliation(s)
- Anan Banharnsakun
- Computational Intelligence Research Laboratory (CIRLab), Computer Engineering Department, Faculty of Engineering at Sriracha, Kasetsart University Sriracha Campus, Chonburi, Thailand
31

32
Wendt G, Faul F. Factors Influencing the Detection of Spatially-Varying Surface Gloss. Iperception 2019; 10:2041669519866843. [PMID: 31523415 PMCID: PMC6732868 DOI: 10.1177/2041669519866843]
Abstract
In this study, we investigate the ability of human observers to detect spatial inhomogeneities in the glossiness of a surface and how the performance in this task depends on several context factors. We used computer-generated stimuli showing a single object in three-dimensional space whose surface was split into two spatial areas with different microscale smoothness. The context factors were the kind of illumination, the object's shape, the availability of motion information, the degree of edge blurring, the spatial proportions between the two areas of different smoothness, and the general smoothness level. Detection thresholds were determined using a two-alternative forced choice (2AFC) task implemented in a double random staircase procedure, where the subjects had to indicate for each stimulus whether or not the surface appears to have a spatially uniform material. We found evidence that two different cues are used for this task: luminance differences and differences in highlight properties between areas of different microscale smoothness. While the visual system seems to be highly sensitive in detecting gloss differences based on luminance contrast information, detection thresholds were considerably higher when the judgment was mainly based on differences in highlight features, such as their size, intensity, and sharpness.
Affiliation(s)
- Gunnar Wendt
- Christian-Albrechts-Universität zu Kiel, Institut für Psychologie, Kiel, Germany
- Franz Faul
- Christian-Albrechts-Universität zu Kiel, Institut für Psychologie, Kiel, Germany
33
Material surface properties modulate vection strength. Exp Brain Res 2019; 237:2675-2690. [DOI: 10.1007/s00221-019-05620-0]
34
Di Cicco F, Wijntjes MWA, Pont SC. Understanding gloss perception through the lens of art: Combining perception, image analysis, and painting recipes of 17th century painted grapes. J Vis 2019; 19:7. [PMID: 30897625 DOI: 10.1167/19.3.7]
Abstract
To understand the key image features that we use to infer the glossiness of materials, we analyzed the pictorial shortcuts used by 17th century painters to imitate the optical phenomenon of specular reflections when depicting grapes. Gloss perception of painted grapes was determined via a rating experiment. We computed the contrast, blurriness, and coverage of the grapes' highlights in the paintings' images, inspired by Marlow and Anderson (2013). The highlights were manually segmented from the images, and next the features contrast, coverage, and blurriness were semiautomatically quantified using self-defined algorithms. Multiple linear regressions of contrast and blurriness resulted in a predictive model that could explain 69% of the variance in gloss perception. No effect was found for coverage. These findings are in agreement with the instructions to render glossiness of grapes contained in a 17th century painting manual (Beurs, 1692/in press), suggesting that painting practice embeds knowledge about key image features that trigger specific material percepts.
Affiliation(s)
- Francesca Di Cicco
- Perceptual Intelligence Lab, Faculty of Industrial Design Engineering, Delft University of Technology, Delft, the Netherlands
- Maarten W A Wijntjes
- Perceptual Intelligence Lab, Faculty of Industrial Design Engineering, Delft University of Technology, Delft, the Netherlands
- Sylvia C Pont
- Perceptual Intelligence Lab, Faculty of Industrial Design Engineering, Delft University of Technology, Delft, the Netherlands
35
Zhang F, de Ridder H, Pont SC. Asymmetric perceptual confounds between canonical lightings and materials. J Vis 2019; 18:11. [PMID: 30347097 DOI: 10.1167/18.11.11] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023] Open
Abstract
To better understand the interactions between material perception and light perception, we further developed our material probe MatMix 1.0 into MixIM 1.0, which allows optical mixing of canonical lighting modes. We selected three canonical lighting modes (ambient, focus, and brilliance) and created scenes to represent the three illuminations. Together with four canonical material modes (matte, velvety, specular, glittery), this resulted in 12 basis images (the "bird set"). These images were optically mixed in our probing method. Three experiments were conducted with different groups of observers. In Experiment 1, observers were instructed to manipulate MixIM 1.0 and match optically mixed lighting modes while discounting the materials. In Experiment 2, observers were shown a pair of stimuli and instructed to simultaneously judge whether the materials and lightings were the same or different in a four-category discrimination task. In Experiment 3, observers performed both the matching and discrimination tasks in which only the ambient and focus light were implemented. Overall, the matching and discrimination results were comparable as (a) robust asymmetric perceptual confounds were found and confirmed in both types of tasks, (b) performances were consistent and all above chance levels, and (c) observers had higher sensitivities to our canonical materials than to our canonical lightings. The latter result may be explained in terms of a generic insensitivity for naturally occurring variations in light conditions. Our findings suggest that midlevel image features are more robust across different materials than across different lightings and, thus, more diagnostic for materials than for lightings, causing the asymmetric perceptual confounds.
Affiliation(s)
- Fan Zhang
- Perceptual Intelligence Laboratory, Industrial Design Engineering, Delft University of Technology, The Netherlands
- Huib de Ridder
- Perceptual Intelligence Laboratory, Industrial Design Engineering, Delft University of Technology, The Netherlands
- Sylvia C Pont
- Perceptual Intelligence Laboratory, Industrial Design Engineering, Delft University of Technology, The Netherlands
36
Visual Performance and Perception as a Target of Saccadic Strategies in Patients With Unilateral Vestibular Loss. Ear Hear 2019; 39:1176-1186. [PMID: 29578887 DOI: 10.1097/aud.0000000000000576] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES To evaluate the ability of saccadic strategies developed during vestibular compensation to reduce the effect of an impaired vestibulo-ocular reflex (VOR) on a retinal smear and image motion sensation. DESIGN Twenty patients with unilateral vestibular loss were examined with a video head impulse test before and after vestibular rehabilitation (VR) with the use of gaze stabilization and refixation saccades training. Head and eye velocity functions were processed to infer the retinal eccentricity, and through its correlation with visual acuity (VA), several measurements are proposed to evaluate the influence of VR on saccades behavior and visual performance. To isolate the effect of saccades on the findings and avoid bias because of gain differences, only patients whose VOR gain values remained unchanged after VR were included. RESULTS Improved contribution of covert saccades and reduction of overt saccades latency were measured after VR. We found significant differences when assessing both the interval less than 70% VA (50.25 ms), which is considered the limit of a moderate low vision, and less than 50% VA (39.515 ms), which is the limit for severe low vision. Time to recover a VA of 75% (near normal) was reduced in all the patients (median: 56.472 ms). CONCLUSION Despite the absence of VOR gain improvement, patients with unilateral vestibular loss are able to develop saccadic strategies that allow the shortening of the interval of retinal smear and image motion. The proposed measurements might be of use to evaluate VR outcomes and visually induced impairment.
37
38
Nagai T, Hosaka Y, Sato T, Kuriki I. Relative contributions of low- and high-luminance components to material perception. J Vis 2018; 18:6. [PMID: 30535255 DOI: 10.1167/18.13.6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Besides specular highlights, image pixels that represent clues to recognizing the object material, such as shading between threads of fabrics, often yield relatively lower luminance in the image. Here, we psychophysically examined how lower and higher luminance components contribute to material perception. We created two types of luminance-modulated images-low- and high-luminance-preserved (LLP and HLP) images-and instructed observers to choose which modified image resulted in a material impression closer to the original. LLP images were created by compressing the luminance contrast of the higher half of the histogram in each original photograph and vice versa. The stimuli were photographs of various samples of stone, wood, leather, and fabric. Although the LLP and HLP images were equally chosen, the choice ratios of the HLP images largely differed across the samples and categories and moderately correlated with the luminance statistics of higher-spatial-frequency sub-bands. These results suggest that either the lower- or higher-luminance components play an important role in material perception, depending on the material category. However, the correlation with sub-band image statistics for stone/wood samples was much weaker than for leather/fabric samples, suggesting that more intricate image characteristics may be involved in evaluating the material impressions of the stone/wood samples.
Affiliation(s)
- Takehiro Nagai
- Department of Informatics, Yamagata University, Yonezawa, Yamagata, Japan
- Yuta Hosaka
- Department of Informatics, Yamagata University, Yonezawa, Yamagata, Japan
- Tomoharu Sato
- Department of Intelligent Systems Engineering, Ichinoseki National College of Technology, Ichinoseki, Iwate, Japan
- Ichiro Kuriki
- Research Institute of Electrical Communication, Tohoku University, Sendai, Miyagi, Japan
39
Neural Mechanisms of Material Perception: Quest on Shitsukan. Neuroscience 2018; 392:329-347. [PMID: 30213767 DOI: 10.1016/j.neuroscience.2018.09.001] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2018] [Revised: 08/13/2018] [Accepted: 09/03/2018] [Indexed: 01/11/2023]
Abstract
In recent years, a growing body of research has addressed the nature and mechanism of material perception. Material perception entails perceiving and recognizing a material, surface quality or internal state of an object based on sensory stimuli such as visual, tactile, and/or auditory sensations. This process is ongoing in every aspect of daily life. We can, for example, easily distinguish whether an object is made of wood or metal, or whether a surface is rough or smooth. Judging whether the ground is wet or dry or whether a fish is fresh also involves material perception. Information obtained through material perception can be used to govern actions toward objects and to make decisions about whether to approach an object or avoid it. Because the physical processes leading to sensory signals related to material perception is complicated, it has been difficult to manipulate experimental stimuli in a rigorous manner. However, that situation is now changing thanks to advances in technology and knowledge in related fields. In this article, we will review what is currently known about the neural mechanisms responsible for material perception. We will show that cortical areas in the ventral visual pathway are strongly involved in material perception. Our main focus is on vision, but every sensory modality is involved in material perception. Information obtained through different sensory modalities is closely linked in material perception. Such cross-modal processing is another important feature of material perception, and will also be covered in this review.
40
Seitz RJ, Paloutzian RF, Angel HF. From Believing to Belief: A General Theoretical Model. J Cogn Neurosci 2018; 30:1254-1264. [DOI: 10.1162/jocn_a_01292] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Cognitive neuroscience research has begun to explore the mental processes underlying what a belief and what believing are. Recent evidence suggests that believing involves fundamental brain functions that result in meaningful probabilistic representations, called beliefs. When relatively stable, these beliefs allow for guidance of behavior in individuals and social groups. However, they are also fluid and can be modified by new relevant information, interpersonal contact, social pressure, and situational demands. We present a theoretical model of believing that can account for the formation of both empirically grounded and metaphysical beliefs.
41
Yokoi I, Tachibana A, Minamimoto T, Goda N, Komatsu H. Dependence of behavioral performance on material category in an object-grasping task with monkeys. J Neurophysiol 2018; 120:553-563. [DOI: 10.1152/jn.00748.2017] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Material perception is an essential part of our cognitive function that enables us to properly interact with our complex daily environment. One important aspect of material perception is its multimodal nature. When we see an object, we generally recognize its haptic properties as well as its visual properties. Consequently, one must examine behavior using real objects that are perceived both visually and haptically to fully understand the characteristics of material perception. As a first step, we examined whether there is any difference in the behavioral responses to different materials in monkeys trained to perform an object grasping task in which they saw and grasped rod-shaped real objects made of various materials. We found that the monkeys’ behavior in the grasping task, which was measured based on the success rate and the pulling force, differed depending on the material category. Monkeys easily and correctly grasped objects of some materials, such as metal and glass, but failed to grasp objects of other materials. In particular, monkeys avoided grasping fur-covered objects. The differences in the behavioral responses to the material categories cannot be explained solely based on the degree of familiarity with the different materials. These results shed light on the organization of multimodal representation of materials, where their biological significance is an important factor. In addition, a monkey that avoided touching real fur-covered objects readily touched images of the same objects presented on a CRT display. This suggests that employing real objects is important when studying behaviors related to material perception. NEW & NOTEWORTHY We tested monkeys using an object-grasping task in which monkeys saw and grasped rod-shaped real objects made of various materials. We found that the monkeys’ behavior differed dramatically across the material categories and that the behavioral differences could not be explained solely based on the degree of familiarity with the different materials. These results shed light on the organization of multimodal representation of materials, where the biological significance of materials is an important factor.
Affiliation(s)
- Isao Yokoi
- National Institute for Physiological Sciences, Okazaki, Aichi, Japan
- The Graduate University for Advanced Studies, Okazaki, Aichi, Japan
- Atsumichi Tachibana
- National Institute for Physiological Sciences, Okazaki, Aichi, Japan
- Dokkyo Medical University School of Medicine, Mibu, Tochigi, Japan
- Takafumi Minamimoto
- Department of Functional Brain Imaging, National Institute of Radiological Sciences, National Institutes for Quantum and Radiological Science and Technology, Chiba-shi, Chiba, Japan
- Naokazu Goda
- National Institute for Physiological Sciences, Okazaki, Aichi, Japan
- The Graduate University for Advanced Studies, Okazaki, Aichi, Japan
- Hidehiko Komatsu
- National Institute for Physiological Sciences, Okazaki, Aichi, Japan
- Brain Science Institute, Tamagawa University, Machida, Tokyo, Japan
42
Abstract
Although adults' ability to recognize materials from complex natural images has been well characterized, we still know very little about the development of material perception. When do children exhibit adult-like abilities to categorize materials? What visual features do they use to do so as a function of age and material category? In the present study, we attempted to address both of these issues in two experiments that we administered to school-age children (5–10 years old) and adults. In both tasks, we asked our participants to categorize natural materials (metal, stone, water, and wood) using original images of these materials as well as synthetic images made with the Portilla–Simoncelli algorithm. By including synthetic images in our stimulus set, we were able to assess both how material categorization develops during childhood and how visual summary statistics are recruited for material perception across age groups. We observed that when asked to provide category labels for individual images (Experiment 1), young children were disproportionately bad at categorizing some materials after they were synthesized, suggesting material-specific changes in information use over the course of development. However, when asked to match real and synthetic images according to material category without labeling (Experiment 2), these effects were weakened. We conclude that while children have adult-like abilities to encode and compare images based on summary statistics, the mapping between summary statistics and category labels undergoes prolonged development during childhood.
Affiliation(s)
- Benjamin Balas
- Department of Psychology and Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, USA
43
Abstract
Under typical viewing conditions, human observers effortlessly recognize materials and infer their physical, functional, and multisensory properties at a glance. Without touching materials, we can usually tell whether they would feel hard or soft, rough or smooth, wet or dry. We have vivid visual intuitions about how deformable materials like liquids or textiles respond to external forces and how surfaces like chrome, wax, or leather change appearance when formed into different shapes or viewed under different lighting. These achievements are impressive because the retinal image results from complex optical interactions between lighting, shape, and material, which cannot easily be disentangled. Here I argue that because of the diversity, mutability, and complexity of materials, they pose enormous challenges to vision science: What is material appearance, and how do we measure it? How are material properties estimated and represented? Resolving these questions causes us to scrutinize the basic assumptions of mid-level vision.
Affiliation(s)
- Roland W Fleming
- Department of Experimental Psychology, University of Giessen, 35394 Giessen, Germany
44
Vrancken C, Longhurst PJ, Wagland ST. Critical review of real-time methods for solid waste characterisation: Informing material recovery and fuel production. Waste Manag 2017; 61:40-57. [PMID: 28139367 DOI: 10.1016/j.wasman.2017.01.019] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/30/2016] [Revised: 12/16/2016] [Accepted: 01/15/2017] [Indexed: 06/06/2023]
Abstract
Waste management processes generally represent a significant loss of material, energy and economic resources, so legislation and financial incentives are being implemented to improve the recovery of these valuable resources whilst reducing contamination levels. Material recovery and waste derived fuels are potentially valuable options being pursued by industry, using mechanical and biological processes incorporating sensor and sorting technologies developed and optimised for recycling plants. In its current state, waste management presents similarities to other industries that could improve their efficiencies using process analytical technology tools. Existing sensor technologies could be used to measure critical waste characteristics, providing data required by existing legislation, potentially aiding waste treatment processes and assisting stakeholders in decision making. Optical technologies offer the most flexible solution to gather real-time information applicable to each of the waste mechanical and biological treatment processes used by industry. In particular, combinations of optical sensors in the visible and the near-infrared range from 800nm to 2500nm of the spectrum, and different mathematical techniques, are able to provide material information and fuel properties with typical performance levels between 80% and 90%. These sensors not only could be used to aid waste processes, but to provide most waste quality indicators required by existing legislation, whilst offering better tools to the stakeholders.
Affiliation(s)
- C Vrancken
- School of Water, Energy and Environment, Cranfield University, Cranfield, Bedfordshire MK43 0AL, UK
- P J Longhurst
- School of Water, Energy and Environment, Cranfield University, Cranfield, Bedfordshire MK43 0AL, UK
- S T Wagland
- School of Water, Energy and Environment, Cranfield University, Cranfield, Bedfordshire MK43 0AL, UK
45
Action observation: the less-explored part of higher-order vision. Sci Rep 2016; 6:36742. [PMID: 27857160 PMCID: PMC5114682 DOI: 10.1038/srep36742] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2016] [Accepted: 10/20/2016] [Indexed: 11/25/2022] Open
Abstract
Little is presently known about action observation, an important perceptual component of high-level vision. To investigate this aspect of perception, we introduce a two-alternative forced-choice task for observed manipulative actions while varying duration or signal strength by noise injection. We show that accuracy and reaction time in this task can be modeled by a diffusion process for different pairs of action exemplars. Furthermore, discrimination of observed actions is largely viewpoint-independent, cannot be reduced to judgments about the basic components of action: shape and local motion, and requires a minimum duration of about 150–200 ms. These results confirm that action observation is a distinct high-level aspect of visual perception based on temporal integration of visual input generated by moving body parts. This temporal integration distinguishes it from object or scene perception, which require only very brief presentations and are viewpoint-dependent. The applicability of a diffusion model suggests that these aspects of high-level vision differ mainly at the level of the sensory neurons feeding the decision processes.
46
Abstract
Despite the long scholarly discourse in Western theology and philosophy on religion, spirituality, and faith, explanations of what a belief and what believing is are still lacking. Recently, cognitive neuroscience research addressed the human capacity of believing. We present evidence suggesting that believing is a human brain function which results in probabilistic representations with attributes of personal meaning and value and thereby guides individuals’ behavior. We propose that the same mental processes operating on narratives and rituals constitute belief systems in individuals and social groups. Our theoretical model of believing is suited to account for secular and non-secular belief formation.
Affiliation(s)
- Rüdiger J Seitz
- Heinrich-Heine-University Düsseldorf, LVR-Klinikum Düsseldorf, Düsseldorf, Germany
- Hans-Ferdinand Angel
- Institute of Catechetic and Pedagogic of Religion, Karl Franzens University Graz, Graz, Austria
47
Seitz RJ, Paloutzian RF, Angel HF. Processes of believing: Where do they come from? What are they good for? F1000Res 2016; 5:2573. [PMID: 28105309 PMCID: PMC5200943 DOI: 10.12688/f1000research.9773.1] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 01/12/2017] [Indexed: 08/22/2023] Open
Abstract
Despite the long scholarly discourse in Western theology and philosophy on religion, spirituality, and faith, explanations of what a belief and what believing is are still lacking. Recently, cognitive neuroscience research addressed the human capacity of believing. We present evidence suggesting that believing is a human brain function which results in probabilistic representations with attributes of personal meaning and value and thereby guides individuals' behavior. We propose that the same mental processes operating on narratives and rituals constitute belief systems in individuals and social groups. Our theoretical model of believing is suited to account for secular and non-secular belief formation.
Affiliation(s)
- Rüdiger J. Seitz
- Heinrich-Heine-University Düsseldorf, LVR-Klinikum Düsseldorf, Düsseldorf, Germany
- Hans-Ferdinand Angel
- Institute of Catechetic and Pedagogic of Religion, Karl Franzens University Graz, Graz, Austria
48
Pham TD. Quantifying visual perception of texture with fuzzy metric entropy. J Intell Fuzzy Syst 2016. [DOI: 10.3233/jifs-169038] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2023]
49
50
Hiramatsu C, Fujita K. Visual categorization of surface qualities of materials by capuchin monkeys and humans. Vision Res 2015; 115:71-82. [DOI: 10.1016/j.visres.2015.07.006] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2015] [Revised: 07/28/2015] [Accepted: 07/30/2015] [Indexed: 11/30/2022]