1. Liao C, Sawayama M, Xiao B. Probing the link between vision and language in material perception using psychophysics and unsupervised learning. PLoS Comput Biol 2024;20:e1012481. PMID: 39361707; PMCID: PMC11478833; DOI: 10.1371/journal.pcbi.1012481.
Abstract
We can visually discriminate and recognize a wide range of materials. Meanwhile, we use language to describe what we see and communicate relevant information about the materials. Here, we investigate the relationship between visual judgment and language expression to understand how visual features relate to semantic representations in human cognition. We use deep generative models to generate images of realistic materials. Interpolating between the generative models enables us to systematically create material appearances in both well-defined and ambiguous categories. Using these stimuli, we compared the representations of materials from two behavioral tasks: visual material similarity judgments and free-form verbal descriptions. Our findings reveal a moderate but significant correlation between vision and language on a categorical level. However, analyzing the representations with an unsupervised alignment method, we discover structural differences that arise at the image-to-image level, especially among ambiguous materials morphed between known categories. Moreover, visual judgments exhibit more individual differences compared to verbal descriptions. Our results show that while verbal descriptions capture material qualities on the coarse level, they may not fully convey the visual nuances of material appearances. Analyzing the image representation of materials obtained from various pre-trained deep neural networks, we find that similarity structures in human visual judgments align more closely with those of the vision-language models than purely vision-based models. Our work illustrates the need to consider the vision-language relationship in building a comprehensive model for material perception. Moreover, we propose a novel framework for evaluating the alignment and misalignment between representations from different modalities, leveraging information from human behaviors and computational models.
Affiliation(s)
- Chenxi Liao, American University, Department of Neuroscience, Washington DC, United States of America
- Masataka Sawayama, The University of Tokyo, Graduate School of Information Science and Technology, Tokyo, Japan
- Bei Xiao, American University, Department of Computer Science, Washington DC, United States of America
2. Zhou J, Duong LR, Simoncelli EP. A unified framework for perceived magnitude and discriminability of sensory stimuli. Proc Natl Acad Sci U S A 2024;121:e2312293121. PMID: 38857385; PMCID: PMC11194506; DOI: 10.1073/pnas.2312293121.
Abstract
The perception of sensory attributes is often quantified through measurements of sensitivity (the ability to detect small stimulus changes), as well as through direct judgments of appearance or intensity. Despite their ubiquity, the relationship between these two measurements remains controversial and unresolved. Here, we propose a framework in which they arise from different aspects of a common representation. Specifically, we assume that judgments of stimulus intensity (e.g., as measured through rating scales) reflect the mean value of an internal representation, and sensitivity reflects a combination of mean value and noise properties, as quantified by the statistical measure of Fisher information. Unique identification of these internal representation properties can be achieved by combining measurements of sensitivity and judgments of intensity. As a central example, we show that Weber's law of perceptual sensitivity can coexist with Stevens' power-law scaling of intensity ratings (for all exponents), when the noise amplitude increases in proportion to the representational mean. We then extend this result beyond the Weber's law range by incorporating a more general and physiology-inspired form of noise and show that the combination of noise properties and sensitivity measurements accurately predicts intensity ratings across a variety of sensory modalities and attributes. Our framework unifies two primary perceptual measurements-thresholds for sensitivity and rating scales for intensity-and provides a neural interpretation for the underlying representation.
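The abstract's central example (Weber's law of sensitivity coexisting with Stevens' power-law intensity ratings when noise grows in proportion to the representational mean) can be checked numerically. The sketch below is our own illustration of that claim, not the authors' code; the parameter values `alpha` and `c` are arbitrary assumptions.

```python
import numpy as np

# If ratings follow Stevens' power law f(s) = s**alpha and internal noise
# scales with the mean, sigma(s) = c * f(s), then sensitivity
# sqrt(I(s)) ~ f'(s) / sigma(s) is proportional to 1/s -- i.e. the
# discrimination threshold grows linearly with s (Weber's law).

alpha, c = 0.6, 0.1                     # arbitrary exponent and noise scale
s = np.linspace(1.0, 100.0, 1000)       # stimulus intensities

f = s ** alpha                          # mean internal response (Stevens scaling)
fprime = alpha * s ** (alpha - 1)       # derivative of the mean response
sigma = c * f                           # noise amplitude proportional to the mean
sensitivity = fprime / sigma            # ~ sqrt(Fisher information)

# Threshold is the inverse of sensitivity; Weber's law predicts that
# threshold / s (the Weber fraction) is constant across intensities.
weber_fraction = (1.0 / sensitivity) / s
assert np.allclose(weber_fraction, weber_fraction[0])
```

Under these assumptions the Weber fraction works out to `c / alpha` for every intensity, so the power-law exponent and the noise scale jointly set the discrimination threshold.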
Affiliation(s)
- Jingyang Zhou, Center for Computational Neuroscience, Flatiron Institute, Simons Foundation, New York, NY 10010; Center for Neural Science, New York University, New York, NY 10003
- Lyndon R. Duong, Center for Neural Science, New York University, New York, NY 10003
- Eero P. Simoncelli, Center for Computational Neuroscience, Flatiron Institute, Simons Foundation, New York, NY 10010; Center for Neural Science, New York University, New York, NY 10003; Courant Institute of Mathematical Sciences, New York University, New York, NY 10003
3. Tamura H, Nakauchi S, Minami T. Glossiness perception and its pupillary response. Vision Res 2024;219:108393. PMID: 38579405; DOI: 10.1016/j.visres.2024.108393.
Abstract
Recent studies have revealed that pupillary response changes depend on perceptual factors such as subjective brightness caused by optical illusions and luminance. However, the manner in which a perceptual factor derived from the glossiness perception of object surfaces affects the pupillary response remains unclear. We investigated the relationship between glossiness perception and the pupillary response through a glossiness rating experiment that included recording the pupil diameter. We prepared general object images (original) and randomized images (shuffled) that comprised the same images with randomized small square regions as stimuli. The image features were controlled by matching the luminance histogram. The observers were asked to rate the perceived glossiness of the stimuli presented for 3,000 ms, and the changes in their pupil diameters were recorded. Images with higher glossiness ratings constricted the pupil size more than those with lower glossiness ratings at the peak constriction of the pupillary responses during the stimulus duration. The linear mixed-effects model demonstrated that the glossiness rating, image category (original/shuffled), variance of the luminance histogram, and stimulus area were most effective in predicting the pupillary responses. These results suggest that the illusory brightness produced by the image regions of high-glossiness objects, such as specular highlights, induces pupil constriction.
Affiliation(s)
- Hideki Tamura, Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Aichi, Japan
- Shigeki Nakauchi, Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Aichi, Japan
- Tetsuto Minami, Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Aichi, Japan
4. Liao C, Sawayama M, Xiao B. Probing the Link Between Vision and Language in Material Perception Using Psychophysics and Unsupervised Learning. bioRxiv [Preprint] 2024:2024.01.25.577219. PMID: 38328102; PMCID: PMC10849714; DOI: 10.1101/2024.01.25.577219.
Abstract
We can visually discriminate and recognize a wide range of materials. Meanwhile, we use language to express our subjective understanding of visual input and communicate relevant information about the materials. Here, we investigate the relationship between visual judgment and language expression in material perception to understand how visual features relate to semantic representations. We use deep generative networks to construct an expandable image space to systematically create materials of well-defined and ambiguous categories. From such a space, we sampled diverse stimuli and compared the representations of materials from two behavioral tasks: visual material similarity judgments and free-form verbal descriptions. Our findings reveal a moderate but significant correlation between vision and language on a categorical level. However, analyzing the representations with an unsupervised alignment method, we discover structural differences that arise at the image-to-image level, especially among materials morphed between known categories. Moreover, visual judgments exhibit more individual differences compared to verbal descriptions. Our results show that while verbal descriptions capture material qualities on the coarse level, they may not fully convey the visual features that characterize the material's optical properties. Analyzing the image representation of materials obtained from various pre-trained data-rich deep neural networks, we find that human visual judgments' similarity structures align more closely with those of the text-guided visual-semantic model than purely vision-based models. Our findings suggest that while semantic representations facilitate material categorization, non-semantic visual features also play a significant role in discriminating materials at a finer level. This work illustrates the need to consider the vision-language relationship in building a comprehensive model for material perception. 
Moreover, we propose a novel framework for quantitatively evaluating the alignment and misalignment between representations from different modalities, leveraging information from human behaviors and computational models.
Affiliation(s)
- Chenxi Liao, American University, Department of Neuroscience, Washington DC 20016, USA
- Masataka Sawayama, The University of Tokyo, Graduate School of Information Science and Technology, Tokyo 113-0033, Japan
- Bei Xiao, American University, Department of Computer Science, Washington DC 20016, USA
5. Tsuda H, Kawabata H. materialmodifier: An R package of photo editing effects for material perception research. Behav Res Methods 2024;56:2657-2674. PMID: 37162649; PMCID: PMC10991072; DOI: 10.3758/s13428-023-02116-2.
Abstract
In this paper, we introduce an R package that performs automated photo editing effects. Specifically, it is an R implementation of an image-processing algorithm proposed by Boyadzhiev et al. (2015). The software allows the user to manipulate the appearance of objects in photographs, such as emphasizing facial blemishes and wrinkles, smoothing the skin, or enhancing the gloss of fruit. It provides a reproducible method to quantitatively control specific surface properties of objects (e.g., gloss and roughness), which is useful for researchers interested in topics related to material perception, from basic mechanisms of perception to the aesthetic evaluation of faces and objects. We describe the functionality, usage, and algorithm of the method, report on the findings of a behavioral evaluation experiment, and discuss its usefulness and limitations for psychological research. The package can be installed via CRAN, and documentation and source code are available at https://github.com/tsuda16k/materialmodifier .
Affiliation(s)
- Hiroyuki Tsuda, Faculty of Psychology, Doshisha University, Kyoto, Japan
- Hideaki Kawabata, Department of Psychology, Faculty of Letters, Keio University, Tokyo, Japan
6. Strappini F, Fagioli S, Mastandrea S, Scorolli C. Sustainable materials: a linking bridge between material perception, affordance, and aesthetics. Front Psychol 2024;14:1307467. PMID: 38259544; PMCID: PMC10800687; DOI: 10.3389/fpsyg.2023.1307467.
Abstract
The perception of material properties, which refers to the way in which individuals perceive and interpret materials through their sensory experiences, plays a crucial role in our interaction with the environment. Affordance, on the other hand, refers to the potential actions and uses that materials offer to users. In turn, the perception of affordances is modulated by the aesthetic appreciation that individuals experience when interacting with the environment. Although material perception, affordances, and aesthetic appreciation are recognized as essential to fostering sustainability in society, only a few studies have systematically investigated this subject matter and the reciprocal influences among these factors. This scarcity is partially due to the challenges posed by the complexity of combining interdisciplinary topics that span psychophysics, neurophysiology, affective science, aesthetics, and the social and environmental sciences. Outlining the main findings across disciplines, this review highlights the pivotal role of material perception in shaping sustainable behaviors. It establishes connections between material perception, affordance, aesthetics, and sustainability, emphasizing the need for interdisciplinary research and integrated approaches in environmental psychology. This integration is essential because it can provide insight into how to foster sustainable and durable changes.
Affiliation(s)
- Francesca Strappini, Department of Philosophy and Communication, University of Bologna, Bologna, Italy
- Claudia Scorolli, Department of Philosophy and Communication, University of Bologna, Bologna, Italy
7. Klein LK, Maiello G, Stubbs K, Proklova D, Chen J, Paulun VC, Culham JC, Fleming RW. Distinct neural components of visually guided grasping during planning and execution. J Neurosci 2023;43:8504-8514. PMID: 37848285; PMCID: PMC10711727; DOI: 10.1523/jneurosci.0335-23.2023.
Abstract
Selecting suitable grasps on three-dimensional objects is a challenging visuomotor computation, which involves combining information about an object (e.g., its shape, size, and mass) with information about the actor's body (e.g., the optimal grasp aperture and hand posture for comfortable manipulation). Here, we used functional magnetic resonance imaging to investigate brain networks associated with these distinct aspects during grasp planning and execution. Human participants of either sex viewed and then executed preselected grasps on L-shaped objects made of wood and/or brass. By leveraging a computational approach that accurately predicts human grasp locations, we selected grasp points that disentangled the role of multiple grasp-relevant factors, that is, grasp axis, grasp size, and object mass. Representational Similarity Analysis revealed that grasp axis was encoded along dorsal-stream regions during grasp planning. Grasp size was first encoded in ventral stream areas during grasp planning then in premotor regions during grasp execution. Object mass was encoded in ventral stream and (pre)motor regions only during grasp execution. Premotor regions further encoded visual predictions of grasp comfort, whereas the ventral stream encoded grasp comfort during execution, suggesting its involvement in haptic evaluation. These shifts in neural representations thus capture the sensorimotor transformations that allow humans to grasp objects.
SIGNIFICANCE STATEMENT: Grasping requires integrating object properties with constraints on hand and arm postures. Using a computational approach that accurately predicts human grasp locations by combining such constraints, we selected grasps on objects that disentangled the relative contributions of object mass, grasp size, and grasp axis during grasp planning and execution in a neuroimaging study.
Our findings reveal a greater role of dorsal-stream visuomotor areas during grasp planning, and, surprisingly, increasing ventral stream engagement during execution. We propose that during planning, visuomotor representations initially encode grasp axis and size. Perceptual representations of object material properties become more relevant instead as the hand approaches the object and motor programs are refined with estimates of the grip forces required to successfully lift the object.
Affiliation(s)
- Lina K Klein, Department of Experimental Psychology, Justus Liebig University Giessen, 35390 Giessen, Germany
- Guido Maiello, School of Psychology, University of Southampton, Southampton SO17 1PS, United Kingdom
- Kevin Stubbs, Department of Psychology, University of Western Ontario, London, Ontario N6A 5C2, Canada
- Daria Proklova, Department of Psychology, University of Western Ontario, London, Ontario N6A 5C2, Canada
- Juan Chen, Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou 510631, China; Key Laboratory of Brain, Cognition and Education Sciences, South China Normal University, Guangzhou 510631, China
- Vivian C Paulun, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Jody C Culham, Department of Psychology, University of Western Ontario, London, Ontario N6A 5C2, Canada
- Roland W Fleming, Department of Experimental Psychology, Justus Liebig University Giessen, 35390 Giessen, Germany; Center for Mind, Brain and Behavior, University of Marburg and Justus Liebig University Giessen, 35390 Giessen, Germany
8. Fleming RW. Visual perception: Contours that crack the ambiguity conundrum. Curr Biol 2023;33:R760-R762. PMID: 37490860; DOI: 10.1016/j.cub.2023.06.024.
Abstract
A new study shows how the brain exploits the parts of images where surfaces curve out of view to recover both the three-dimensional shape and material properties of objects. This sheds light on a long-standing 'chicken-and-egg' problem in perception research.
Affiliation(s)
- Roland W Fleming, Department of Experimental Psychology, Justus Liebig University Giessen, 35394 Giessen, Germany; Center for Mind, Brain and Behavior, Universities of Marburg, Giessen and Darmstadt, Germany
9. Schmid AC, Barla P, Doerschner K. Material category of visual objects computed from specular image structure. Nat Hum Behav 2023. PMID: 37386108; PMCID: PMC10365995; DOI: 10.1038/s41562-023-01601-0.
Abstract
Recognizing materials and their properties visually is vital for successful interactions with our environment, from avoiding slippery floors to handling fragile objects. Yet there is no simple mapping of retinal image intensities to physical properties. Here, we investigated what image information drives material perception by collecting human psychophysical judgements about complex glossy objects. Variations in specular image structure-produced either by manipulating reflectance properties or visual features directly-caused categorical shifts in material appearance, suggesting that specular reflections provide diagnostic information about a wide range of material classes. Perceived material category appeared to mediate cues for surface gloss, providing evidence against a purely feedforward view of neural processing. Our results suggest that the image structure that triggers our perception of surface gloss plays a direct role in visual categorization, and that the perception and neural processing of stimulus properties should be studied in the context of recognition, not in isolation.
Affiliation(s)
- Alexandra C Schmid, Department of Psychology, Justus Liebig University Giessen, Giessen, Germany
- Katja Doerschner, Department of Psychology, Justus Liebig University Giessen, Giessen, Germany
10. Pistolas E, Wagemans J. Crossmodal correspondences and interactions between texture and taste perception. Iperception 2023;14:20416695231163473. PMID: 37020456; PMCID: PMC10069003; DOI: 10.1177/20416695231163473.
Abstract
In recent years, awareness of the influence of different modalities on taste perception has grown. Although previous research in crossmodal taste perception has touched upon the bipolar distinction between softness/smoothness and roughness/angularity, ambiguity largely remains surrounding other crossmodal correspondences between taste and other specific textures we regularly use to describe our food, such as crispy or crunchy. Sweetness has previously been found to be associated with soft textures but our current understanding does not exceed the basic distinction made between roughness and smoothness. Specifically, the role of texture in taste perception remains relatively understudied. The current study consisted of two parts. First, because of the lack of clarity concerning specific associations between basic tastes and textures, an online questionnaire served to assess whether consistent associations between texture words and taste words exist and how these arise intuitively. The second part consisted of a taste experiment with factorial combinations of four tastes and four textures. The results of the questionnaire study showed that consistent associations are made between soft and sweet and between crispy and salty at the conceptual level. The results of the taste experiment largely showed evidence in support of these findings at the perceptual level. In addition, the experiment allowed for a closer look into the complexity found regarding the association between sour and crunchy, and bitter and sandy.
Affiliation(s)
- Eleftheria Pistolas, Laboratory of Experimental Psychology, Department of Brain and Cognition, University of Leuven (KU Leuven), Tiensestraat 102 - box 3711, BE-3000 Leuven, Belgium
- Johan Wagemans, Laboratory of Experimental Psychology, Department of Brain and Cognition, University of Leuven, Belgium
11. Liao C, Sawayama M, Xiao B. Unsupervised learning reveals interpretable latent representations for translucency perception. PLoS Comput Biol 2023;19:e1010878. PMID: 36753520; PMCID: PMC9942964; DOI: 10.1371/journal.pcbi.1010878.
Abstract
Humans constantly assess the appearance of materials to plan actions, such as stepping on icy roads without slipping. Visual inference of materials is important but challenging because a given material can appear dramatically different in various scenes. This problem especially stands out for translucent materials, whose appearance strongly depends on lighting, geometry, and viewpoint. Despite this, humans can still distinguish between different materials, and it remains unsolved how to systematically discover visual features pertinent to material inference from natural images. Here, we develop an unsupervised style-based image generation model to identify perceptually relevant dimensions for translucent material appearances from photographs. We find our model, with its layer-wise latent representation, can synthesize images of diverse and realistic materials. Importantly, without supervision, human-understandable scene attributes, including the object's shape, material, and body color, spontaneously emerge in the model's layer-wise latent space in a scale-specific manner. By embedding an image into the learned latent space, we can manipulate specific layers' latent code to modify the appearance of the object in the image. Specifically, we find that manipulation on the early-layers (coarse spatial scale) transforms the object's shape, while manipulation on the later-layers (fine spatial scale) modifies its body color. The middle-layers of the latent space selectively encode translucency features and manipulation of such layers coherently modifies the translucency appearance, without changing the object's shape or body color. Moreover, we find the middle-layers of the latent space can successfully predict human translucency ratings, suggesting that translucent impressions are established in mid-to-low spatial scale features. 
This layer-wise latent representation allows us to systematically discover perceptually relevant image features for human translucency perception. Together, our findings reveal that learning the scale-specific statistical structure of natural images might be crucial for humans to efficiently represent material properties across contexts.
Affiliation(s)
- Chenxi Liao, Department of Neuroscience, American University, Washington, D.C., United States of America
- Masataka Sawayama, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- Bei Xiao, Department of Computer Science, American University, Washington, D.C., United States of America
12. Linton P, Morgan MJ, Read JCA, Vishwanath D, Creem-Regehr SH, Domini F. New approaches to 3D vision. Philos Trans R Soc Lond B Biol Sci 2023;378:20210443. PMID: 36511413; PMCID: PMC9745878; DOI: 10.1098/rstb.2021.0443.
Abstract
New approaches to 3D vision are enabling new advances in artificial intelligence and autonomous vehicles, a better understanding of how animals navigate the 3D world, and new insights into human perception in virtual and augmented reality. Whilst traditional approaches to 3D vision in computer vision (SLAM: simultaneous localization and mapping), animal navigation (cognitive maps), and human vision (optimal cue integration) start from the assumption that the aim of 3D vision is to provide an accurate 3D model of the world, the new approaches to 3D vision explored in this issue challenge this assumption. Instead, they investigate the possibility that computer vision, animal navigation, and human vision can rely on partial or distorted models or no model at all. This issue also highlights the implications for artificial intelligence, autonomous vehicles, human perception in virtual and augmented reality, and the treatment of visual disorders, all of which are explored by individual articles. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Paul Linton, Presidential Scholars in Society and Neuroscience, Center for Science and Society, Columbia University, New York, NY 10027, USA; Italian Academy for Advanced Studies in America, Columbia University, New York, NY 10027, USA; Visual Inference Lab, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Michael J. Morgan, Department of Optometry and Visual Sciences, City, University of London, Northampton Square, London EC1V 0HB, UK
- Jenny C. A. Read, Biosciences Institute, Newcastle University, Newcastle upon Tyne, Tyne & Wear NE2 4HH, UK
- Dhanraj Vishwanath, School of Psychology and Neuroscience, University of St Andrews, St Andrews, Fife KY16 9JP, UK
- Fulvio Domini, Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI 02912-9067, USA
13. Künstle DE, von Luxburg U, Wichmann FA. Estimating the perceived dimension of psychophysical stimuli using triplet accuracy and hypothesis testing. J Vis 2022;22(13):5. PMID: 36469015; PMCID: PMC9730733; DOI: 10.1167/jov.22.13.5.
Abstract
Vision researchers are interested in mapping complex physical stimuli to perceptual dimensions. Such a mapping can be constructed using multidimensional psychophysical scaling or ordinal embedding methods. Both methods infer coordinates that agree as much as possible with the observer's judgments so that perceived similarity corresponds with distance in the inferred space. However, a fundamental problem of all methods that construct scalings in multiple dimensions is that the inferred representation can only reflect perception if the scale has the correct dimension. Here we propose a statistical procedure to overcome this limitation. The critical elements of our procedure are i) measuring the scale's quality by the number of correctly predicted triplets and ii) performing a statistical test to assess if adding another dimension to the scale improves triplet accuracy significantly. We validate our procedure through extensive simulations. In addition, we study the properties and limitations of our procedure using "real" data from various behavioral datasets from psychophysical experiments. We conclude that our procedure can reliably identify (a lower bound on) the number of perceptual dimensions for a given dataset.
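The logic of the procedure described above (embed the stimuli, score the scale by correctly predicted triplets, and ask whether an extra dimension helps) can be illustrated with a toy simulation. This is our own hedged sketch, not the authors' method: their approach fits an ordinal embedding (e.g., SOE or t-STE) from behavioral triplets and runs a statistical test, whereas here a PCA projection of known coordinates stands in for the embedding and no significance test is performed.

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_dim = 60, 3
X = rng.normal(size=(n, true_dim))  # hypothetical 3D "perceptual" coordinates

# Simulate noiseless triplet judgments: is item j closer to anchor i than item k?
idx = rng.integers(0, n, size=(2000, 3))
idx = idx[(idx[:, 0] != idx[:, 1]) & (idx[:, 0] != idx[:, 2]) & (idx[:, 1] != idx[:, 2])]

def dist(A, a, b):
    return np.linalg.norm(A[a] - A[b], axis=-1)

answers = dist(X, idx[:, 0], idx[:, 1]) < dist(X, idx[:, 0], idx[:, 2])

def triplet_accuracy(emb):
    # fraction of observed triplet judgments the embedding reproduces
    pred = dist(emb, idx[:, 0], idx[:, 1]) < dist(emb, idx[:, 0], idx[:, 2])
    return float(np.mean(pred == answers))

# Stand-in for ordinal embedding: project onto the top-d principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
accs = [triplet_accuracy(Xc @ Vt[:d].T) for d in (1, 2, 3)]

# Triplet accuracy rises with embedding dimension and saturates at the
# true dimension, which is the signature the paper's test looks for.
assert accs[0] < accs[1] < accs[2]
```

In the paper's procedure the jump from `accs[d-1]` to `accs[d]` would be assessed with a hypothesis test on held-out triplets, and the dimension is read off where the improvement stops being significant.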
14. Shi W, Dorsey J, Rushmeier H. Learning-Based Inverse Bi-Scale Material Fitting From Tabular BRDFs. IEEE Trans Vis Comput Graph 2022;28:1810-1823. PMID: 32960764; DOI: 10.1109/tvcg.2020.3026021.
Abstract
Relating small-scale structures to large-scale appearance is a key element in material appearance design. Bi-scale material design requires finding small-scale structures - meso-scale geometry and micro-scale BRDFs - that produce a desired large-scale appearance expressed as a macro-scale BRDF. The adjustment of small-scale geometry and reflectances to achieve a desired appearance can become a tedious trial-and-error process. We present a learning-based solution to fit a target macro-scale BRDF with a combination of a meso-scale geometry and micro-scale BRDF. We confront challenges in representation at both scales. At the large scale we need macro-scale BRDFs that are both compact and expressive. At the small scale we need diverse combinations of geometric patterns and potentially spatially varying micro-BRDFs. For large-scale macro-BRDFs, we propose a novel 2D subset of a tabular BRDF representation that well preserves important appearance features for learning. For small-scale details, we represent geometries and BRDFs in different categories with different physical parameters to define multiple independent continuous search spaces. To build the mapping between large-scale macro-BRDFs and small-scale details, we propose an end-to-end model that takes the subset BRDF as input and performs classification and parameter estimation on small-scale details to find an accurate reconstruction. Compared with other fitting methods, our learning-based solution provides higher reconstruction accuracy and covers a wider gamut of appearance.
|
15
|
Ujitoko Y, Kawabe T. Perceptual judgments for the softness of materials under indentation. Sci Rep 2022; 12:1761. [PMID: 35110650 PMCID: PMC8810927 DOI: 10.1038/s41598-022-05864-x] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Accepted: 01/19/2022] [Indexed: 12/24/2022] Open
Abstract
Humans can judge the softness of elastic materials from visual cues alone. However, the factors contributing to the judgment of visual softness are not yet fully understood. We conducted a psychophysical experiment to determine which factors and motion features contribute to the apparent softness of materials. Observers watched video clips in which materials were indented from the top surface to a certain depth and reported the apparent softness of the materials. The depth and speed of indentation were systematically manipulated, and compliance, a physical characteristic of the materials, was also controlled. Higher indentation speeds resulted in larger softness rating scores, and this variation with indentation speed was successfully explained by image motion speed. The indentation depth had a powerful effect on the softness rating scores, and this variation with indentation depth was consistently explained by motion features related to overall deformation. Higher material compliance resulted in higher softness rating scores, and this variation with material compliance could also be explained by overall deformation. We conclude that the brain makes visual judgments about the softness of materials under indentation on the basis of motion speed and deformation magnitude.
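The reported dependence of softness ratings on indentation speed and depth amounts to regressing rating scores on the manipulated variables. A toy least-squares sketch with made-up weights and stimulus levels (illustrative only, not the study's data or analysis code):

```python
import numpy as np

# hypothetical stimulus levels: indentation speed and depth per video clip
speed = np.array([1.0, 2.0, 4.0, 1.0, 2.0, 4.0])
depth = np.array([5.0, 5.0, 5.0, 10.0, 10.0, 10.0])
# noiseless toy ratings generated from assumed weights 0.5 (speed) and 0.3 (depth)
rating = 0.5 * speed + 0.3 * depth + 1.0

# ordinary least squares recovers the generating weights exactly
A = np.column_stack([speed, depth, np.ones_like(speed)])
coef, *_ = np.linalg.lstsq(A, rating, rcond=None)
print(coef)  # recovers 0.5, 0.3 and the intercept 1.0
```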
Affiliation(s)
- Yusuke Ujitoko
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, 243-0198, Japan
- Takahiro Kawabe
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, 243-0198, Japan
|
16
|
Sawayama M, Dobashi Y, Okabe M, Hosokawa K, Koumura T, Saarela TP, Olkkonen M, Nishida S. Visual discrimination of optical material properties: A large-scale study. J Vis 2022; 22:17. [PMID: 35195670 PMCID: PMC8883156 DOI: 10.1167/jov.22.2.17] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2019] [Accepted: 01/04/2022] [Indexed: 11/24/2022] Open
Abstract
The complex visual processing involved in perceiving object materials can be better elucidated by taking a variety of research approaches. Sharing stimulus and response data is an effective strategy to make the results of different studies directly comparable and can help researchers from different backgrounds enter the field. Here, we constructed a database containing several sets of material images annotated with visual discrimination performance. We created the material images using physically based computer graphics techniques and conducted psychophysical experiments with them in both laboratory and crowdsourcing settings. The observer's task was to discriminate materials on one of six dimensions (gloss contrast, gloss distinctness of image, translucent vs. opaque, metal vs. plastic, metal vs. glass, and glossy vs. painted). The illumination consistency and object geometry were also varied. We used a nonverbal procedure (an oddity task) applicable to diverse use cases, such as cross-cultural, cross-species, clinical, or developmental studies. Results showed that material discrimination depended on illumination and geometry, and that the ability to discriminate the spatial consistency of specular highlights in glossiness perception showed larger individual differences than the other tasks did. In addition, analysis of visual features showed that the parameters of higher-order color texture statistics can partially, but not completely, explain task performance. The results obtained through crowdsourcing were highly correlated with those obtained in the laboratory, suggesting that our database can be used even when experimental conditions are not as strictly controlled as in the laboratory. Several projects using our dataset are underway.
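Because the oddity task is nonverbal and forced-choice, performance is naturally compared against a fixed chance level (1/3 for a three-interval oddity trial, the assumption used here). An exact one-sided binomial test can be written from scratch; the counts below are invented for illustration:

```python
from math import comb

def p_above_chance(correct, trials, chance=1 / 3):
    """One-sided exact binomial p-value for exceeding chance
    in a forced-choice discrimination task."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

print(p_above_chance(20, 30) < 0.05)  # 20/30 correct is well above chance
print(p_above_chance(10, 30) < 0.05)  # 10/30 is exactly the chance rate
```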
Affiliation(s)
- Masataka Sawayama
- Inria, Bordeaux, France
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Kanagawa, Japan
- Yoshinori Dobashi
- Information Media Environment Laboratory, Hokkaido University, Hokkaido, Japan
- Prometech CG Research, Tokyo, Japan
- Makoto Okabe
- Department of Mathematical and Systems Engineering, Graduate School of Engineering, Shizuoka University, Shizuoka, Japan
- Kenchi Hosokawa
- Advanced Comprehensive Research Organization, Teikyo University, Tokyo, Japan
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Kanagawa, Japan
- Takuya Koumura
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Kanagawa, Japan
- Toni P Saarela
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Maria Olkkonen
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Shin'ya Nishida
- Cognitive Informatics Lab, Graduate School of Informatics, Kyoto University, Kyoto, Japan
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Kanagawa, Japan
|
17
|
Hyperspectral characterization of natural lighting environments. PROGRESS IN BRAIN RESEARCH 2022; 273:37-48. [DOI: 10.1016/bs.pbr.2022.04.008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
|
18
|
Wandell BA, Brainard DH, Cottaris NP. Visual encoding: Principles and software. PROGRESS IN BRAIN RESEARCH 2022; 273:199-229. [DOI: 10.1016/bs.pbr.2022.04.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
|
19
|
Morimoto T, Numata A, Fukuda K, Uchikawa K. Luminosity thresholds of colored surfaces are determined by their upper-limit luminances empirically internalized in the visual system. J Vis 2021; 21:3. [PMID: 34874444 PMCID: PMC8662570 DOI: 10.1167/jov.21.13.3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
We typically have a fairly good idea whether a given object is self-luminous or illuminated, but it is not fully understood how we make this judgment. This study aimed to identify determinants of the luminosity threshold, a luminance level at which a surface begins to appear self-luminous. We specifically tested a hypothesis that our visual system knows the maximum luminance level that a surface can reach under the physical constraint that a surface cannot reflect more light than any incident light and applies this prior to determine the luminosity thresholds. Observers were presented with a 2-degree circular test field surrounded by numerous overlapping colored circles and luminosity thresholds were measured as a function of (i) the chromaticity of the test field, (ii) the shape of surrounding color distribution, and (iii) the color of the illuminant of the surrounding colors. We found that the luminosity thresholds peaked around the chromaticity of test illuminants and decreased as the purity of the test chromaticity increased. However, the loci of luminosity thresholds across chromaticities were nearly invariant to the shape of the surrounding color distribution and generally resembled the loci drawn from theoretical upper-limit luminances and upper-limit luminance boundaries of real objects. These trends were particularly evident for illuminants on the black-body locus and did not hold well under atypical illuminants, such as magenta or green. These results support the idea that our visual system empirically internalizes the gamut of surface colors under natural illuminants and a given object appears self-luminous when its luminance exceeds this internalized upper-limit luminance.
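The paper's "upper-limit luminance" follows from a simple physical constraint: a reflectance spectrum can nowhere exceed 1, so a surface's luminance can never exceed that of an ideal white under the same illuminant. A toy numeric check with crude stand-in spectra (the luminous-efficiency curve below is a Gaussian approximation, not the CIE function):

```python
import numpy as np

wl = np.arange(400, 701, 10)                        # wavelength samples, nm
V = np.exp(-0.5 * ((wl - 555) / 50.0) ** 2)         # crude luminous-efficiency curve
illum = np.ones_like(wl, dtype=float)               # equal-energy illuminant

refl = np.clip(0.8 + 0.001 * (wl - 550), 0.0, 1.0)  # a physical reflectance (<= 1)
lum = np.sum(illum * refl * V)                      # luminance ~ sum E(l)*R(l)*V(l)
upper = np.sum(illum * 1.0 * V)                     # ideal-white upper limit

print(lum <= upper)  # True: luminances above this limit signal self-luminosity
```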
Affiliation(s)
- Takuma Morimoto
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Department of General Psychology, Justus-Liebig-Universität Gießen, Gießen, Germany
- Ai Numata
- Department of Information Processing, Tokyo Institute of Technology, Yokohama, Japan
- Kazuho Fukuda
- Department of Information Design, Kogakuin University, Tokyo, Japan
- Keiji Uchikawa
- Human Media Research Center, Kanagawa Institute of Technology, Atsugi, Japan
|
20
|
Lauer T, Schmidt F, Võ MLH. The role of contextual materials in object recognition. Sci Rep 2021; 11:21988. [PMID: 34753999 PMCID: PMC8578445 DOI: 10.1038/s41598-021-01406-z] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2021] [Accepted: 10/22/2021] [Indexed: 01/01/2023] Open
Abstract
While scene context is known to facilitate object recognition, little is known about which contextual "ingredients" are at the heart of this phenomenon. Here, we address the question of whether the materials that frequently occur in scenes (e.g., tiles in a bathroom) associated with specific objects (e.g., a perfume) are relevant for the processing of that object. To this end, we presented photographs of consistent and inconsistent objects (e.g., perfume vs. pinecone) superimposed on scenes (e.g., a bathroom) and close-ups of materials (e.g., tiles). In Experiment 1, consistent objects on scenes were named more accurately than inconsistent ones, while there was only a marginal consistency effect for objects on materials. Also, we did not find any consistency effect for scrambled materials that served as color control condition. In Experiment 2, we recorded event-related potentials and found N300/N400 responses-markers of semantic violations-for objects on inconsistent relative to consistent scenes. Critically, objects on materials triggered N300/N400 responses of similar magnitudes. Our findings show that contextual materials indeed affect object processing-even in the absence of spatial scene structure and object content-suggesting that material is one of the contextual "ingredients" driving scene context effects.
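The N300/N400 effects reported here boil down to comparing mean ERP amplitude in a fixed time window across consistency conditions. A simulated single-channel sketch (all numbers invented; a real analysis would baseline-correct and test across participants):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(500)                       # 1 ms samples from stimulus onset
win = (t >= 300) & (t < 500)             # an assumed N400 window, 300-500 ms

consistent = rng.normal(0.0, 1.0, size=(40, 500))    # 40 trials x 500 samples
inconsistent = rng.normal(0.0, 1.0, size=(40, 500))
inconsistent[:, win] -= 2.0              # simulated negativity for inconsistent objects

# mean-amplitude difference in the window (inconsistent minus consistent)
diff = inconsistent[:, win].mean() - consistent[:, win].mean()
print(diff < -1.0)  # True: the simulated semantic-violation negativity
```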
Affiliation(s)
- Tim Lauer
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Theodor-W.-Adorno-Platz 6, PEG 5.G144, 60323, Frankfurt am Main, Germany
- Filipp Schmidt
- Department of Experimental Psychology, Justus Liebig University Giessen, 35394, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
- Melissa L-H Võ
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Theodor-W.-Adorno-Platz 6, PEG 5.G144, 60323, Frankfurt am Main, Germany
|
21
|
Preißler L, Jovanovic B, Munzert J, Schmidt F, Fleming RW, Schwarzer G. Effects of visual and visual-haptic perception of material rigidity on reaching and grasping in the course of development. Acta Psychol (Amst) 2021; 221:103457. [PMID: 34883348 DOI: 10.1016/j.actpsy.2021.103457] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2020] [Revised: 11/25/2021] [Accepted: 11/29/2021] [Indexed: 11/17/2022] Open
Abstract
The development of material property perception for grasping objects is not well explored in early childhood. We therefore investigated infants', 3-year-old children's, and adults' unimanual grasping behavior and reaching kinematics for objects of different rigidity using a 3D motion capture system. In Experiment 1, 11-month-old infants (and, for comparison, adults), and in Experiment 2, 3-year-old children, were encouraged to lift relatively heavy objects by one of two handles differing in rigidity after visual (Condition 1) or visual-haptic exploration (Condition 2). Experiment 1 revealed that 11-month-olds, after visual object exploration, showed no significant material preference and thus did not take the material into account to facilitate grasping. After visual-haptic object exploration, and when grasping the contralateral handles, infants showed an unexpected preference for the soft handles, which made it harder to lift the object. In contrast, adults generally grasped the rigid handle in both conditions, exploiting their knowledge about efficient and functional grasping. Reaching kinematics were barely affected by rigidity, but rather by condition and age. Experiment 2 revealed that 3-year-olds no longer exhibit a preference for grasping soft handles, but also no adult-like preference for rigid handles in either condition. This suggests that material rigidity plays a minor role in infants' grasping behavior when only visual material information is available. Also, 3-year-olds seem to be at an intermediate stage in the development from (1) preferring the pleasant sensation of a soft fabric to (2) preferring the efficient rigid handle.
Affiliation(s)
- Lucie Preißler
- Department of Developmental Psychology, Justus-Liebig-University Giessen, Otto-Behaghel-Str. 10 F1, 35394 Giessen, Germany
- Bianca Jovanovic
- Department of Developmental Psychology, Justus-Liebig-University Giessen, Otto-Behaghel-Str. 10 F1, 35394 Giessen, Germany
- Jörn Munzert
- Department of Sports Science, Justus-Liebig-University Giessen, Kugelberg 62, 35394 Giessen, Germany
- Filipp Schmidt
- Department of General Psychology, Justus-Liebig-University Giessen, Otto-Behaghel-Str. 10 F2, 35394 Giessen, Germany
- Roland W Fleming
- Department of General Psychology, Justus-Liebig-University Giessen, Otto-Behaghel-Str. 10 F2, 35394 Giessen, Germany
- Gudrun Schwarzer
- Department of Developmental Psychology, Justus-Liebig-University Giessen, Otto-Behaghel-Str. 10 F1, 35394 Giessen, Germany
|
22
|
Prokott KE, Tamura H, Fleming RW. Gloss perception: Searching for a deep neural network that behaves like humans. J Vis 2021; 21:14. [PMID: 34817568 PMCID: PMC8626854 DOI: 10.1167/jov.21.12.14] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2020] [Accepted: 08/14/2021] [Indexed: 11/24/2022] Open
Abstract
The visual computations underlying human gloss perception remain poorly understood, and to date there is no image-computable model that reproduces human gloss judgments independent of shape and viewing conditions. Such a model could provide a powerful platform for testing hypotheses about the detailed workings of surface perception. Here, we made use of recent developments in artificial neural networks to test how well we could recreate human responses in a high-gloss versus low-gloss discrimination task. We rendered >70,000 scenes depicting familiar objects made of either mirror-like or near-matte textured materials. We trained numerous classifiers to distinguish the two materials in our images-ranging from linear classifiers using simple pixel statistics to convolutional neural networks (CNNs) with up to 12 layers-and compared their classifications with human judgments. To determine which classifiers made the same kinds of errors as humans, we painstakingly identified a set of 60 images in which human judgments are consistently decoupled from ground truth. We then conducted a Bayesian hyperparameter search to identify which out of several thousand CNNs most resembled humans. We found that, although architecture has only a relatively weak effect, high correlations with humans are somewhat more typical in networks of shallower to intermediate depths (three to five layers). We also trained deep convolutional generative adversarial networks (DCGANs) of different depths to recreate images based on our high- and low-gloss database. Responses from human observers show that two layers in a DCGAN can recreate gloss recognizably for human observers. Together, our results indicate that human gloss classification can best be explained by computations resembling early to mid-level vision.
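The key model-selection step, scoring classifiers by whether they reproduce human responses on the diagnostic images where humans depart from ground truth, can be sketched with toy binary judgments (invented labels, not the study's 60-image set):

```python
import numpy as np

ground_truth = np.array([1, 1, 0, 0, 1, 0])   # 1 = high gloss
human = np.array([1, 0, 0, 1, 1, 0])          # humans err on images 1 and 3
model = np.array([1, 0, 0, 1, 0, 0])          # a candidate CNN's classifications

diagnostic = human != ground_truth            # images where judgments decouple
agree = np.mean(model[diagnostic] == human[diagnostic])
print(agree)  # 1.0: this model reproduces both human 'errors'
```

Ranking candidate networks by this agreement score, rather than by accuracy against ground truth, is what isolates models that err the way humans do.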
Affiliation(s)
- Konrad Eugen Prokott
- Department of Experimental Psychology, Justus-Liebig-University Giessen, Giessen, Germany
- Hideki Tamura
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Aichi, Japan
- Japan Society for the Promotion of Science, Chiyoda, Tokyo, Japan
- Roland W Fleming
- Department of Experimental Psychology, Justus-Liebig-University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
|
23
|
Van Zuijlen MJP, Lin H, Bala K, Pont SC, Wijntjes MWA. Materials In Paintings (MIP): An interdisciplinary dataset for perception, art history, and computer vision. PLoS One 2021; 16:e0255109. [PMID: 34437544 PMCID: PMC8389402 DOI: 10.1371/journal.pone.0255109] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2020] [Accepted: 07/09/2021] [Indexed: 11/18/2022] Open
Abstract
In this paper, we capture and explore painterly depictions of materials to enable the study of depiction and perception of materials through the artists' eye. We annotated a dataset of 19k paintings with 200k+ bounding boxes, from which polygon segments were automatically extracted. Each bounding box was assigned a coarse material label (e.g., fabric), and half were also assigned a fine-grained label (e.g., velvety, silky). The dataset in its entirety is available for browsing and downloading at materialsinpaintings.tudelft.nl. We demonstrate the cross-disciplinary utility of our dataset by presenting novel findings across human perception, art history, and computer vision. Our experiments include a demonstration of how painters create convincing depictions using a stylized approach. We further provide an analysis of the spatial and probabilistic distributions of materials depicted in paintings, in which we show, for example, that strong patterns exist for material presence and location. Furthermore, we demonstrate how paintings could be used to build more robust computer vision classifiers by learning a more perceptually relevant feature representation. Additionally, we demonstrate that training classifiers on paintings can uncover hidden perceptual cues by visualizing the features used by the classifiers. We conclude that our dataset of painterly material depictions is a rich source of insights into the depiction and perception of materials across multiple disciplines and hope that its release will drive multidisciplinary research.
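Analyses of which materials painters depict most often reduce to counting coarse labels over bounding boxes. A minimal sketch with a handful of invented annotations (not drawn from the actual dataset):

```python
from collections import Counter

# hypothetical coarse material labels for a few annotated bounding boxes
labels = ["fabric", "wood", "skin", "fabric", "metal", "fabric", "wood"]
freq = Counter(labels)
print(freq.most_common(2))  # [('fabric', 3), ('wood', 2)]
```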
Affiliation(s)
| | - Hubert Lin
- Computer Science Department, Cornell University, Ithaca, New York, United States of America
| | - Kavita Bala
- Computer Science Department, Cornell University, Ithaca, New York, United States of America
| | - Sylvia C Pont
- Perceptual Intelligence Lab, Delft University of Technology, Delft, The Netherlands
| | - Maarten W A Wijntjes
- Perceptual Intelligence Lab, Delft University of Technology, Delft, The Netherlands
| |
Collapse
|
24
|
Orima T, Motoyoshi I. Analysis and Synthesis of Natural Texture Perception From Visual Evoked Potentials. Front Neurosci 2021; 15:698940. [PMID: 34381330 PMCID: PMC8350323 DOI: 10.3389/fnins.2021.698940] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2021] [Accepted: 06/21/2021] [Indexed: 11/13/2022] Open
Abstract
The primate visual system analyzes statistical information in natural images and uses it for the immediate perception of scenes, objects, and surface materials. To investigate the dynamical encoding of image statistics in the human brain, we measured visual evoked potentials (VEPs) for 166 natural textures and their synthetic versions, and performed a reverse-correlation analysis of the VEPs and representative texture statistics of the image. The analysis revealed occipital VEP components strongly correlated with particular texture statistics. VEPs correlated with low-level statistics, such as subband SDs, emerged rapidly from 100 to 250 ms in a spatial frequency dependent manner. VEPs correlated with higher-order statistics, such as subband kurtosis and cross-band correlations, were observed at slightly later times. Moreover, these robust correlations enabled us to inversely estimate texture statistics from VEP signals via linear regression and to reconstruct texture images that appear similar to those synthesized with the original statistics. Additionally, we found significant differences in VEPs at 200-300 ms between some natural textures and their Portilla-Simoncelli (PS) synthesized versions, even though they shared almost identical texture statistics. This differential VEP was related to the perceptual "unnaturalness" of PS-synthesized textures. These results suggest that the visual cortex rapidly encodes image statistics hidden in natural textures specifically enough to predict the visual appearance of a texture, while it also represents high-level information beyond image statistics, and that electroencephalography can be used to decode these cortical signals.
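The inverse estimation step, linearly regressing a texture statistic from multi-channel VEP signals, can be sketched with simulated data (dimensions, noise level, and the ridge penalty are arbitrary assumptions, not the study's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels = 200, 8
stat = rng.normal(size=(n_trials, 1))         # e.g. one subband SD per texture
W_true = rng.normal(size=(n_channels, 1))     # assumed stat-to-VEP projection
veps = stat @ W_true.T + 0.05 * rng.normal(size=(n_trials, n_channels))

# ridge regression from VEP features back to the texture statistic
lam = 1e-3
W = np.linalg.solve(veps.T @ veps + lam * np.eye(n_channels), veps.T @ stat)
r = np.corrcoef((veps @ W).ravel(), stat.ravel())[0, 1]
print(r > 0.9)  # True on this low-noise simulation
```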
Affiliation(s)
- Taiki Orima
- Department of Life Sciences, The University of Tokyo, Tokyo, Japan
- Japan Society for the Promotion of Science, Tokyo, Japan
- Isamu Motoyoshi
- Department of Life Sciences, The University of Tokyo, Tokyo, Japan
|
25
|
Unsupervised learning predicts human perception and misperception of gloss. Nat Hum Behav 2021; 5:1402-1417. [PMID: 33958744 PMCID: PMC8526360 DOI: 10.1038/s41562-021-01097-6] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2020] [Accepted: 03/09/2021] [Indexed: 02/01/2023]
Abstract
Reflectance, lighting and geometry combine in complex ways to create images. How do we disentangle these to perceive individual properties, such as surface glossiness? We suggest that brains disentangle properties by learning to model statistical structure in proximal images. To test this hypothesis, we trained unsupervised generative neural networks on renderings of glossy surfaces and compared their representations with human gloss judgements. The networks spontaneously cluster images according to distal properties such as reflectance and illumination, despite receiving no explicit information about these properties. Intriguingly, the resulting representations also predict the specific patterns of ‘successes’ and ‘errors’ in human perception. Linearly decoding specular reflectance from the model’s internal code predicts human gloss perception better than ground truth, supervised networks or control models, and it predicts, on an image-by-image basis, illusions of gloss perception caused by interactions between material, shape and lighting. Unsupervised learning may underlie many perceptual dimensions in vision and beyond. Storrs et al. train unsupervised generative neural networks on glossy surfaces and show how gloss perception in humans may emerge in an unsupervised fashion from learning to model statistical structure.
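The core claim, that unsupervised learning clusters images by distal properties it was never told about, can be caricatured with PCA on toy "images" whose pixels depend on a hidden gloss factor (entirely synthetic; the paper uses generative neural networks, not PCA):

```python
import numpy as np

rng = np.random.default_rng(1)
gloss = np.repeat([0.0, 1.0], 50)                 # hidden distal property
template = rng.normal(2.0, 0.1, size=(1, 20))     # pixel pattern gloss adds
images = gloss[:, None] * template + 0.1 * rng.normal(size=(100, 20))

# unsupervised step: first principal component of the centered images
X = images - images.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = U[:, 0] * S[0]

# the unsupervised code recovers the hidden gloss factor almost perfectly
r = abs(np.corrcoef(pc1, gloss)[0, 1])
print(r > 0.9)  # True
```

Linearly decoding the latent code, as in the last step here, is the analogue of the paper's linear read-out of specular reflectance from the network's internal representation.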
|
26
|
Schmid AC, Boyaci H, Doerschner K. Dynamic dot displays reveal material motion network in the human brain. Neuroimage 2020; 228:117688. [PMID: 33385563 DOI: 10.1016/j.neuroimage.2020.117688] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2020] [Revised: 11/20/2020] [Accepted: 12/19/2020] [Indexed: 11/26/2022] Open
Abstract
There is growing research interest in the neural mechanisms underlying the recognition of material categories and properties. This research field, however, is more recent and more limited than investigations of the neural mechanisms underlying object and scene category recognition. Motion is particularly important for the perception of non-rigid materials, but the neural basis of non-rigid material motion remains unexplored. Using fMRI, we investigated which brain regions respond preferentially to material motion versus other types of motion. We introduce a new database of stimuli - dynamic dot materials - that are animations of moving dots that induce vivid percepts of various materials in motion, e.g. flapping cloth, liquid waves, wobbling jelly. Control stimuli were scrambled versions of these same animations and rigid three-dimensional rotating dots. Results showed that isolating material motion properties with dynamic dots (in contrast with other kinds of motion) activates a network of cortical regions in both ventral and dorsal visual pathways, including areas normally associated with the processing of surface properties and shape, and extending to somatosensory and premotor cortices. We suggest that such a widespread preference for material motion is due to strong associations between stimulus properties. For example, viewing dots moving in a specific pattern not only elicits percepts of material motion; one perceives a flexible, non-rigid shape, identifies the object as a cloth flapping in the wind, infers the object's weight under gravity, and anticipates how it would feel to reach out and touch the material. These results are a first important step in mapping out the cortical architecture and dynamics of material-related motion processing.
Affiliation(s)
- Alexandra C Schmid
- Department of Psychology, Justus Liebig University Giessen, Giessen 35394, Germany
- Huseyin Boyaci
- Department of Psychology, Justus Liebig University Giessen, Giessen 35394, Germany
- Department of Psychology, A.S. Brain Research Center, and National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
- Katja Doerschner
- Department of Psychology, Justus Liebig University Giessen, Giessen 35394, Germany
- Department of Psychology, A.S. Brain Research Center, and National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
|
27
|
Schmidt F, Fleming RW, Valsecchi M. Softness and weight from shape: Material properties inferred from local shape features. J Vis 2020; 20:2. [PMID: 32492099 PMCID: PMC7416911 DOI: 10.1167/jov.20.6.2] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023] Open
Abstract
Object shape is an important cue to material identity and for the estimation of material properties. Shape features can affect material perception at different levels: at a microscale (surface roughness), mesoscale (textures and local object shape), or megascale (global object shape) level. Examples for local shape features include ripples in drapery, clots in viscous liquids, or spiraling creases in twisted objects. Here, we set out to test the role of such shape features on judgments of material properties softness and weight. For this, we created a large number of novel stimuli with varying surface shape features. We show that those features have distinct effects on softness and weight ratings depending on their type, as well as amplitude and frequency, for example, increasing numbers and pointedness of spikes makes objects appear harder and heavier. By also asking participants to name familiar objects, materials, and transformations they associate with our stimuli, we can show that softness and weight judgments do not merely follow from semantic associations between particular stimuli and real-world object shapes. Rather, softness and weight are estimated from surface shape, presumably based on learned heuristics about the relationship between a particular expression of surface features and material properties. In line with this, we show that correlations between perceived softness or weight and surface curvature vary depending on the type of surface feature. We conclude that local shape features have to be considered when testing the effects of shape on the perception of material properties such as softness and weight.
|
28
|
Abstract
Many objects that we encounter have typical material qualities: spoons are hard, pillows are soft, and Jell-O dessert is wobbly. Over a lifetime of experiences, strong associations between an object and its typical material properties may be formed, and these associations not only include how glossy, rough, or pink an object is, but also how it behaves under force: we expect knocked over vases to shatter, popped bike tires to deflate, and gooey grilled cheese to hang between two slices of bread when pulled apart. Here we ask how such rich visual priors affect the visual perception of material qualities and present a particularly striking example of expectation violation. In a cue conflict design, we pair computer-rendered familiar objects with surprising material behaviors (a linen curtain shattering, a porcelain teacup wrinkling, etc.) and find that material qualities are not solely estimated from the object's kinematics (i.e., its physical [atypical] motion while shattering, wrinkling, wobbling etc.); rather, material appearance is sometimes “pulled” toward the “native” motion, shape, and optical properties that are associated with this object. Our results, in addition to patterns we find in response time data, suggest that visual priors about materials can set up high-level expectations about complex future states of an object and show how these priors modulate material appearance.
Affiliation(s)
- Katja Doerschner
- Justus Liebig University, Giessen, Germany; Bilkent University, Ankara, Turkey

29
Abstract
A key challenge for the visual system is the extraction of constant object properties from sensory information that varies moment by moment with changes in viewing conditions. Although successful performance in constancy tasks requires cooperation between perception and working memory, the function of the memory system has been under-represented in recent material perception literature. Here, we addressed the limits of material constancy by elucidating whether and how working memory is involved in constancy tasks, using a variety of material stimuli such as metals, glass, and translucent objects. We conducted experiments with a simultaneous and a successive matching-to-sample paradigm in which participants matched the perceived material properties of objects with or without a temporal delay under varying illumination contexts. The current study combined a detailed analysis of matching errors, data on strategy use obtained via a self-report questionnaire, and statistical image analysis of the diagnostic image cues used for material discrimination. We found comparable material constancy in the simultaneous and successive matching conditions, and the results suggest that, in both conditions, participants used similar information-processing strategies for discriminating materials. The study provides converging evidence on the critical role of working memory in material constancy, where working memory serves as a shared processing bottleneck that constrains both simultaneous and successive material constancy.
Affiliation(s)
- Hiroyuki Tsuda
- Keio Advanced Research Center, Keio University, Tokyo, Japan
- Munendo Fujimichi
- Graduate School of Human and Environmental Studies, Kyoto University, Kyoto, Japan
- Jun Saiki
- Graduate School of Human and Environmental Studies, Kyoto University, Kyoto, Japan

30
Ingvarsdóttir KÓ, Balkenius C. The Visual Perception of Material Properties Affects Motor Planning in Prehension: An Analysis of Temporal and Spatial Components of Lifting Cups. Front Psychol 2020; 11:215. [PMID: 32132955 PMCID: PMC7040203 DOI: 10.3389/fpsyg.2020.00215] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2019] [Accepted: 01/30/2020] [Indexed: 11/23/2022] Open
Abstract
The current study examined the role of visually perceived material properties in motor planning, analyzing the temporal and spatial components of motor movements during a seated reaching task. We recorded the hand movements of 14 participants in three dimensions while they lifted and transported paper cups that differed in weight and glossiness. Kinematic and spatial analyses revealed that speed-accuracy trade-offs depended on the visual material properties of the objects: participants reached more slowly and grabbed closer to the center of mass for stimuli that had to be handled with greater precision. We found that grasp preparation during the first encounters with the cups was governed not only by the anticipated weight of the cups, but also by their visual material properties, namely glossiness. After a series of object lifts, the execution of reaching, the grip position, and the transportation of the cups from one location to another were predominantly guided by object weight. We also found the planning phase of reaching to be guided by expectations of hardness and surface gloss. The findings support a role for general knowledge of material properties in reach-to-grasp movements, in which visual material properties are incorporated into the spatio-temporal components of the movement.

31
Abstract
Materials with complex appearances, like textiles and foodstuffs, pose challenges for conventional theories of vision. But recent advances in unsupervised deep learning provide a framework for explaining how we learn to see them. We suggest that perception does not involve estimating physical quantities like reflectance or lighting. Instead, representations emerge from learning to encode and predict the visual input as efficiently and accurately as possible. Neural networks can be trained to compress natural images or to predict frames in movies without 'ground truth' data about the outside world. Yet, to succeed, such systems may automatically discover how to disentangle distal causal factors. Such 'statistical appearance models' potentially provide a coherent explanation of both failures and successes in perception.
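The "statistical appearance model" idea in this abstract (representations that emerge purely from learning to compress the input, with no ground-truth labels about the outside world) can be illustrated with a toy linear sketch. A PCA-based linear autoencoder stands in here for the deep networks the abstract refers to, and the data are entirely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "images": 50 pixels generated from 3 hidden causal factors plus noise.
# The learner never sees the causes, only the pixels.
latents = rng.standard_normal((1000, 3))
mixing = rng.standard_normal((3, 50))
images = latents @ mixing + 0.1 * rng.standard_normal((1000, 50))

# PCA as a minimal linear "autoencoder": compress each image to 3 numbers,
# then reconstruct, trained only on the images themselves.
centered = images - images.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
codes = centered @ vt[:3].T          # 3-D encoding of each image
recon = codes @ vt[:3]               # decoded reconstruction
err = np.mean((centered - recon) ** 2)
# The learned 3-D code spans the subspace of the hidden causal factors,
# so reconstruction error is close to the noise floor.
```

The point of the sketch is only that efficient compression, with no access to "ground truth", can recover a code aligned with the distal causes of the input.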

32

33
Kawabe T. Visual assessment of causality in the Poisson effect. Sci Rep 2019; 9:14993. [PMID: 31628392 PMCID: PMC6802190 DOI: 10.1038/s41598-019-51509-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2019] [Accepted: 10/02/2019] [Indexed: 11/25/2022] Open
Abstract
When a material is stretched along a spatial axis, it is causally compressed along the orthogonal axis, as quantified in the Poisson effect. The present study examined how human observers assess this causality. Stimuli were video clips of a white rectangular region that was horizontally stretched while it was vertically compressed, with spatially sinusoidal modulation of the magnitude of vertical compressions. It was found that the Poisson’s ratio—a well-defined index of the Poisson effect—was not an explanatory factor for the degree of reported causality. Instead, reported causality was explained by image features related to deformation magnitudes. Comparing a material’s shape before and after deformation was not always required for the causality assessment. This suggests that human observers determine causality in the Poisson effect by using heuristics based on image features not necessarily related to the physical properties of the material.
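For reference, Poisson's ratio (the "well-defined index" the abstract contrasts with observers' heuristics) is simply the negative ratio of transverse to axial strain. A minimal sketch, with hypothetical strain values:

```python
def poisson_ratio(axial_strain, transverse_strain):
    """Poisson's ratio: negative ratio of transverse to axial strain."""
    return -transverse_strain / axial_strain

# A sample stretched 2% horizontally while compressing 0.6% vertically:
nu = poisson_ratio(0.02, -0.006)   # 0.3, a value typical of many solids
```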

34
Abstract
To understand the processes behind seeing light, we need to integrate knowledge about the incoming optical structure, its perception, and how light interacts with material, shape, and space, objectively and subjectively. To that end, we need a novel approach to the science of light, namely, a transdisciplinary science of appearance, integrating optical, perceptual, and design knowledge and methods. In this article, I review existing literature as a basis for such a synthesis, which should discuss light in its full complexity, including its spatial properties and interactions with materials, shape, and space. I propose to investigate this by representing the endless variety of light, materials, shapes, and space as canonical modes and their combinations.
Affiliation(s)
- Sylvia C Pont
- Perceptual Intelligence Lab, Department of Industrial Design Engineering, Delft University of Technology, 2628CE Delft, Netherlands

35
Wendt G, Faul F. Factors Influencing the Detection of Spatially-Varying Surface Gloss. Iperception 2019; 10:2041669519866843. [PMID: 31523415 PMCID: PMC6732868 DOI: 10.1177/2041669519866843] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2019] [Accepted: 07/10/2019] [Indexed: 11/15/2022] Open
Abstract
In this study, we investigate the ability of human observers to detect spatial inhomogeneities in the glossiness of a surface and how performance in this task depends on several context factors. We used computer-generated stimuli showing a single object in three-dimensional space whose surface was split into two spatial areas with different microscale smoothness. The context factors were the kind of illumination, the object's shape, the availability of motion information, the degree of edge blurring, the spatial proportions between the two areas of different smoothness, and the general smoothness level. Detection thresholds were determined using a two-alternative forced-choice (2AFC) task implemented in a double random staircase procedure, in which the subjects had to indicate for each stimulus whether or not the surface appeared to have a spatially uniform material. We found evidence that two different cues are used for this task: luminance differences and differences in highlight properties between areas of different microscale smoothness. While the visual system seems to be highly sensitive in detecting gloss differences based on luminance contrast information, detection thresholds were considerably higher when the judgment was mainly based on differences in highlight features, such as their size, intensity, and sharpness.
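The double random staircase procedure described here can be sketched as two interleaved 1-up/1-down tracks starting from different levels. The simulated observer, threshold value, step size, and noise level below are all hypothetical illustrations, not parameters from the study:

```python
import random

def run_staircase(true_threshold, start_level, step, n_trials, seed=0):
    """One 1-up/1-down staircase run against a simulated 2AFC observer."""
    rng = random.Random(seed)
    level = start_level
    for _ in range(n_trials):
        # Observer detects the gloss inhomogeneity when the smoothness
        # difference exceeds threshold, perturbed by decision noise.
        detected = level + rng.gauss(0.0, 0.05) > true_threshold
        # 1-up/1-down: make the task harder after a hit, easier after a miss.
        level += -step if detected else step
        level = max(level, 0.0)
    return level

# Two interleaved staircases, starting above and below the threshold region;
# both converge toward the simulated observer's threshold (0.5 here).
estimates = [run_staircase(0.5, s, 0.05, 60, seed=i)
             for i, s in enumerate((0.9, 0.1))]
```

In practice the two tracks are randomly interleaved trial by trial so observers cannot anticipate the stimulus level; the final levels (or reversal averages) estimate the detection threshold.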
Affiliation(s)
- Gunnar Wendt
- Christian-Albrechts-Universität zu Kiel, Institut für Psychologie, Kiel, Germany
- Franz Faul
- Christian-Albrechts-Universität zu Kiel, Institut für Psychologie, Kiel, Germany

36
Bi W, Jin P, Nienborg H, Xiao B. Manipulating patterns of dynamic deformation elicits the impression of cloth with varying stiffness. J Vis 2019; 19:18. [DOI: 10.1167/19.5.18] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Affiliation(s)
- Wenyan Bi
- Department of Computer Science, American University, Washington, DC, USA
- ://sites.google.com/site/wenyanbi0819
- Peiran Jin
- Department of Physics, Georgetown University, Washington, DC, USA
- Hendrikje Nienborg
- Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- ://www.cin.uni-tuebingen.de/research/research-groups/junior-research-groups/neurophysiology-of-visual-and-decision-processes/staff/person-detail/dr-hendrikje-nienborg.html
- Bei Xiao
- Department of Computer Science, American University, Washington, DC, USA
- ://sites.google.com/site/beixiao/

37
Dövencioglu DN, van Doorn A, Koenderink J, Doerschner K. Seeing through transparent layers. J Vis 2019; 18:25. [PMID: 30267077 DOI: 10.1167/18.9.25] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
The human visual system is remarkably good at decomposing local and global deformations in the flow of visual information into different perceptual layers, a critical ability for daily tasks such as driving through rain or fog or catching that evasive trout. In these scenarios, changes in the visual information might be due to a deforming object, to deformations caused by a transparent medium such as structured glass or water, or to a combination of these. How does the visual system use image deformations to make sense of layering due to transparent materials? We used eidolons to investigate equivalence classes for perceptually similar transparent layers. We created a stimulus space for perceptual equivalents of a fiducial scene by systematically varying the local disarray parameters reach and grain. This disarray in eidolon space leads to distinct impressions of transparency: high reach and grain values vividly resemble water, whereas smaller grain values appear diffuse, like structured glass. We asked observers to adjust image deformations so that the objects in the scene looked like they were seen (a) under water, (b) behind haze, or (c) behind structured glass. Observers adjusted image deformation parameters by moving the mouse horizontally (grain) and vertically (reach). For two conditions, water and glass, we observed high intraobserver consistency: responses were not random. Responses yielded a concentrated equivalence class for water and structured glass.
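A rough numpy-only sketch of "local disarray" in the spirit of eidolons, where reach scales the displacement amplitude and grain its spatial coherence. This is an illustrative approximation under assumed definitions of the two parameters, not the original eidolon factory code:

```python
import numpy as np

def disarray_field(shape, reach, grain, seed=0):
    """Random pixel-displacement field: `reach` sets amplitude (pixels),
    `grain` the spatial scale of coherence (larger = smoother, more
    water-like; smaller = finer, more glass-like disarray)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(shape)
    # Low-pass filter the noise in the Fourier domain at scale `grain`.
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    lowpass = np.exp(-2 * (np.pi * grain) ** 2 * (fy ** 2 + fx ** 2))
    smooth = np.fft.ifft2(np.fft.fft2(noise) * lowpass).real
    smooth /= smooth.std() + 1e-12       # normalize to unit variance
    return reach * smooth                # displacement map, in pixels

# One field each for horizontal and vertical displacement; sampling the
# image at (x + dx, y + dy) yields the deformed "eidolon-like" version.
dx = disarray_field((64, 64), reach=3.0, grain=4.0, seed=0)
dy = disarray_field((64, 64), reach=3.0, grain=4.0, seed=1)
```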
Affiliation(s)
- Dicle N Dövencioglu
- Department of Psychology, Justus-Liebig-University Giessen, Giessen, Germany; National Magnetic Resonance Research Center, Bilkent University, Ankara, Turkey
- Andrea van Doorn
- KU Leuven, Leuven, Belgium; Utrecht University, Utrecht, The Netherlands
- Jan Koenderink
- KU Leuven, Leuven, Belgium; Utrecht University, Utrecht, The Netherlands
- Katja Doerschner
- Department of Psychology, Bilkent University, Ankara, Turkey; National Magnetic Resonance Research Center, Bilkent University, Ankara, Turkey; Department of Psychology, Justus-Liebig-University Giessen, Giessen, Germany

38
Radonjić A, Cottaris NP, Brainard DH. The relative contribution of color and material in object selection. PLoS Comput Biol 2019; 15:e1006950. [PMID: 30978187 PMCID: PMC6490924 DOI: 10.1371/journal.pcbi.1006950] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2018] [Revised: 04/30/2019] [Accepted: 03/10/2019] [Indexed: 01/19/2023] Open
Abstract
Object perception is inherently multidimensional: information about color, material, texture and shape all guide how we interact with objects. We developed a paradigm that quantifies how two object properties (color and material) combine in object selection. On each experimental trial, observers viewed three blob-shaped objects (the target and two tests) and selected the test that was more similar to the target. Across trials, the target object was fixed, while the tests varied in color (across 7 levels) and material (also 7 levels), yielding 49 possible stimuli. We used an adaptive trial selection procedure (Quest+) to present, on each trial, the test pair that is most informative about the underlying processes that drive selection. We present a novel computational model that allows us to describe observers' selection data in terms of (1) the underlying perceptual stimulus representation and (2) a color-material weight, which quantifies the relative importance of color vs. material in selection. We document large individual differences in the color-material weight across the 12 observers we tested. Furthermore, our analyses reveal limits on how precisely selection data can simultaneously constrain the perceptual representations and the color-material weight. These limits should guide future efforts towards understanding the multidimensional nature of object perception.
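The color-material weight can be sketched as the relative contribution of the two perceptual dimensions to a distance that drives selection. The coordinates and weight values below are hypothetical one-dimensional scales; the paper's actual model additionally infers the perceptual positions from the selection data:

```python
import math

def select_test(target, test_a, test_b, w_color):
    """Pick the test closer to the target under a weighted
    color-material distance (w_color in [0, 1])."""
    def dist(test):
        dc = test["color"] - target["color"]
        dm = test["material"] - target["material"]
        return math.sqrt(w_color * dc ** 2 + (1.0 - w_color) * dm ** 2)
    return "a" if dist(test_a) < dist(test_b) else "b"

target = {"color": 0.0, "material": 0.0}
test_a = {"color": 3.0, "material": 0.0}   # matches material, not color
test_b = {"color": 0.0, "material": 1.0}   # matches color, not material
# An observer who weights color heavily tolerates the material mismatch:
print(select_test(target, test_a, test_b, w_color=0.9))   # prints "b"
# A material-weighted observer makes the opposite choice:
print(select_test(target, test_a, test_b, w_color=0.05))  # prints "a"
```

Fitting `w_color` per observer to choices like these is what produces the large individual differences the abstract reports.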
Affiliation(s)
- Ana Radonjić
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Nicolas P. Cottaris
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- David H. Brainard
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America

39
Singh V, Cottaris NP, Heasly BS, Brainard DH, Burge J. Computational luminance constancy from naturalistic images. J Vis 2018; 18:19. [PMID: 30593061 PMCID: PMC6314111 DOI: 10.1167/18.13.19] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022] Open
Abstract
The human visual system supports stable percepts of object color even though the light that reflects from object surfaces varies significantly with the scene illumination. To understand the computations that support stable color perception, we study how estimating a target object's luminous reflectance factor (LRF; a measure of the light reflected from the object under a standard illuminant) depends on variation in key properties of naturalistic scenes. Specifically, we study how variation in target object reflectance, illumination spectra, and the reflectance of background objects in a scene impact estimation of a target object's LRF. To do this, we applied supervised statistical learning methods to the simulated excitations of human cone photoreceptors, obtained from labeled naturalistic images. The naturalistic images were rendered with computer graphics. The illumination spectra of the light sources and the reflectance spectra of the surfaces in the scene were generated using statistical models of natural spectral variation. Optimally decoding target object LRF from the responses of a small learned set of task-specific linear receptive fields that operate on a contrast representation of the cone excitations yields estimates that are within 13% of the correct LRF. Our work provides a framework for evaluating how different sources of scene variability limit performance on luminance constancy.
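The pipeline this abstract describes (simulated cone excitations, a contrast representation, learned linear receptive fields decoding the target's LRF) can be caricatured in a few lines. The generative model below is a drastic simplification with hypothetical numbers, not the paper's rendered naturalistic scenes:

```python
import numpy as np

rng = np.random.default_rng(0)
n_scenes, n_cones = 500, 30
lrf = rng.uniform(0.2, 0.8, n_scenes)            # target reflectance per scene
illum = rng.uniform(0.5, 1.5, (n_scenes, 1))     # unknown illumination level
pattern = rng.uniform(0.5, 1.0, n_cones)         # fixed cone sampling pattern
# Simulated excitations: target signal plus a background term, both scaled
# by the illumination.
excitations = illum * (lrf[:, None] * pattern + 0.3)
# Contrast representation: dividing by the mean cancels the illumination
# scale factor, leaving reflectance-driven structure.
contrast = excitations / excitations.mean(axis=1, keepdims=True) - 1.0
# Learn linear "receptive fields" (plus bias) that decode LRF from contrast.
design = np.c_[contrast, np.ones(n_scenes)]
weights, *_ = np.linalg.lstsq(design, lrf, rcond=None)
estimate = design @ weights
```

Even this toy decoder estimates LRF well despite the illumination variation, because the contrast step removes the illumination before the linear readout, which is the spirit of the paper's task-specific receptive fields.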
Affiliation(s)
- Vijay Singh
- Computational Neuroscience Initiative, Department of Physics, University of Pennsylvania, Philadelphia, PA, USA
- Nicolas P Cottaris
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- Benjamin S Heasly
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- David H Brainard
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- Johannes Burge
- Neuroscience Graduate Group, Bioengineering Graduate Group, Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA

40
Brainard DH, Cottaris NP, Radonjić A. The perception of colour and material in naturalistic tasks. Interface Focus 2018; 8:20180012. [PMID: 29951192 DOI: 10.1098/rsfs.2018.0012] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/04/2018] [Indexed: 12/12/2022] Open
Abstract
Perceived object colour and material help us to select and interact with objects. Because there is no simple mapping between the pattern of an object's image on the retina and its physical reflectance, our perceptions of colour and material are the result of sophisticated visual computations. A long-standing goal in vision science is to describe how these computations work, particularly as they act to stabilize perceived colour and material against variation in scene factors extrinsic to object surface properties, such as the illumination. If we take seriously the notion that perceived colour and material are useful because they help guide behaviour in natural tasks, then we need experiments that measure and models that describe how they are used in such tasks. To this end, we have developed selection-based methods and accompanying perceptual models for studying perceived object colour and material. This focused review highlights key aspects of our work. It includes a discussion of future directions and challenges, as well as an outline of a computational observer model that incorporates early, known, stages of visual processing and that clarifies how early vision shapes selection performance.
Affiliation(s)
- David H Brainard
- Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Nicolas P Cottaris
- Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Ana Radonjić
- Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA

41
Abstract
Visual motion processing can be conceptually divided into two levels. In the lower level, local motion signals are detected by spatiotemporal-frequency-selective sensors and then integrated into a motion vector flow. Although the model based on V1-MT physiology provides a good computational framework for this level of processing, it needs to be updated to fully explain psychophysical findings about motion perception, including complex motion signal interactions in the spatiotemporal-frequency and space domains. In the higher level, the velocity map is interpreted. Although there are many motion interpretation processes, we highlight the recent progress in research on the perception of material (e.g., specular reflection, liquid viscosity) and on animacy perception. We then consider possible linking mechanisms of the two levels and propose intrinsic flow decomposition as the key problem. To provide insights into computational mechanisms of motion perception, in addition to psychophysics and neurosciences, we review machine vision studies seeking to solve similar problems.
Affiliation(s)
- Shin'ya Nishida
- NTT Communication Science Labs, Nippon Telegraph and Telephone Corporation, Atsugi, Kanagawa 243-0198, Japan
- Takahiro Kawabe
- NTT Communication Science Labs, Nippon Telegraph and Telephone Corporation, Atsugi, Kanagawa 243-0198, Japan
- Masataka Sawayama
- NTT Communication Science Labs, Nippon Telegraph and Telephone Corporation, Atsugi, Kanagawa 243-0198, Japan
- Taiki Fukiage
- NTT Communication Science Labs, Nippon Telegraph and Telephone Corporation, Atsugi, Kanagawa 243-0198, Japan