151
|
McDougall S, Reppa I, Kulik J, Taylor A. What makes icons appealing? The role of processing fluency in predicting icon appeal in different task contexts. APPLIED ERGONOMICS 2016; 55:156-172. [PMID: 26995046 DOI: 10.1016/j.apergo.2016.02.006] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/30/2015] [Revised: 11/29/2015] [Accepted: 02/02/2016] [Indexed: 06/05/2023]
Abstract
Although icons appear on almost all interfaces, there is a paucity of research examining the determinants of icon appeal. The experiments reported here examined the icon characteristics determining appeal and the extent to which processing fluency - the subjective ease with which individuals process information - was used as a heuristic to guide appeal evaluations. Participants searched for, and identified, icons in displays. The initial appeal of icons was held constant while ease of processing was manipulated by systematically varying the complexity and familiarity of the icons presented and the type of task participants were asked to carry out. Processing fluency reliably influenced users' appeal ratings and appeared to be based on users' unconscious awareness of the ease with which they carried out experimental tasks.
Affiliation(s)
- Siné McDougall
- Psychology Department, Faculty of Science & Technology, Bournemouth University, Fern Barrow, Poole BH12 5BB, UK.
| | - Irene Reppa
- Psychology Department, Swansea University, Swansea SA2 8PP, UK.
| | - Jozef Kulik
- Psychology Department, Faculty of Science & Technology, Bournemouth University, Fern Barrow, Poole BH12 5BB, UK
| | - Alisdair Taylor
- Psychology Department, Faculty of Science & Technology, Bournemouth University, Fern Barrow, Poole BH12 5BB, UK
|
152
|
Choo H, Walther DB. Contour junctions underlie neural representations of scene categories in high-level human visual cortex. Neuroimage 2016; 135:32-44. [PMID: 27118087 DOI: 10.1016/j.neuroimage.2016.04.021] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2015] [Revised: 03/16/2016] [Accepted: 04/08/2016] [Indexed: 10/21/2022] Open
Abstract
Humans efficiently grasp complex visual environments, making highly consistent judgments of entry-level category despite their high variability in visual appearance. How does the human brain arrive at the invariant neural representations underlying categorization of real-world environments? We here show that the neural representation of visual environments in scene-selective human visual cortex relies on statistics of contour junctions, which provide cues for the three-dimensional arrangement of surfaces in a scene. We manipulated line drawings of real-world environments such that statistics of contour orientations or junctions were disrupted. Manipulated and intact line drawings were presented to participants in an fMRI experiment. Scene categories were decoded from neural activity patterns in the parahippocampal place area (PPA), the occipital place area (OPA) and other visual brain regions. Disruption of junctions but not orientations led to a drastic decrease in decoding accuracy in the PPA and OPA, indicating the reliance of these areas on intact junction statistics. Accuracy of decoding from early visual cortex, on the other hand, was unaffected by either image manipulation. We further show that the correlation of error patterns between decoding from the scene-selective brain areas and behavioral experiments is contingent on intact contour junctions. Finally, a searchlight analysis exposes the reliance of visually active brain regions on different sets of contour properties. Statistics of contour length and curvature dominate neural representations of scene categories in early visual areas and contour junctions in high-level scene-selective brain regions.
Affiliation(s)
- Heeyoung Choo
- Department of Psychology, University of Toronto, 100 St. George Street, Toronto, ON M5S 3G3, Canada.
| | - Dirk B Walther
- Department of Psychology, University of Toronto, 100 St. George Street, Toronto, ON M5S 3G3, Canada
|
153
|
Candan A, Cutting JE, DeLong JE. RSVP at the movies: dynamic images are remembered better than static images when resources are limited. VISUAL COGNITION 2016. [DOI: 10.1080/13506285.2016.1159636] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
154
|
Ramkumar P, Hansen BC, Pannasch S, Loschky LC. Visual information representation and rapid-scene categorization are simultaneous across cortex: An MEG study. Neuroimage 2016; 134:295-304. [PMID: 27001497 DOI: 10.1016/j.neuroimage.2016.03.027] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2015] [Revised: 03/04/2016] [Accepted: 03/13/2016] [Indexed: 11/17/2022] Open
Abstract
Perceiving the visual world around us requires the brain to represent the features of stimuli and to categorize the stimulus based on these features. Incorrect categorization can result either from errors in visual representation or from errors in processes that lead to categorical choice. To understand the temporal relationship between the neural signatures of such systematic errors, we recorded whole-scalp magnetoencephalography (MEG) data from human subjects performing a rapid-scene categorization task. We built scene category decoders based on (1) spatiotemporally resolved neural activity, (2) spatial envelope (SpEn) image features, and (3) behavioral responses. Using confusion matrices, we tracked how well the pattern of errors from neural decoders could be explained by SpEn decoders and behavioral errors, over time and across cortical areas. Across the visual cortex and the medial temporal lobe, we found that both SpEn and behavioral errors explained unique variance in the errors of neural decoders. Critically, these effects were nearly simultaneous, and most prominent between 100 and 250ms after stimulus onset. Thus, during rapid-scene categorization, neural processes that ultimately result in behavioral categorization are simultaneous and co-localized with neural processes underlying visual information representation.
Affiliation(s)
- Pavan Ramkumar
- Brain Research Unit, O.V. Lounasmaa Laboratory, Aalto University School of Science, Espoo, Finland.
| | - Bruce C Hansen
- Department of Psychology and Neuroscience Program, Colgate University, Hamilton, NY, USA.
| | - Sebastian Pannasch
- Brain Research Unit, O.V. Lounasmaa Laboratory, Aalto University School of Science, Espoo, Finland; Department of Psychology, Technische Universität Dresden, Dresden, Germany.
| | - Lester C Loschky
- Department of Psychological Sciences, Kansas State University, Manhattan, KS, USA.
|
155
|
Abstract
We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed.
|
156
|
Perceived egocentric distance sensitivity and invariance across scene-selective cortex. Cortex 2016; 77:155-163. [PMID: 26963085 DOI: 10.1016/j.cortex.2016.02.006] [Citation(s) in RCA: 41] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2015] [Revised: 11/02/2015] [Accepted: 02/08/2016] [Indexed: 11/23/2022]
Abstract
Behavioral studies in many species and studies in robotics have demonstrated two sources of information critical for visually-guided navigation: sense (left-right) information and egocentric distance (proximal-distal) information. A recent fMRI study found sensitivity to sense information in two scene-selective cortical regions, the retrosplenial complex (RSC) and the occipital place area (OPA), consistent with hypotheses that these regions play a role in human navigation. Surprisingly, however, another scene-selective region, the parahippocampal place area (PPA), was not sensitive to sense information, challenging hypotheses that this region is directly involved in navigation. Here we examined how these regions encode egocentric distance information (e.g., a house seen from close up versus far away), another type of information crucial for navigation. Using fMRI adaptation and a regions-of-interest analysis approach in human adults, we found sensitivity to egocentric distance information in RSC and OPA, while PPA was not sensitive to such information. These findings further support that RSC and OPA are directly involved in navigation, while PPA is not, consistent with the hypothesis that scenes may be processed by distinct systems guiding navigation and recognition.
|
157
|
|
158
|
Nordhjem B, Ćurčić-Blake B, Meppelink AM, Renken RJ, de Jong BM, Leenders KL, van Laar T, Cornelissen FW. Lateral and Medial Ventral Occipitotemporal Regions Interact During the Recognition of Images Revealed from Noise. Front Hum Neurosci 2016; 9:678. [PMID: 26778997 PMCID: PMC4701927 DOI: 10.3389/fnhum.2015.00678] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2015] [Accepted: 12/01/2015] [Indexed: 11/13/2022] Open
Abstract
Several studies suggest different functional roles for the medial and the lateral sections of the ventral visual cortex in object recognition. Texture and surface information is processed in medial sections, while shape information is processed in lateral sections. This begs the question whether and how these functionally specialized sections interact with each other and with early visual cortex to facilitate object recognition. In the current research, we set out to answer this question. In an fMRI study, 13 subjects viewed and recognized images of objects and animals that were gradually revealed from noise while their brains were being scanned. We applied dynamic causal modeling (DCM)-a method to characterize network interactions-to determine the modulatory effect of object recognition on a network comprising the primary visual cortex (V1), the lingual gyrus (LG) in medial ventral cortex and the lateral occipital cortex (LO). We found that object recognition modulated the bilateral connectivity between LG and LO. Moreover, the feed-forward connectivity from V1 to LG and LO was modulated, while there was no evidence for feedback from these regions to V1 during object recognition. In particular, the interaction between medial and lateral areas supports a framework in which visual recognition of objects is achieved by networked regions that integrate information on image statistics, scene content and shape-rather than by a single categorically specialized region-within the ventral visual cortex.
Affiliation(s)
- Barbara Nordhjem
- Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; NeuroImaging Center, Department of Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
| | - Branislava Ćurčić-Blake
- NeuroImaging Center, Department of Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
| | - Anne Marthe Meppelink
- NeuroImaging Center, Department of Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; Department of Neurology, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
| | - Remco J Renken
- NeuroImaging Center, Department of Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
| | - Bauke M de Jong
- NeuroImaging Center, Department of Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; Department of Neurology, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
| | - Klaus L Leenders
- NeuroImaging Center, Department of Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; Department of Neurology, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
| | - Teus van Laar
- NeuroImaging Center, Department of Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; Department of Neurology, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
| | - Frans W Cornelissen
- Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; NeuroImaging Center, Department of Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
|
159
|
McClelland T, Bayne T. Concepts, contents, and consciousness. Neurosci Conscious 2016; 2016:niv012. [PMID: 30135743 PMCID: PMC6089095 DOI: 10.1093/nc/niv012] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2015] [Revised: 12/12/2015] [Accepted: 12/20/2015] [Indexed: 11/29/2022] Open
Abstract
In his paper 'Are we ever aware of concepts? A critical question for the Global Neuronal Workspace, Integrated Information, and Attended Intermediate-Level Representation theories of consciousness' (2015, this journal), Kemmerer defends a conservative account of consciousness, according to which concepts and thoughts do not characterize the contents of consciousness, and then uses that account to argue against both the Global Neuronal Workspace theory of consciousness and Integrated Information Theory of Consciousness, and as a point in favour of Prinz's Attended Intermediate-level Representations theory. We argue that there are a number of respects in which the contrast between conservative and liberal conceptions of the admissible contents of consciousness is more complex than Kemmerer's discussion suggests. We then consider Kemmerer's case for conservatism, arguing that it lumbers liberals with commitments that they need not - and in our view should not - endorse. We also argue that Kemmerer's attempt to use his case for conservatism against the Global Neuronal Workspace and Integrated Information theories of consciousness on the one hand, and as a point in favour of Prinz's Attended Intermediate Representations theory on the other hand, is problematic. Finally, we consider Kemmerer's overall strategy of using an account of the admissible contents of consciousness to evaluate theories of consciousness, and suggest that here too there are complications that Kemmerer's discussion overlooks.
Affiliation(s)
- Tom McClelland
- Department of Philosophy, University of Manchester, Oxford Road, Manchester, M13 9PL, UK
| | - Tim Bayne
- Department of Philosophy, University of Manchester, Oxford Road, Manchester, M13 9PL, UK and
- Rotman Institute of Philosophy, University of Western Ontario, Stevenson Hall 2150, London, Ontario, Canada N6A 5B8
|
160
|
Marin MM, Leder H. Effects of presentation duration on measures of complexity in affective environmental scenes and representational paintings. Acta Psychol (Amst) 2016; 163:38-58. [PMID: 26595281 DOI: 10.1016/j.actpsy.2015.10.002] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2015] [Revised: 07/15/2015] [Accepted: 10/11/2015] [Indexed: 11/30/2022] Open
Abstract
Complexity constitutes an integral part of humans' environment and is inherent to information processing. However, little is known about the dynamics of visual complexity perception of affective environmental scenes (IAPS pictures) and artworks, such as affective representational paintings. In three experiments, we studied the time course of visual complexity perception by varying presentation duration and comparing subjective ratings with objective measures of complexity. In Experiment 1, 60 females rated 96 IAPS pictures, presented either for 1, 5, or 25s, for familiarity, complexity, pleasantness and arousal. In Experiment 2, another 60 females rated 96 representational paintings. Mean ratings of complexity and pleasantness changed according to presentation duration in a similar vein in both experiments, suggesting an inverted U-shape. No common pattern of results was observed for arousal and familiarity ratings across the two picture sets. The correlations between subjective and objective measures of complexity increased with longer exposure durations for IAPS pictures, but results were more ambiguous for paintings. Experiment 3 explored the time course of the multidimensionality of visual complexity perception. Another 109 females rated the number of objects, their disorganization and the differentiation between a figure-ground vs. complex scene composition of pictures presented for 1 and 5s. The multidimensionality of visual complexity only clearly emerged in the 5-s condition. In both picture sets, the strength of the correlations with objective measures depended on the type of subdimension of complexity and was less affected by presentation duration than correlations with general complexity in Experiments 1 and 2. These results have clear implications for perceptual and cognitive theories, especially for those of esthetic experiences, in which the dynamical changes of complexity perception need to be integrated.
Affiliation(s)
- Manuela M Marin
- Department of Basic Psychological Research and Research Methods, University of Vienna, Austria.
| | - Helmut Leder
- Department of Basic Psychological Research and Research Methods, University of Vienna, Austria
|
161
|
Coco MI, Keller F, Malcolm GL. Anticipation in Real-World Scenes: The Role of Visual Context and Visual Memory. Cogn Sci 2015; 40:1995-2024. [DOI: 10.1111/cogs.12313] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2014] [Revised: 05/06/2015] [Accepted: 07/07/2015] [Indexed: 11/26/2022]
|
162
|
Lee SA, Ferrari A, Vallortigara G, Sovrano VA. Boundary primacy in spatial mapping: Evidence from zebrafish (Danio rerio). Behav Processes 2015; 119:116-22. [DOI: 10.1016/j.beproc.2015.07.012] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2015] [Revised: 07/21/2015] [Accepted: 07/22/2015] [Indexed: 12/16/2022]
|
163
|
Abstract
Traditional research on the scene consistency effect only used clearly recognizable object stimuli to show mutually interactive context effects for both the object and background components on scene perception (Davenport & Potter in Psychological Science, 15, 559-564, 2004). However, in real environments, objects are viewed from multiple viewpoints, including an accidental, hard-to-recognize one. When observers named target objects in scenes (Experiments 1a and 1b, object recognition task), we replicated the scene consistency effect (i.e., there was higher accuracy for objects with consistent backgrounds). However, there was a significant interaction effect between consistency and object viewpoint, which indicated that the scene consistency effect was more important for identifying objects in the accidental view condition than in the canonical view condition. Therefore, the object recognition system may rely more on the scene context when the object is difficult to recognize. In Experiment 2, the observers identified the background (background recognition task) while the scene consistency and object views were manipulated. The results showed that object viewpoint had no effect, while the scene consistency effect was observed. More specifically, the canonical and accidental views equally provided contextual information for scene perception. These findings suggested that the mechanism for conscious recognition of objects could be dissociated from the mechanism for visual analysis of object images that were part of a scene. The "context" that the object images provided may have been derived from their view-invariant, relatively low-level visual features (e.g., color), rather than their semantic information.
|
164
|
|
165
|
Aminoff EM, Tarr MJ. Associative Processing Is Inherent in Scene Perception. PLoS One 2015; 10:e0128840. [PMID: 26070142 PMCID: PMC4467091 DOI: 10.1371/journal.pone.0128840] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2014] [Accepted: 04/30/2015] [Indexed: 11/18/2022] Open
Abstract
How are complex visual entities such as scenes represented in the human brain? More concretely, along what visual and semantic dimensions are scenes encoded in memory? One hypothesis is that global spatial properties provide a basis for categorizing the neural response patterns arising from scenes. In contrast, non-spatial properties, such as single objects, also account for variance in neural responses. The list of critical scene dimensions has continued to grow—sometimes in a contradictory manner—coming to encompass properties such as geometric layout, big/small, crowded/sparse, and three-dimensionality. We demonstrate that these dimensions may be better understood within the more general framework of associative properties. That is, across both the perceptual and semantic domains, features of scene representations are related to one another through learned associations. Critically, the components of such associations are consistent with the dimensions that are typically invoked to account for scene understanding and its neural bases. Using fMRI, we show that non-scene stimuli displaying novel associations across identities or locations recruit putatively scene-selective regions of the human brain (the parahippocampal/lingual region, the retrosplenial complex, and the transverse occipital sulcus/occipital place area). Moreover, we find that the voxel-wise neural patterns arising from these associations are significantly correlated with the neural patterns arising from everyday scenes, providing critical evidence as to whether the same encoding principles underlie both types of processing. These neuroimaging results provide evidence for the hypothesis that the neural representation of scenes is better understood within the broader theoretical framework of associative processing. In addition, the results demonstrate a division of labor that arises across scene-selective regions when processing associations and scenes, providing a better understanding of the functional roles of each region within the cortical network that mediates scene processing.
Affiliation(s)
- Elissa M. Aminoff
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, United States of America
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, United States of America
| | - Michael J. Tarr
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, United States of America
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, United States of America
|
166
|
Levy J, Vidal JR, Fries P, Démonet JF, Goldstein A. Selective Neural Synchrony Suppression as a Forward Gatekeeper to Piecemeal Conscious Perception. Cereb Cortex 2015; 26:3010-22. [PMID: 26045565 DOI: 10.1093/cercor/bhv114] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
The emergence of conscious visual perception is assumed to ignite late (∼250 ms) gamma-band oscillations shortly after an initial (∼100 ms) forward sweep of neural sensory (nonconscious) information. However, this neural evidence is not utterly congruent with rich behavioral data which rather point to piecemeal (i.e., graded) perceptual processing. To address the unexplored neural mechanisms of piecemeal ignition of conscious perception, hierarchical script sensitivity of the putative visual word form area (VWFA) was exploited to signal null (i.e., sensory), partial (i.e., letter-level), and full (i.e., word-level) conscious perception. Two magnetoencephalography experiments were conducted in which healthy human participants viewed masked words (Experiment I: active task, Dutch words; Experiment II: passive task, Hebrew words) while high-frequency (broadband gamma) brain activity was measured. Findings revealed that piecemeal conscious perception did not ignite a linear piecemeal increase in oscillations. Instead, whereas late (∼250 ms) gamma-band oscillations signaled full conscious perception (i.e., word-level), partial conscious perception (i.e., letter-level) was signaled via the inhibition of the early (∼100 ms) forward sweep. This inhibition regulates the downstream broadcast to filter out irrelevant (i.e., masks) information. The findings thus highlight a local (VWFA) gatekeeping mechanism for conscious perception, operating by filtering out and in selective percepts.
Affiliation(s)
- Jonathan Levy
- The Gonda Multidisciplinary Brain Research Center; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen 6500 GL, The Netherlands; Inserm UMR825, Imagerie Cerebrale et Handicaps Neurologiques, Toulouse 31024, France; Université de Toulouse, UPS, Toulouse 31062, France; Centre Hospitalier Universitaire de Toulouse, Pôle Neurosciences, CHU Purpan, Toulouse 31024, France
| | - Juan R Vidal
- University Grenoble Alpes, LPNC, F-38040 Grenoble, France; CNRS, LPNC, UMR 5105, F-38040 Grenoble, France
| | - Pascal Fries
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen 6500 GL, The Netherlands; Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max-Planck-Society, Frankfurt 60528, Germany
| | - Jean-François Démonet
- Inserm UMR825, Imagerie Cerebrale et Handicaps Neurologiques, Toulouse 31024, France; Université de Toulouse, UPS, Toulouse 31062, France; Leenaards Memory Center, Department of Clinical Neurosciences, CHUV and University of Lausanne, Lausanne 1011, Switzerland
| | - Abraham Goldstein
- The Gonda Multidisciplinary Brain Research Center; Department of Psychology, Bar-Ilan University, Ramat Gan 52900, Israel
|
167
|
Csathó Á, van der Linden D, Gács B. Natural Scene Recognition with Increasing Time-On-Task: The Role of Typicality and Global Image Properties. Q J Exp Psychol (Hove) 2015; 68:814-28. [DOI: 10.1080/17470218.2014.968592] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
Human observers can recognize natural images very effectively. Yet, in the literature there is a debate about the extent to which the recognition of natural images requires controlled attentional processing. In the present study we address this topic by testing whether natural scene recognition is affected by mental fatigue. Mental fatigue is known to particularly compromise high-level, controlled attentional processing of local features. Effortless, automatic processing of more global features of an image stays relatively intact, however. We conducted a natural image categorization experiment (N = 20) in which mental fatigue was induced by time-on-task (ToT). Stimuli were images from 5 natural scene categories. Semantic typicality (high or low) and the magnitude of 7 global image properties were determined for each image in separate rating experiments. Significant performance effects of typicality and global properties on scene recognition were found, but, despite a general decline in performance, these effects remained unchanged with increasing ToT. The findings support the importance of global property processing in natural scene recognition and suggest that this process is insensitive to mental fatigue.
Affiliation(s)
- Á. Csathó
- Institute of Behavioral Sciences, University of Pécs, Pécs, Hungary
| | - D. van der Linden
- Institute of Psychology, Erasmus University Rotterdam, Rotterdam, The Netherlands
| | - B. Gács
- Institute of Behavioral Sciences, University of Pécs, Pécs, Hungary
|
168
|
Intraub H, Morelli F, Gagnier KM. Visual, haptic and bimodal scene perception: evidence for a unitary representation. Cognition 2015; 138:132-47. [PMID: 25725370 DOI: 10.1016/j.cognition.2015.01.010] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2014] [Revised: 01/21/2015] [Accepted: 01/25/2015] [Indexed: 11/25/2022]
Abstract
Participants studied seven meaningful scene-regions bordered by removable boundaries (30s each). In Experiment 1 (N = 80) participants used visual or haptic exploration and then minutes later, reconstructed boundary position using the same or the alternate modality. Participants in all groups shifted boundary placement outward (boundary extension), but visual study yielded the greater error. Critically, this modality-specific difference in boundary extension transferred without cost in the cross-modal conditions, suggesting a functionally unitary scene representation. In Experiment 2 (N = 20), bimodal study led to boundary extension that did not differ from haptic exploration alone, suggesting that bimodal spatial memory was constrained by the more "conservative" haptic modality. In Experiment 3 (N = 20), as in picture studies, boundary memory was tested 30s after viewing each scene-region and as with pictures, boundary extension still occurred. Results suggest that scene representation is organized around an amodal spatial core that organizes bottom-up information from multiple modalities in combination with top-down expectations about the surrounding world.
|
169
|
Abstract
Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. Although a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes.
Affiliation(s)
- Melissa Le-Hoa Võ
- Scene Grammar Lab, Department of Cognitive Psychology, Goethe University Frankfurt, Frankfurt, Germany
|
170
|
Dickinson CA, LaCombe DC. Objects influence the shape of remembered views: examining global and local aspects of boundary extension. Perception 2015; 43:731-53. [PMID: 25549505 DOI: 10.1068/p7631] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
Boundary extension is a constructive memory error for views of scenes in which views tend to be remembered as more spatially expansive than they appeared. In seven experiments we examined whether local differences in boundary extension within views might exist by presenting participants with overhead views of single, elongated objects or object pairs on textured ground surfaces in which objects were oriented either vertically or horizontally. Memory for views' spatial expanse was tested with either a view-recognition test or a border-adjustment test in which participants could adjust the spatial expanse of a test view using the computer's mouse. The border-adjustment test was used to assess local boundary extension (primarily); the view-recognition test was used to assess participants' memories for the overall spatial expanse of views (ie global boundary extension). Across experiments, in the border-adjustment tests specifically, participants showed more spatial expanse along the object's longer axis, in some cases restricting the view along the object's shorter axis. In addition, the recognition-test data revealed greater boundary extension for views with vertically oriented objects than for views with horizontally oriented objects. Taken together, the results suggest that objects in scene views can affect both local and global aspects of memory for the spatial expanse of scene views.
|
171
|
Aminoff EM, Toneva M, Shrivastava A, Chen X, Misra I, Gupta A, Tarr MJ. Applying artificial vision models to human scene understanding. Front Comput Neurosci 2015; 9:8. [PMID: 25698964 PMCID: PMC4316773 DOI: 10.3389/fncom.2015.00008] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2014] [Accepted: 01/15/2015] [Indexed: 12/01/2022] Open
Abstract
How do we understand the complex patterns of neural responses that underlie scene understanding? Studies of the network of brain regions held to be scene-selective—the parahippocampal/lingual region (PPA), the retrosplenial complex (RSC), and the occipital place area (TOS)—have typically focused on single visual dimensions (e.g., size), rather than the high-dimensional feature space in which scenes are likely to be neurally represented. Here we leverage well-specified artificial vision systems to explicate a more complex understanding of how scenes are encoded in this functional network. We correlated similarity matrices within three different scene-spaces arising from: (1) BOLD activity in scene-selective brain regions; (2) behaviorally-measured judgments of visually-perceived scene similarity; and (3) several different computer vision models. These correlations revealed: (1) models that relied on mid- and high-level scene attributes showed the highest correlations with the patterns of neural activity within the scene-selective network; (2) NEIL and SUN—the models that best accounted for the patterns obtained from PPA and TOS—were different from the GIST model that best accounted for the pattern obtained from RSC; (3) The best performing models outperformed behaviorally-measured judgments of scene similarity in accounting for neural data. One computer vision method—NEIL (“Never-Ending-Image-Learner”), which incorporates visual features learned as statistical regularities across web-scale numbers of scenes—showed significant correlations with neural activity in all three scene-selective regions and was one of the two models best able to account for variance in the PPA and TOS. We suggest that these results are a promising first step in explicating more fine-grained models of neural scene understanding, including developing a clearer picture of the division of labor among the components of the functional scene-selective brain network.
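As a rough illustration of the similarity-matrix comparison described above (a generic sketch in Python, not the authors' pipeline; array shapes and variable names are purely hypothetical, and numpy/scipy are assumed to be available):

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def similarity_structure(features):
        # Condensed vector of pairwise correlation distances between scenes (rows).
        return pdist(features, metric="correlation")

    # Hypothetical inputs: one row per scene.
    neural = np.random.rand(60, 500)    # e.g., voxel patterns from a scene-selective region
    model = np.random.rand(60, 4096)    # e.g., feature vectors from a computer vision model

    # Rank-correlate the two scene-spaces over matched scene pairs.
    rho, p = spearmanr(similarity_structure(neural), similarity_structure(model))
    print(f"model-to-brain similarity: rho = {rho:.2f}, p = {p:.3g}")

Higher rho indicates that scenes the model treats as similar also evoke similar activity patterns, which is the sense in which a model can be said to account for the neural data.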
Affiliation(s)
- Elissa M Aminoff
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA
| | - Mariya Toneva
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Machine Learning, Carnegie Mellon University, Pittsburgh, PA, USA
| | | | - Xinlei Chen
- Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
| | - Ishan Misra
- Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
| | - Abhinav Gupta
- Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
| | - Michael J Tarr
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA
|
172
|
Zujovic J, Pappas TN, Neuhoff DL, van Egmond R, de Ridder H. Effective and efficient subjective testing of texture similarity metrics. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2015; 32:329-342. [PMID: 26366606 DOI: 10.1364/josaa.32.000329] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
The development and testing of objective texture similarity metrics that agree with human judgments of texture similarity require, in general, extensive subjective tests. The effectiveness and efficiency of such tests depend on a careful analysis of the abilities of human perception and the application requirements. The focus of this paper is on defining performance requirements and testing procedures for objective texture similarity metrics. We identify three operating domains for evaluating the performance of a similarity metric: the ability to retrieve "identical" textures; the top of the similarity scale, where a monotonic relationship between metric values and subjective scores is desired; and the ability to distinguish between perceptually similar and dissimilar textures. Each domain has different performance goals and requires different testing procedures. For the third domain, we propose ViSiProG, a new Visual Similarity by Progressive Grouping procedure for conducting subjective experiments that organizes a texture database into clusters of visually similar images. The grouping is based on visual blending and greatly simplifies labeling image pairs as similar or dissimilar. ViSiProG collects subjective data in an efficient and effective manner, so that a relatively large database of textures can be accommodated. Experimental results and comparisons with structural texture similarity metrics demonstrate both the effectiveness of the proposed subjective testing procedure and the performance of the metrics.
|
173
|
Lenoble Q, Bubbico G, Szaffarczyk S, Pasquier F, Boucart M. Scene categorization in Alzheimer's disease: a saccadic choice task. Dement Geriatr Cogn Dis Extra 2015; 5:1-12. [PMID: 25759714 PMCID: PMC4327701 DOI: 10.1159/000366054] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022] Open
Abstract
AIMS: We investigated the performance in scene categorization of patients with Alzheimer's disease (AD) using a saccadic choice task. METHOD: 24 patients with mild AD, 28 age-matched controls and 26 young people participated in the study. The participants were presented with pairs of coloured photographs and were asked to make a saccadic eye movement to the picture corresponding to the target scene (natural vs. urban, indoor vs. outdoor). RESULTS: The patients' performance did not differ from chance for natural scenes. Differences between young and older controls and patients with AD were found in accuracy but not in saccadic latency. CONCLUSIONS: The results are interpreted in terms of cerebral reorganization in the prefrontal and temporo-occipital cortex of patients with AD, but also in terms of impaired processing of the global visual properties of scenes.
Affiliation(s)
- Quentin Lenoble
- Laboratoire de Neurosciences Fonctionnelles et Pathologies, Université Lille Nord de France, CNRS, Lille, France
| | - Giovanna Bubbico
- Laboratoire de Neurosciences Fonctionnelles et Pathologies, Université Lille Nord de France, CNRS, Lille, France
| | - Sébastien Szaffarczyk
- Laboratoire de Neurosciences Fonctionnelles et Pathologies, Université Lille Nord de France, CNRS, Lille, France
| | | | - Muriel Boucart
- Laboratoire de Neurosciences Fonctionnelles et Pathologies, Université Lille Nord de France, CNRS, Lille, France
|
174
|
Failing MF, Theeuwes J. Nonspatial attentional capture by previously rewarded scene semantics. VISUAL COGNITION 2015. [DOI: 10.1080/13506285.2014.990546] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
175
|
Banno H, Saiki J. The Processing Speed of Scene Categorization at Multiple Levels of Description: The Superordinate Advantage Revisited. Perception 2015; 44:269-88. [DOI: 10.1068/p7683] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
Recent studies have sought to determine which levels of categories are processed first in visual scene categorization and have shown that the natural and man-made superordinate-level categories are understood faster than are basic-level categories. The current study examined the robustness of the superordinate-level advantage in a visual scene categorization task. A go/no-go categorization task was evaluated with response time distribution analysis using an ex-Gaussian template. Each visual scene was categorized at either the superordinate or the basic level, and the two basic-level categories forming a superordinate category were judged as either similar or dissimilar to each other. First, outdoor/indoor and natural/man-made groupings were used as superordinate categories to investigate whether the advantage could be generalized beyond the natural/man-made boundary. Second, the set of images forming a superordinate category was manipulated. We predicted that decreasing image set similarity within the superordinate-level category would work against the speed advantage. We found that basic-level categorization was faster than outdoor/indoor categorization when the outdoor category comprised dissimilar basic-level categories. Our results indicate that the superordinate-level advantage in visual scene categorization is labile across different categories and category structures.
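For context (a general description of the analysis named above, not a detail taken from the paper): an ex-Gaussian template models each response-time distribution as the convolution of a Gaussian component (mean $\mu$, standard deviation $\sigma$) with an exponential component (mean $\tau$), giving the density $f(t) = \frac{1}{\tau}\exp\!\left(\frac{\sigma^2}{2\tau^2} - \frac{t-\mu}{\tau}\right)\Phi\!\left(\frac{t-\mu}{\sigma} - \frac{\sigma}{\tau}\right)$, where $\Phi$ is the standard normal cumulative distribution. The mean of the fitted distribution is $\mu + \tau$, so the three parameters separate shifts of the whole distribution ($\mu$) from changes in its slow tail ($\tau$).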
Affiliation(s)
- Hayaki Banno
- Graduate School of Human and Environmental Studies, Kyoto University, Yoshida-nihonmatsu-cho, Sakyo-ku, Kyoto 606-8501, Japan
| | - Jun Saiki
- Graduate School of Human and Environmental Studies, Kyoto University, Yoshida-nihonmatsu-cho, Sakyo-ku, Kyoto 606-8501, Japan
|
176
|
Thompson MB, Tangen JM. The nature of expertise in fingerprint matching: experts can do a lot with a little. PLoS One 2014; 9:e114759. [PMID: 25517509 PMCID: PMC4269392 DOI: 10.1371/journal.pone.0114759] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2014] [Accepted: 10/21/2014] [Indexed: 11/18/2022] Open
Abstract
Expert decision making often seems impressive, even miraculous. People with genuine expertise in a particular domain can perform quickly and accurately, and with little information. In the series of experiments presented here, we manipulate the amount of “information” available to a group of experts whose job it is to identify the source of crime scene fingerprints. In Experiment 1, we reduced the amount of information available to experts by inverting fingerprint pairs and adding visual noise. There was no evidence for an inversion effect—experts were just as accurate for inverted prints as they were for upright prints—but expert performance with artificially noisy prints was impressive. In Experiment 2, we separated matching and nonmatching print pairs in time. Experts were conservative, but they were still able to discriminate pairs of fingerprints that were separated by five seconds, even though the task was quite different from their everyday experience. In Experiment 3, we separated the print pairs further in time to test the long-term memory of experts compared to novices. Long-term recognition memory for experts and novices was the same, with both performing around chance. In Experiment 4, we presented pairs of fingerprints quickly to experts and novices in a matching task. Experts were more accurate than novices, particularly for similar nonmatching pairs, and experts were generally more accurate when they had more time. It is clear that experts can match prints accurately when there is reduced visual information, reduced opportunity for direct comparison, and reduced time to engage in deliberate reasoning. These findings suggest that non-analytic processing accounts for a substantial portion of the variance in expert fingerprint matching accuracy. Our conclusion is at odds with general wisdom in fingerprint identification practice and formal training, and at odds with the claims and explanations that are offered in court during expert testimony.
Affiliation(s)
- Matthew B. Thompson
- School of Psychology, The University of Queensland, St Lucia, Queensland, Australia
| | - Jason M. Tangen
- School of Psychology, The University of Queensland, St Lucia, Queensland, Australia
|
177
|
Abstract
Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats.
|
178
|
Abstract
PURPOSE: To investigate the effect of age-related macular degeneration (AMD) on memory for spatial representations in realistic environments. METHODS: Participants were 19 patients with AMD and 13 age-matched observers. In a short-term spatial memory task, observers were first presented with one view of a scene (the prime view), and their task was to change the viewpoint forward or backward to match the prime view. Memory performance was measured as the number of snapshots between the selected view and the prime view. RESULTS: When selecting a match to the prime view, both people with AMD and those in the control group showed systematic biases toward the middle view of the range of snapshots. People with AMD exhibited a stronger middle bias after presentation of close and far prime views while navigating accurately after a middle prime view. No relation was found between visual acuity, visual field defect, or lesion size and memory performance. CONCLUSIONS: Memory tasks using indoor scenes can be accomplished when central vision is impoverished, as with AMD. A stronger center bias for a scene location suggests that people with AMD rely more on their memory of a canonical view.
|
179
|
Gagne CR, MacEvoy SP. Do simultaneously viewed objects influence scene recognition individually or as groups? Two perceptual studies. PLoS One 2014; 9:e102819. [PMID: 25119715 PMCID: PMC4138008 DOI: 10.1371/journal.pone.0102819] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2013] [Accepted: 06/24/2014] [Indexed: 11/18/2022] Open
Abstract
The ability to quickly categorize visual scenes is critical to daily life, allowing us to identify our whereabouts and to navigate from one place to another. Rapid scene categorization relies heavily on the kinds of objects scenes contain; for instance, studies have shown that recognition is less accurate for scenes to which incongruent objects have been added, an effect usually interpreted as evidence of objects' general capacity to activate semantic networks for scene categories they are statistically associated with. Essentially all real-world scenes contain multiple objects, however, and it is unclear whether scene recognition draws on the scene associations of individual objects or of object groups. To test the hypothesis that scene recognition is steered, at least in part, by associations between object groups and scene categories, we asked observers to categorize briefly-viewed scenes appearing with object pairs that were semantically consistent or inconsistent with the scenes. In line with previous results, scenes were less accurately recognized when viewed with inconsistent versus consistent pairs. To understand whether this reflected individual or group-level object associations, we compared the impact of pairs composed of mutually related versus unrelated objects; i.e., pairs, which, as groups, had clear associations to particular scene categories versus those that did not. Although related and unrelated object pairs equally reduced scene recognition accuracy, unrelated pairs were consistently less capable of drawing erroneous scene judgments towards scene categories associated with their individual objects. This suggests that scene judgments were influenced by the scene associations of object groups, beyond the influence of individual objects. More generally, the fact that unrelated objects were as capable of degrading categorization accuracy as related objects, while less capable of generating specific alternative judgments, indicates that the process by which objects interfere with scene recognition is separate from the one through which they inform it.
Affiliation(s)
- Christopher R. Gagne
- Department of Psychology, Boston College, Chestnut Hill, Massachusetts, United States of America
| | - Sean P. MacEvoy
- Department of Psychology, Boston College, Chestnut Hill, Massachusetts, United States of America
|
180
|
The gist of the abnormal: above-chance medical decision making in the blink of an eye. Psychon Bull Rev 2014; 20:1170-5. [PMID: 23771399 DOI: 10.3758/s13423-013-0459-3] [Citation(s) in RCA: 86] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Very fast extraction of global structural and statistical regularities allows us to access the "gist"--the basic meaning--of real-world images in as little as 20 ms. Gist processing is central to efficient assessment and orienting in complex environments. This ability is probably based on our extensive experience with the regularities of the natural world. If that is so, would experts develop an ability to extract the gist from the artificial stimuli (e.g., medical images) with which they have extensive visual experience? Anecdotally, experts report some ability to categorize images as normal or abnormal before actually finding an abnormality. We tested the reality of this perception in two expert populations: radiologists and cytologists. Observers viewed brief (250- to 2,000-ms) presentations of medical images. The presence of abnormality was randomized across trials. The task was to rate the abnormality of an image on a 0-100 analog scale and then to attempt to localize that abnormality on a subsequent screen showing only the outline of the image. Both groups of experts had above-chance performance for detecting subtle abnormalities at all stimulus durations (cytologists d' ≈ 1.2 and radiologists d' ≈ 1), whereas the nonexpert control groups did not differ from chance (d' ≈ 0.23, d' ≈ 0.25). Furthermore, the experts' ability to localize these abnormalities was at chance levels, suggesting that categorization was based on a global signal, and not on fortuitous attention to a localized target. It is possible that this global signal could be exploited to improve clinical performance.
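For reference (standard signal-detection background, not specific to this study): the sensitivity index reported above is $d' = z(\text{hit rate}) - z(\text{false-alarm rate})$, where $z$ is the inverse of the standard normal cumulative distribution. Under an unbiased criterion, $d' \approx 1$ corresponds to roughly 69% correct in a normal/abnormal judgment, whereas $d' \approx 0.25$ is only slightly above chance.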
|
181
|
|
182
|
Cant JS, Xu Y. The Impact of Density and Ratio on Object-Ensemble Representation in Human Anterior-Medial Ventral Visual Cortex. Cereb Cortex 2014; 25:4226-39. [PMID: 24964917 DOI: 10.1093/cercor/bhu145] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Behavioral research has demonstrated that observers can extract summary statistics from ensembles of multiple objects. We recently showed that a region of anterior-medial ventral visual cortex, overlapping largely with the scene-sensitive parahippocampal place area (PPA), participates in object-ensemble representation. Here we investigated the encoding of ensemble density in this brain region using fMRI-adaptation. In Experiment 1, we varied density by changing the spacing between objects and found no sensitivity in PPA to such density changes. Thus, density may not be encoded in PPA, possibly because object spacing is not perceived as an intrinsic ensemble property. In Experiment 2, we varied relative density by changing the ratio of 2 types of objects comprising an ensemble, and observed significant sensitivity in PPA to such ratio change. Although colorful ensembles were shown in Experiment 2, Experiment 3 demonstrated that sensitivity to object ratio change was not driven mainly by a change in the ratio of colors. Thus, while anterior-medial ventral visual cortex is insensitive to density (object spacing) changes, it does code relative density (object ratio) within an ensemble. Object-ensemble processing in this region may thus depend on high-level visual information, such as object ratio, rather than low-level information, such as spacing/spatial frequency.
Collapse
Affiliation(s)
- Jonathan S Cant
- Department of Psychology, University of Toronto Scarborough, Toronto, ON, Canada
| | - Yaoda Xu
- Vision Sciences Laboratory, Department of Psychology, Harvard University, Cambridge, MA, USA

| |
Collapse
|
183
|
Perdreau F, Cavanagh P. Drawing skill is related to the efficiency of encoding object structure. Iperception 2014; 5:101-19. [PMID: 25469216 PMCID: PMC4249990 DOI: 10.1068/i0635] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2013] [Revised: 04/16/2014] [Indexed: 11/26/2022] Open
Abstract
Accurate drawing calls on many skills beyond simple motor coordination. A good internal representation of the target object's structure is necessary to capture its proportion and shape in the drawing. Here, we assess two aspects of the perception of object structure and relate them to participants' drawing accuracy. We first assessed drawing accuracy by computing the geometrical dissimilarity of each drawing to the target object. We then used two tasks to evaluate the efficiency of encoding object structure. First, to examine the rate of temporal encoding, we varied presentation duration of a possible versus impossible test object in the fovea using two different test sizes (8° and 28°). More skilled participants were faster at encoding an object's structure, but this difference was not affected by image size. A control experiment showed that participants skilled in drawing did not have a general advantage that might have explained their faster processing of object structure. Second, to measure the critical image size for accurate classification in the periphery, we varied image size with possible versus impossible object tests centered at two different eccentricities (3° and 8°). More skilled participants were able to categorise object structure at smaller sizes, and this advantage did not change with eccentricity. A control experiment showed that the result could not be attributed to differences in visual acuity, leaving attentional resolution as a possible explanation. Overall, we conclude that drawing accuracy is related to faster encoding of object structure and better access to crowded details.
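The abstract does not specify the dissimilarity metric, so the following is only one plausible way to score "geometrical dissimilarity" between a drawing and its target: Procrustes analysis removes translation, scale, and rotation, and the residual disparity indexes shape error. The landmark coordinates are hypothetical.

```python
# Sketch of a shape-dissimilarity score between target and drawing landmarks.
import numpy as np
from scipy.spatial import procrustes

target_landmarks = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 1.5]], float)
drawing_landmarks = np.array([[0, 0], [1.1, 0], [1.0, 0.9], [-0.1, 1.0], [0.6, 1.4]], float)

# procrustes() aligns the two configurations and returns a residual disparity
_, _, disparity = procrustes(target_landmarks, drawing_landmarks)
print(f"shape dissimilarity (lower = more accurate drawing): {disparity:.4f}")
```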
Collapse
Affiliation(s)
- Florian Perdreau
- Laboratoire Psychologie de la Perception, Université Paris Descartes, Sorbonne Paris Cité, CNRS UMR 8242, Paris, France
| | - Patrick Cavanagh
- Laboratoire Psychologie de la Perception, Université Paris Descartes, Sorbonne Paris Cité, CNRS UMR 8242, Paris, France
| |
Collapse
|
184
|
Balas B, Woods R. Infant Preference for Natural Texture Statistics is Modulated by Contrast Polarity. INFANCY 2014; 19:262-280. [PMID: 26161044 DOI: 10.1111/infa.12050] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
Adult observers are sensitive to statistical regularities present in natural images. Developmentally, research has shown that children do not show sensitivity to these natural regularities until approximately 8-10 years of age. This finding is surprising given that even infants gradually encode a range of high-level statistical regularities of their visual environment in the first year of life. We suggest that infants may in fact exhibit sensitivity to natural image statistics under circumstances where images of complex, natural textures, such as a photograph of rocks, are used as experimental stimuli and natural appearance is substantially manipulated. We tested this hypothesis by examining how infants' visual preference for real versus computer-generated synthetic textures was modulated by contrast negation, which produces an image similar to a photographic negative. We observed that older infants' (9 months of age) preferential looking behavior in this task was affected by contrast polarity, suggesting that the infant visual system is sensitive to deviations from natural texture appearance, including (1) discrepancies in appearance that differentiate natural and synthetic textures from one another and (2) the disruption of contrast polarity following negation. We discuss our results in the context of adult texture processing and the "perceptual narrowing" of visual recognition during the first year of life.
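Contrast negation as described above simply inverts pixel intensities, producing a photographic-negative version of the texture while leaving its spatial structure intact. A minimal sketch, using a random array as a stand-in for an 8-bit grayscale texture photograph:

```python
import numpy as np

# Stand-in for an 8-bit grayscale texture photograph (e.g., a photo of rocks)
texture = (np.random.rand(256, 256) * 255).astype(np.uint8)

# Contrast negation: reverse contrast polarity without altering spatial layout
negated = 255 - texture

# Negating twice recovers the original image
assert (255 - negated == texture).all()
```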
Collapse
Affiliation(s)
- Benjamin Balas
- Department of Psychology, North Dakota State University, Center for Visual and Cognitive Neuroscience, North Dakota State University
| | - Rebecca Woods
- Center for Visual and Cognitive Neuroscience, North Dakota State University, Department of Human Development and Family Sciences, North Dakota State University
| |
Collapse
|
185
|
Thibaut M, Tran THC, Szaffarczyk S, Boucart M. The contribution of central and peripheral vision in scene categorization: a study on people with central vision loss. Vision Res 2014; 98:46-53. [PMID: 24657253 DOI: 10.1016/j.visres.2014.03.004] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2014] [Revised: 03/05/2014] [Accepted: 03/08/2014] [Indexed: 11/24/2022]
Abstract
Studies in normally sighted people suggest that scene recognition is based on global physical properties and can be accomplished by the low resolution of peripheral vision. We examine the contribution of peripheral and central vision to scene gist recognition in patients with central vision loss and age-matched controls. Twenty-one patients with neovascular age-related macular degeneration (AMD), with a visual acuity lower than 20/50, and 15 age-matched normally sighted controls participated in a natural/urban scene categorization task. The stimuli were colored photographs of natural scenes presented randomly at one of five spatial locations on a computer screen: centre, top left, top right, bottom left and bottom right at 12° eccentricity. Sensitivity (d') and response times were recorded. Normally sighted people exhibited higher sensitivity and shorter response times when the scene was presented centrally than for peripheral pictures. Sensitivity was lower and response times were longer for people with AMD than for controls at all spatial locations. In contrast to controls, patients were not better for central than for peripheral pictures. The results of normally sighted controls indicate that scene categorization can be accomplished by the low resolution of peripheral vision, but central vision remains more efficient than peripheral vision for scene gist recognition. People with central vision loss likely categorized scenes on the basis of low-frequency information both in normal peripheral vision and in low-acuity central vision.
Collapse
Affiliation(s)
- Miguel Thibaut
- Laboratoire Neurosciences et Pathologies Fonctionnelles, Université Lille Nord de France, CNRS, France
| | - Thi Ha Chau Tran
- Laboratoire Neurosciences et Pathologies Fonctionnelles, Université Lille Nord de France, CNRS, France; Service d'Ophtalmologie, Hôpital Saint Vincent de Paul, Lille, France
| | - Sebastien Szaffarczyk
- Laboratoire Neurosciences et Pathologies Fonctionnelles, Université Lille Nord de France, CNRS, France
| | - Muriel Boucart
- Laboratoire Neurosciences et Pathologies Fonctionnelles, Université Lille Nord de France, CNRS, France.
| |
Collapse
|
186
|
Peelen MV, Kastner S. Attention in the real world: toward understanding its neural basis. Trends Cogn Sci 2014; 18:242-50. [PMID: 24630872 DOI: 10.1016/j.tics.2014.02.004] [Citation(s) in RCA: 123] [Impact Index Per Article: 11.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2013] [Revised: 02/06/2014] [Accepted: 02/10/2014] [Indexed: 11/28/2022]
Abstract
The efficient selection of behaviorally relevant objects from cluttered environments supports our everyday goals. Attentional selection has typically been studied in search tasks involving artificial and simplified displays. Although these studies have revealed important basic principles of attention, they do not explain how the brain efficiently selects familiar objects in complex and meaningful real-world scenes. Findings from recent neuroimaging studies indicate that real-world search is mediated by 'what' and 'where' attentional templates that are implemented in high-level visual cortex. These templates represent target-diagnostic properties and likely target locations, respectively, and are shaped by object familiarity, scene context, and memory. We propose a framework for real-world search that incorporates these recent findings and specifies directions for future study.
Collapse
Affiliation(s)
- Marius V Peelen
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068 Rovereto (TN), Italy.
| | - Sabine Kastner
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Department of Psychology, Princeton University, Princeton, NJ 08544, USA
| |
Collapse
|
187
|
Linsley D, MacEvoy SP. Encoding-Stage Crosstalk Between Object- and Spatial Property-Based Scene Processing Pathways. Cereb Cortex 2014; 25:2267-81. [PMID: 24610116 DOI: 10.1093/cercor/bhu034] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Scene categorization draws on 2 information sources: the identities of the objects scenes contain and scenes' intrinsic spatial properties. Because these resources are formally independent, it is possible for them to lead to conflicting judgments of scene category. We tested the hypothesis that the potential for such conflicts is mitigated by a system of "crosstalk" between object- and spatial layout-processing pathways, under which the encoded spatial properties of scenes are biased by scenes' object contents. Specifically, we show that the presence of objects strongly associated with a given scene category can bias the encoded spatial properties of scenes containing them toward the average of that category, an effect which is evident both in behavioral measures of scenes' perceived spatial properties and in scene-evoked multivoxel patterns recorded with functional magnetic resonance imaging from the parahippocampal place area (PPA), a region associated with the processing of scenes' spatial properties. These results indicate that harmonization of object- and spatial property-based estimates of scene identity begins when spatial properties are encoded, and that the PPA plays a central role in this process.
Collapse
Affiliation(s)
- Drew Linsley
- Department of Psychology, Boston College, Chestnut Hill, MA 02467, USA
| | - Sean P MacEvoy
- Department of Psychology, Boston College, Chestnut Hill, MA 02467, USA
| |
Collapse
|
188
|
Malcolm GL, Nuthmann A, Schyns PG. Beyond gist: strategic and incremental information accumulation for scene categorization. Psychol Sci 2014; 25:1087-97. [PMID: 24604146 PMCID: PMC4232276 DOI: 10.1177/0956797614522816] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Research on scene categorization generally concentrates on gist processing, particularly the speed and minimal features with which the "story" of a scene can be extracted. However, this focus has led to a paucity of research into how scenes are categorized at specific hierarchical levels (e.g., a scene could be a road or more specifically a highway); consequently, research has disregarded a potential diagnostically driven feedback process. We presented participants with scenes that were low-pass filtered so only their gist was revealed, while a gaze-contingent window provided the fovea with full-resolution details. By recording where in a scene participants fixated prior to making a basic- or subordinate-level judgment, we identified the scene information accrued when participants made either categorization. We observed a feedback process, dependent on categorization level, that systematically accrues sufficient and detailed diagnostic information from the same scene. Our results demonstrate that during scene processing, a diagnostically driven bidirectional interplay between top-down and bottom-up information facilitates relevant category processing.
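A hedged sketch of the stimulus manipulation described above: the scene is globally low-pass filtered so only its gist survives, while a circular gaze-contingent window around the current fixation restores full resolution. The blur level, window radius, and random placeholder scene are my own illustrative choices, not the paper's parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_contingent_view(scene, fix_xy, radius_px=60, blur_sigma=8):
    """scene: 2-D grayscale array; fix_xy: (x, y) fixation position in pixels."""
    blurred = gaussian_filter(scene.astype(float), sigma=blur_sigma)  # gist-only version
    yy, xx = np.mgrid[0:scene.shape[0], 0:scene.shape[1]]
    window = (xx - fix_xy[0]) ** 2 + (yy - fix_xy[1]) ** 2 <= radius_px ** 2
    # Full resolution inside the window, low-pass everywhere else
    return np.where(window, scene.astype(float), blurred)

# Hypothetical usage with a random "scene" and a central fixation
scene = np.random.rand(480, 640)
view = gaze_contingent_view(scene, fix_xy=(320, 240))
```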
Collapse
|
189
|
Gajewski DA, Philbeck JW, Wirtz PW, Chichka D. Angular declination and the dynamic perception of egocentric distance. J Exp Psychol Hum Percept Perform 2014; 40:361-77. [PMID: 24099588 PMCID: PMC4140626 DOI: 10.1037/a0034394] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The extraction of the distance between an object and an observer is fast when angular declination is informative, as it is with targets placed on the ground. To what extent does angular declination drive performance when viewing time is limited? Participants judged target distances in a real-world environment with viewing durations ranging from 36 to 220 ms. An important role for angular declination was supported by experiments showing that the cue provides information about egocentric distance even on the very first glimpse, and that it supports a sensitive response to distance in the absence of other useful cues. Performance was better at 220-ms viewing durations than for briefer glimpses, suggesting that the perception of distance is dynamic even within the time frame of a typical eye fixation. Critically, performance in limited-viewing trials was better when preceded by a 15-s preview of the room without a designated target. The results indicate that the perception of distance is powerfully shaped by memory from prior visual experience with the scene. A theoretical framework for the dynamic perception of distance is presented.
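The cue itself reduces to simple ground-plane geometry: for a target on the ground, egocentric distance is d = h / tan(theta), where h is the observer's eye height and theta is the target's angular declination below the horizon. A short worked example (eye height and angles are illustrative values):

```python
import math

def distance_from_declination(eye_height_m, declination_deg):
    """Ground-plane distance implied by angular declination below the horizon."""
    return eye_height_m / math.tan(math.radians(declination_deg))

for angle in (5, 10, 20):
    print(f"declination {angle:>2} deg -> distance "
          f"{distance_from_declination(1.6, angle):.2f} m")
```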
Collapse
Affiliation(s)
| | | | - Philip W. Wirtz
- Department of Psychology, The George Washington University
- Department of Decision Sciences, The George Washington University
| | - David Chichka
- Department of Mechanical and Aerospace Engineering, The George Washington University
| |
Collapse
|
190
|
Patterson G, Xu C, Su H, Hays J. The SUN Attribute Database: Beyond Categories for Deeper Scene Understanding. Int J Comput Vis 2014. [DOI: 10.1007/s11263-013-0695-z] [Citation(s) in RCA: 217] [Impact Index Per Article: 19.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
191
|
Park S, Konkle T, Oliva A. Parametric Coding of the Size and Clutter of Natural Scenes in the Human Brain. Cereb Cortex 2014; 25:1792-805. [PMID: 24436318 DOI: 10.1093/cercor/bht418] [Citation(s) in RCA: 65] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Estimating the size of a space and its degree of clutter are effortless and ubiquitous tasks of moving agents in a natural environment. Here, we examine how regions along the occipital-temporal lobe respond to pictures of indoor real-world scenes that parametrically vary in their physical "size" (the spatial extent of a space bounded by walls) and functional "clutter" (the organization and quantity of objects that fill up the space). Using a linear regression model on multivoxel pattern activity across regions of interest, we find evidence that both properties of size and clutter are represented in the patterns of parahippocampal cortex, while the retrosplenial cortex activity patterns are predominantly sensitive to the size of a space, rather than the degree of clutter. Parametric whole-brain analyses confirmed these results. Importantly, this size and clutter information was represented in a way that generalized across different semantic categories. These data provide support for a property-based representation of spaces, distributed across multiple scene-selective regions of the cerebral cortex.
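As an illustration of the analysis idea only (not the authors' pipeline), the sketch below fits a linear model from synthetic multivoxel patterns to a parametric size label and tests generalization across semantic categories with leave-one-category-out cross-validation. All data, dimensions, and effect sizes are made up.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
n_scenes, n_voxels = 96, 200
size_level = rng.integers(1, 7, n_scenes)        # parametric "size" label per scene
category = rng.integers(0, 4, n_scenes)          # semantic category label per scene
# Synthetic voxel patterns carrying a weak linear size signal
patterns = rng.standard_normal((n_scenes, n_voxels)) + 0.3 * size_level[:, None]

scores = []
for held_out in range(4):                        # leave one category out
    train, test = category != held_out, category == held_out
    model = RidgeCV().fit(patterns[train], size_level[train])
    scores.append(np.corrcoef(model.predict(patterns[test]), size_level[test])[0, 1])
print("mean prediction-label correlation across held-out categories:", np.mean(scores))
```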
Collapse
Affiliation(s)
- Soojin Park
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, USA
| | - Talia Konkle
- Department of Psychology, Harvard University, Cambridge, MA 02138, USA
| | - Aude Oliva
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| |
Collapse
|
192
|
Abstract
Does becoming aware of a change to a purely visual stimulus necessarily cause the observer to be able to identify or localise the change, or can change detection occur in the absence of identification or localisation? Several theories of visual awareness stress that we are aware of more than just the few objects to which we attend. In particular, it is clear that to some extent we are also aware of the global properties of the scene, such as the mean luminance or the distribution of spatial frequencies. It follows that we may be able to detect a change to a visual scene by detecting a change to one or more of these global properties. However, detecting a change to a global property may not supply us with enough information to accurately identify or localise which object in the scene has been changed. Thus, it may be possible to reliably detect the occurrence of changes without being able to identify or localise what has changed. Previous attempts to show that this can occur with natural images have produced mixed results. Here we use a novel analysis technique to provide additional evidence that changes can be detected in natural images without also being identified or localised. It is likely that this occurs by the observers monitoring the global properties of the scene.
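A minimal sketch of how global properties such as mean luminance and spatial-frequency content could support change detection without localization: compare a small vector of image-level summary statistics before and after the change. The particular statistics and threshold are my own illustrative choices, not the authors' analysis.

```python
import numpy as np

def global_signature(img):
    """Image-level summary: luminance statistics plus coarse spectral statistics."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    return np.array([img.mean(), img.std(), spectrum.mean(), spectrum.std()])

def change_detected(img_a, img_b, threshold=0.05):
    a, b = global_signature(img_a), global_signature(img_b)
    # Flag a change if any summary statistic shifts by more than 5% (relative)
    return bool(np.any(np.abs(a - b) / (np.abs(a) + 1e-9) > threshold))

# Hypothetical pair: the same random scene with one small patch altered
scene = np.random.rand(256, 256)
altered = scene.copy()
altered[100:120, 100:120] += 0.5
print(change_detected(scene, altered))
```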
Collapse
Affiliation(s)
- Piers D. L. Howe
- School of Psychological Sciences, University of Melbourne, Parkville, Victoria, Australia

| | - Margaret E. Webb
- School of Psychological Sciences, University of Melbourne, Parkville, Victoria, Australia
| |
Collapse
|
193
|
Fleming RW. Visual perception of materials and their properties. Vision Res 2014; 94:62-75. [DOI: 10.1016/j.visres.2013.11.004] [Citation(s) in RCA: 113] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2013] [Revised: 11/12/2013] [Accepted: 11/16/2013] [Indexed: 10/26/2022]
|
194
|
Affiliation(s)
- Gregory J Zelinsky
- Departments of Psychology and Computer Science, Stony Brook University Stony Brook, NY, USA ; Center for Interdisciplinary Research (ZiF), University of Bielefeld Bielefeld, Germany
| |
Collapse
|
195
|
Groen II, Ghebreab S, Prins H, Lamme VA, Scholte HS. From image statistics to scene gist: evoked neural activity reveals transition from low-level natural image structure to scene category. J Neurosci 2013; 33:18814-24. [PMID: 24285888 PMCID: PMC6618700 DOI: 10.1523/jneurosci.3128-13.2013] [Citation(s) in RCA: 66] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2013] [Revised: 10/07/2013] [Accepted: 10/24/2013] [Indexed: 11/21/2022] Open
Abstract
The visual system processes natural scenes in a split second. Part of this process is the extraction of "gist," a global first impression. It is unclear, however, how the human visual system computes this information. Here, we show that, when human observers categorize global information in real-world scenes, the brain exhibits strong sensitivity to low-level summary statistics. Subjects rated a specific instance of a global scene property, naturalness, for a large set of natural scenes while EEG was recorded. For each individual scene, we derived two physiologically plausible summary statistics by spatially pooling local contrast filter outputs: contrast energy (CE), indexing contrast strength, and spatial coherence (SC), indexing scene fragmentation. We show that behavioral performance is directly related to these statistics, with naturalness rating being influenced in particular by SC. At the neural level, both statistics parametrically modulated single-trial event-related potential amplitudes during an early, transient window (100-150 ms), but SC continued to influence activity levels later in time (up to 250 ms). In addition, the magnitude of neural activity that discriminated between man-made versus natural ratings of individual trials was related to SC, but not CE. These results suggest that global scene information may be computed by spatial pooling of responses from early visual areas (e.g., LGN or V1). The increased sensitivity over time to SC in particular, which reflects scene fragmentation, suggests that this statistic is actively exploited to estimate scene naturalness.
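The paper derives CE and SC from parameters of a Weibull fit to the distribution of local contrast; the sketch below uses much simpler proxies (the mean and relative dispersion of pooled local contrast magnitudes) purely to illustrate the idea of spatially pooling local contrast filter outputs into two scene-level statistics.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def contrast_statistics(img, sigma=2.0):
    """Rough CE/SC-like proxies from pooled local contrast (not the published model)."""
    local_contrast = np.abs(gaussian_laplace(img.astype(float), sigma=sigma))
    ce_proxy = local_contrast.mean()                      # overall contrast strength
    sc_proxy = local_contrast.std() / (ce_proxy + 1e-9)   # dispersion ~ fragmentation
    return ce_proxy, sc_proxy

img = np.random.rand(256, 256)   # placeholder for a natural-scene image
print(contrast_statistics(img))
```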
Collapse
Affiliation(s)
- Iris I.A. Groen
- Cognitive Neuroscience Group, Department of Psychology
- Amsterdam Center for Brain and Cognition, Institute for Interdisciplinary Studies, and
| | - Sennay Ghebreab
- Amsterdam Center for Brain and Cognition, Institute for Interdisciplinary Studies, and
- Intelligent Systems Laboratory Amsterdam, Institute of Informatics, University of Amsterdam, 1018 WS, Amsterdam, The Netherlands
| | - Hielke Prins
- Amsterdam Center for Brain and Cognition, Institute for Interdisciplinary Studies, and
| | | | - H. Steven Scholte
- Cognitive Neuroscience Group, Department of Psychology
- Amsterdam Center for Brain and Cognition, Institute for Interdisciplinary Studies, and
| |
Collapse
|
196
|
Abstract
Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed "things" in the scene; the bag of words level where scenes are described by the list of objects contained within them; and the structural level where the spatial distribution and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag of words classifier had similar performance to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information. Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by statistics rather than intuition.
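To illustrate the "bag of words" level of description, the sketch below trains a linear classifier on object-count vectors; the vocabulary size, counts, and category labels are synthetic stand-ins for the hand-labeled LabelMe data, not the reported dataset or results.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_scenes, n_object_types, n_categories = 300, 64, 6
labels = rng.integers(0, n_categories, n_scenes)

# Each category gets its own expected object-count profile; scenes are Poisson draws
profiles = rng.exponential(1.0, (n_categories, n_object_types))
object_counts = rng.poisson(profiles[labels])            # bag-of-objects feature vectors

clf = LinearSVC()
accuracy = cross_val_score(clf, object_counts, labels, cv=5).mean()
print("cross-validated categorization accuracy:", accuracy)
```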
Collapse
|
197
|
Shakespeare TJ, Yong KXX, Frost C, Kim LG, Warrington EK, Crutch SJ. Scene perception in posterior cortical atrophy: categorization, description and fixation patterns. Front Hum Neurosci 2013; 7:621. [PMID: 24106469 PMCID: PMC3788344 DOI: 10.3389/fnhum.2013.00621] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2013] [Accepted: 09/09/2013] [Indexed: 12/16/2022] Open
Abstract
Partial or complete Balint's syndrome is a core feature of the clinico-radiological syndrome of posterior cortical atrophy (PCA), in which individuals experience a progressive deterioration of cortical vision. Although multi-object arrays are frequently used to detect simultanagnosia in the clinical assessment and diagnosis of PCA, to date there have been no group studies of scene perception in patients with the syndrome. The current study involved three linked experiments conducted in PCA patients and healthy controls. Experiment 1 evaluated the accuracy and latency of complex scene perception relative to individual faces and objects (color and grayscale) using a categorization paradigm. PCA patients were both less accurate (faces < scenes < objects) and slower (scenes < objects < faces) than controls on all categories, with performance strongly associated with their level of basic visual processing impairment; patients also showed a small advantage for color over grayscale stimuli. Experiment 2 involved free description of real world scenes. PCA patients generated fewer features and more misperceptions than controls, though perceptual errors were always consistent with the patient's global understanding of the scene (whether correct or not). Experiment 3 used eye tracking measures to compare patient and control eye movements over initial and subsequent fixations of scenes. Patients' fixation patterns were significantly different to those of young and age-matched controls, with comparable group differences for both initial and subsequent fixations. Overall, these findings describe the variability in everyday scene perception exhibited by individuals with PCA, and indicate the importance of exposure duration in the perception of complex scenes.
Collapse
Affiliation(s)
- Timothy J Shakespeare
- Dementia Research Centre, Institute of Neurology, University College London London, UK
| | | | | | | | | | | |
Collapse
|
198
|
Ali MM, Fayek MB, Hemayed EE. Human-inspired features for natural scene classification. Pattern Recognit Lett 2013. [DOI: 10.1016/j.patrec.2013.06.012] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
199
|
Gygi B, Shafiro V. Auditory and cognitive effects of aging on perception of environmental sounds in natural auditory scenes. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2013; 56:1373-1388. [PMID: 23926291 PMCID: PMC3839956 DOI: 10.1044/1092-4388(2013/12-0283)] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
PURPOSE Previously, Gygi and Shafiro (2011) found that when environmental sounds are semantically incongruent with the background scene (e.g., horse galloping in a restaurant), they can be identified more accurately by young normal-hearing listeners (YNH) than sounds congruent with the scene (e.g., horse galloping at a racetrack). This study investigated how age and high-frequency audibility affect this Incongruency Advantage (IA) effect. METHOD In Experiments 1a and 1b, elderly listeners (N = 18 for 1a; N = 10 for 1b) with age-appropriate hearing (EAH) were tested on target sounds and auditory scenes at 5 sound-to-scene ratios (So/Sc) between -3 and -18 dB. Experiment 2 tested 11 YNH listeners on the same sound-scene pairings low-pass filtered at 4 kHz (YNH-4k). RESULTS The EAH and YNH-4k groups exhibited an almost identical pattern of significant IA effects, but both were at approximately 3.9 dB higher So/Sc than the previously tested YNH listeners. However, the psychometric functions revealed a shallower slope for EAH listeners compared with YNH listeners for the congruent stimuli only, suggesting a greater difficulty for the EAH listeners in attending to sounds expected to occur in a scene. CONCLUSIONS These findings indicate that semantic relationships between environmental sounds in soundscapes are mediated by both audibility and cognitive factors and suggest a method for dissociating these factors.
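The sound-to-scene ratio (So/Sc) manipulation amounts to scaling the target sound so that its RMS level sits a fixed number of dB relative to the background auditory scene before mixing. A minimal sketch with synthetic signals; sampling rate and waveforms are placeholders.

```python
import numpy as np

def mix_at_ratio(target, scene, so_sc_db):
    """Mix target into scene at a given target-to-scene ratio in dB (RMS-based)."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    gain = (10 ** (so_sc_db / 20.0)) * rms(scene) / (rms(target) + 1e-12)
    return scene + gain * target

fs = 22050
t = np.arange(fs) / fs
scene_noise = 0.1 * np.random.randn(fs)            # stand-in background scene
target_sound = 0.1 * np.sin(2 * np.pi * 440 * t)   # stand-in environmental sound
mixture = mix_at_ratio(target_sound, scene_noise, so_sc_db=-12)   # -12 dB So/Sc
```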
Collapse
|
200
|
Hall FM. Gestalt of Medical Images. Radiographics 2013; 33:1519. [DOI: 10.1148/rg.335135016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|