1
Roth N, Rolfs M, Hellwich O, Obermayer K. Objects guide human gaze behavior in dynamic real-world scenes. PLoS Comput Biol 2023; 19:e1011512. PMID: 37883331; PMCID: PMC10602265; DOI: 10.1371/journal.pcbi.1011512.
Abstract
The complexity of natural scenes makes it challenging to experimentally study the mechanisms behind human gaze behavior when viewing dynamic environments. Historically, eye movements were believed to be driven primarily by space-based attention towards locations with salient features. Increasing evidence suggests, however, that visual attention does not select locations with high saliency but operates on attentional units given by the objects in the scene. We present a new computational framework to investigate the importance of objects for attentional guidance. This framework is designed to simulate realistic scanpaths for dynamic real-world scenes, including saccade timing and smooth pursuit behavior. Individual model components are based on psychophysically uncovered mechanisms of visual attention and saccadic decision-making. All mechanisms are implemented in a modular fashion with a small number of well-interpretable parameters. To systematically analyze the importance of objects in guiding gaze behavior, we implemented five different models within this framework: two purely spatial models, one based on low-level and one on high-level saliency; two object-based models, one incorporating low-level saliency for each object and one using no saliency information; and a mixed model with object-based attention and selection but space-based inhibition of return. We optimized each model's parameters to reproduce the saccade amplitude and fixation duration distributions of human scanpaths using evolutionary algorithms. We compared model performance with respect to spatial and temporal fixation behavior, including the proportion of fixations exploring the background, as well as detecting, inspecting, and returning to objects.
A model with object-based attention and inhibition, which uses saliency information to prioritize between objects for saccadic selection, leads to scanpath statistics with the highest similarity to the human data. This demonstrates that scanpath models benefit from object-based attention and selection, suggesting that object-level attentional units play an important role in guiding attentional processing.
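The evolutionary parameter fitting described in this abstract can be illustrated with a toy sketch. Everything here is a hypothetical stand-in, not the authors' published model: a two-parameter fixation-duration generator and a simple two-moment distance take the place of the full scanpath simulation and its summary statistics.

```python
import random
import statistics

# Toy sketch of the fitting idea from the abstract: tune the parameters of a
# scanpath component so its simulated fixation durations match human data.
# The generator, the distance measure, and all parameter ranges are
# hypothetical illustrations, not the published model.

def simulate_durations(mean_ms, sd_ms, n=500):
    """Draw n fixation durations (ms) from a floor-truncated normal.
    A fixed RNG seed makes the fitness function deterministic."""
    rng = random.Random(0)
    return [max(50.0, rng.gauss(mean_ms, sd_ms)) for _ in range(n)]

def distance(sample_a, sample_b):
    """Crude two-moment mismatch between two duration samples."""
    return (abs(statistics.mean(sample_a) - statistics.mean(sample_b))
            + abs(statistics.stdev(sample_a) - statistics.stdev(sample_b)))

def fit_evolutionary(target, generations=40, mu=5, lam=20, seed=1):
    """(mu + lambda) evolution strategy over the (mean, sd) parameter pair."""
    rng = random.Random(seed)
    fitness = lambda p: distance(simulate_durations(*p), target)
    pop = [(rng.uniform(100, 600), rng.uniform(10, 200)) for _ in range(mu + lam)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness)[:mu]   # elitist selection
        children = []
        for _ in range(lam):                      # mutate randomly chosen parents
            m, s = parents[rng.randrange(mu)]
            children.append((max(1.0, m + rng.gauss(0, 20)),
                             max(1.0, s + rng.gauss(0, 10))))
        pop = parents + children
    return min(pop, key=fitness)
```

For a target sample generated with, say, a 260-ms mean and 80-ms SD, the recovered parameters should land close to those values; the real framework would replace the generator with a full scanpath simulation and compare whole saccade amplitude and fixation duration distributions.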
Affiliation(s)
- Nicolas Roth
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin, Germany
- Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Germany
- Martin Rolfs
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin, Germany
- Department of Psychology, Humboldt-Universität zu Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Germany
- Olaf Hellwich
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin, Germany
- Institute of Computer Engineering and Microelectronics, Technische Universität Berlin, Germany
- Klaus Obermayer
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin, Germany
- Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Germany
2
Cavanagh P, Caplovitz GP, Lytchenko TK, Maechler MR, Tse PU, Sheinberg DL. The Architecture of Object-Based Attention. Psychon Bull Rev 2023; 30:1643-1667. PMID: 37081283; DOI: 10.3758/s13423-023-02281-7.
Abstract
The allocation of attention to objects raises several intriguing questions: What are objects, how does attention access them, and what anatomical regions are involved? Here, we review recent progress in the field to determine the mechanisms underlying object-based attention. First, findings from unconscious priming and cueing suggest that the preattentive targets of object-based attention can be fully developed object representations that have reached the level of identity. Next, the control of object-based attention appears to come from ventral visual areas specialized in object analysis that project downward to early visual areas. How feedback from object areas can accurately target the object's specific locations and features is unknown, but recent work in autoencoding has made this plausible. Finally, we suggest that the three classic modes of attention may not be as independent as is commonly considered, and instead could all rely on object-based attention. Specifically, studies show that attention can be allocated to the separated members of a group, without affecting the space between them, matching the defining property of feature-based attention. At the same time, object-based attention directed to a single small item has the properties of space-based attention. We outline the architecture of object-based attention and the novel predictions it brings, and discuss how it works in parallel with other attention pathways.
Affiliation(s)
- Patrick Cavanagh
- Department of Psychology, Glendon College, 2275 Bayview Avenue, North York, ON M4N 3M6, Canada
- CVR, York University, Toronto, ON, Canada
- David L Sheinberg
- Department of Neuroscience, Brown University, Providence, RI, USA
- Carney Institute for Brain Science, Brown University, Providence, RI, USA
3
Huang L, Chen Y, Shen S, Ye H, Ou S, Zhang X. Awareness-independent gradual spread of object-based attention. Curr Psychol 2022. DOI: 10.1007/s12144-022-03875-5.
Abstract
Although attention can be directed at certain objects, how object-based attention spreads within an object and whether this spread interacts with awareness remain unclear. Using a modified spatial cuing paradigm with backward masking, we addressed these issues with either visible or invisible displays presenting real (Experiment 1) and illusory (Experiment 2) U-shaped objects (UOs), whose ends and middles, the possible locations of the cue and target, lie at equal eccentricities from fixation. These equidistant ends and middles of UOs offered a unique opportunity to examine whether attention spreads gradually within a given object, i.e., whether, within a UO, attention spreads from its cued end to its uncued end via the uncued middle. Regardless of the visibility (visible or invisible) of the UOs, both experiments supported this gradual spread by showing a faster response of human participants (male and female) to a target in the uncued middle than to one in the uncued end. Our results thus indicate a gradual spread of object-based attention and further reveal that this spread is independent of both "visual objectness" (whether the object is defined by real or illusory boundaries) and conscious access to objects.
4
Can faces affect object-based attention? Evidence from online experiments. Atten Percept Psychophys 2022; 84:1220-1233. PMID: 35396617; PMCID: PMC8992784; DOI: 10.3758/s13414-022-02473-8.
Abstract
This study tested how human faces affect object-based attention (OBA) through two online experiments in a modified double-rectangle paradigm. The results of Experiment 1 revealed that faces did not elicit the OBA effect as non-face objects did, owing to longer response times (RTs) when attention was focused on faces relative to non-face objects. In addition, by observing faster RTs when attention was engaged horizontally rather than vertically, we found a significant horizontal attention bias, which might override the OBA effect when vertical rectangles are the only items presented; these results were replicated in Experiment 2 (using only vertical rectangles) after directly measuring the horizontal bias and excluding its influence on the OBA effect. This study suggests that faces cannot elicit the same-object advantage in the double-rectangle paradigm and provides a method to measure the OBA effect free from horizontal bias.
5
Attention can operate on object representations in visual sensory memory. Atten Percept Psychophys 2021; 83:3069-3085. PMID: 34036534; DOI: 10.3758/s13414-021-02323-z.
Abstract
Numerous studies have shown that attention can be allocated to various types of objects, such as low-level objects formed by perceptual organization and high-level objects formed by semantic associations. However, little is known about whether attention can also be affected solely by object representations in the brain after the physical objects have disappeared. Here, we used a modified double-rectangle paradigm to investigate how attention is affected by object representations in visual sensory memory when the physical objects disappear for a short period before target onset. By manipulating the interstimulus interval (ISI) between the offset of the objects and the onset of the target, an object-based attention effect, with shorter reaction times (RTs) for within-object relative to between-object conditions, was observed in the short-ISI conditions (within 500 ms; Experiments 1a, 1b, 2, and 3) but disappeared in the long-ISI condition (800 ms; Experiment 4). This result demonstrates that the mere presence of an object representation in visual sensory memory can serve as an object unit on which attention operates, providing evidence for the relationship between object-based attention and visual sensory memory: object representations in visual sensory memory affect attentional allocation.
6
Carter O, van Swinderen B, Leopold DA, Collin S, Maier A. Perceptual rivalry across animal species. J Comp Neurol 2020; 528:3123-3133. PMID: 32361986; PMCID: PMC7541519; DOI: 10.1002/cne.24939.
Abstract
This review in memoriam of Jack Pettigrew provides an overview of past and current research into the phenomenon of multistable perception across multiple animal species. Multistable perception is characterized by two or more perceptual interpretations spontaneously alternating, or rivaling, when animals are exposed to stimuli with inherent sensory ambiguity. There is a wide array of ambiguous stimuli across sensory modalities, ranging from the configural changes observed in simple line drawings, such as the famous Necker cube, to the alternating perception of entire visual scenes that can be instigated by interocular conflict. The latter phenomenon, called binocular rivalry, in particular caught the attention of the late Jack Pettigrew, who combined his interest in the neuronal basis of perception with a unique comparative biological approach that considered ambiguous sensation as a fundamental problem of sensory systems that has shaped the brain throughout evolution. Here, we examine the research findings on visual perceptual alternation and suppression in a wide variety of species including insects, fish, reptiles, and primates. We highlight several interesting commonalities across species and behavioral indicators of perceptual alternation. In addition, we show how the comparative approach provides new avenues for understanding how the brain suppresses opposing sensory signals and generates alternations in perceptual dominance.
Affiliation(s)
- Olivia Carter
- Melbourne School of Psychological Sciences, University of Melbourne, Parkville, VIC, Australia
- Shaun Collin
- School of Life Sciences, La Trobe University, Melbourne, VIC, Australia
- Alex Maier
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
7
Han S, Alais D, Palmer C. Dynamic face mask enhances continuous flash suppression. Cognition 2020; 206:104473. PMID: 33080453; DOI: 10.1016/j.cognition.2020.104473.
Abstract
In continuous flash suppression (CFS), an image presented to one eye is suppressed from awareness by a dynamic image masker presented to the other eye. Previous studies report that face stimuli break out of CFS more readily when they are oriented upright and contain ecologically relevant information such as facial expressions or direct eye gaze, potentially implicating face processing in the mechanisms of interocular competition. It is unknown, however, whether face content helps to drive interocular suppression when incorporated into the dynamic masker itself, either by engaging higher-level visual mechanisms that underlie face detection or due to lower-level image features that the faces happen to contain. To investigate this, we devised a dynamic mask composed of upright faces and tested how well it suppressed detection of face or grating targets presented to the other eye. Relative contributions of higher-level and lower-level features were compared by manipulating the image properties of the mask. Results show that the dynamic face mask is strikingly effective at suppressing sensory input presented to the opposing eye, but its effectiveness is largely attributable to image texture, which can be quantified in terms of image entropy and edge density. This is because strong suppression was still observed following phase-scrambling or spatial inversion of the face elements, and while a target-selective effect was observed for the face mask, inverting the face elements to interfere with configural processing did not significantly diminish this effect. Thus, visual properties of faces, such as their image entropy and complex phase structure, predominate in driving interocular suppression rather than face detection per se.
Affiliation(s)
- Shui'er Han
- School of Psychology, University of Sydney, Sydney, Australia
- David Alais
- School of Psychology, University of Sydney, Sydney, Australia
- Colin Palmer
- School of Psychology, UNSW Sydney, New South Wales 2052, Australia
8
Man K, Melo G, Damasio A, Kaplan J. Seeing objects improves our hearing of the sounds they make. Neurosci Conscious 2020; 2020:niaa014. PMID: 32793393; PMCID: PMC7415264; DOI: 10.1093/nc/niaa014.
Abstract
It has been established that lip reading improves the perception of auditory speech. But does seeing objects themselves help us hear better the sounds they make? Here we report a series of psychophysical experiments in humans showing that the visual enhancement of auditory sensitivity is not confined to speech. We further show that the crossmodal enhancement was associated with the conscious visualization of the stimulus: we can better hear the sounds an object makes when we are conscious of seeing that object. Our work extends an intriguing crossmodal effect, previously circumscribed to speech, to a wider domain of real-world objects, and suggests that consciousness contributes to this effect.
Affiliation(s)
- Kingson Man
- Brain and Creativity Institute, University of Southern California, 3620A McClintock Avenue, Los Angeles, CA 90089, USA
- Gabriela Melo
- Institute of Basic Health Sciences, Federal University of Rio Grande do Sul, 500 Sarmento Leite Street, Porto Alegre, RS 90050-170, Brazil
- Antonio Damasio
- Brain and Creativity Institute, University of Southern California, 3620A McClintock Avenue, Los Angeles, CA 90089, USA
- Jonas Kaplan
- Brain and Creativity Institute, University of Southern California, 3620A McClintock Avenue, Los Angeles, CA 90089, USA
9
The Relationship between Biological Motion-Based Visual Consciousness and Attention: An Electroencephalograph Study. Neuroscience 2019; 415:230-240. PMID: 31301367; DOI: 10.1016/j.neuroscience.2019.06.040.
Abstract
Understanding and predicting the intentions of others through limb movements is vital to social interaction. The processing of biological motion is distinct from the processing of motion of inanimate objects. At present, there is controversy over whether visual consciousness of biological motion is regulated by visual attention, and the neural mechanisms involved in biological motion-related visual awareness are not known. In the current study, we explored the relationship between visual awareness of a point-light walker (aware vs. unaware) and biological-motion-based attention, manipulated via the congruence (congruent vs. incongruent) between the direction of a pre-cue and that of the biological motion. The neural mechanisms involved in processing the stimuli were explored through electroencephalography. Both early (50-150 ms, 100-200 ms, and 174-226 ms after target presentation) and late (350-550 ms after target presentation) awareness-related neural processing were observed during a biological motion-based congruency task. Early processing was localized to occipital-parietal regions, such as the left postcentral gyrus, the left middle occipital gyrus, and the right precentral gyrus. In the 174-226-ms window, activity in the occipital region was gradually replaced by activity in the parietal and frontal regions. Late processing was localized to frontal-parietal regions, such as the right dorsal superior frontal gyrus, the left medial superior frontal gyrus, and the occipito-temporal regions. Congruency-related processing occurred in the 246-260-ms window and was localized to the right superior occipital gyrus. In summary, due to its complexity, biological motion awareness has a unique neural basis.
10
Noah S, Mangun GR. Recent evidence that attention is necessary, but not sufficient, for conscious perception. Ann N Y Acad Sci 2019; 1464:52-63. PMID: 30883785; DOI: 10.1111/nyas.14030.
Abstract
Early descriptions of attention in the psychological literature highlighted its interdependence with conscious awareness. As the study of attention developed, consciousness and attention began to be considered separable phenomena, experimentally and theoretically. In recent years, an energetic debate has developed concerning the extent to which the two phenomena are related. One school of thought considers the two to be doubly dissociable, whereas the other considers them to be necessarily linked. In this review, we highlight experimental findings from the last 5 years that contribute to the leading consensus view: attention is necessary, but not sufficient, for conscious perception. We review studies that show attention operating in conjunction with unconscious information, and other evidence linking attention necessarily to conscious perception. By drawing upon evidence that attention comprises many cognitive and neural processes, we argue that by studying how different forms of attention are related to conscious perception, it is possible to gain new insights about the neural states or processes that are necessary for conscious perception to occur.
Affiliation(s)
- Sean Noah
- Department of Psychology and Center for Mind and Brain, University of California, Davis, California
- George R Mangun
- Department of Psychology and Center for Mind and Brain, University of California, Davis, California
11
Chou WL, Yeh SL. Dissociating location-based and object-based cue validity effects in object-based attention. Vision Res 2018; 143:34-41. DOI: 10.1016/j.visres.2017.11.008.
12
Zhang X, Mlynaryk N, Japee S, Ungerleider LG. Attentional selection of multiple objects in the human visual system. Neuroimage 2017; 163:231-243. PMID: 28951352; PMCID: PMC5774655; DOI: 10.1016/j.neuroimage.2017.09.050.
Abstract
Classic theories of object-based attention assume a single object of selection but real-world tasks, such as driving a car, often require attending to multiple objects simultaneously. However, whether object-based attention can operate on more than one object at a time remains unexplored. Here, we used functional magnetic resonance imaging (fMRI) to address this question as human participants performed object-based attention tasks that required simultaneous attention to two objects differing in either their features or locations. Simultaneous attention to two objects differing in features (face and house) did not show significantly different responses in the fusiform face area (FFA) or parahippocampal place area (PPA), respectively, compared to attending a single object (face or house), but did enhance the response in the inferior frontal gyrus (IFG). Simultaneous attention to two circular arcs differing in locations did not show significantly different responses in the primary visual cortex (V1) compared to attending a single circular arc, but did enhance the response in the intraparietal sulcus (IPS). These results suggest that object-based attention can simultaneously select at least two objects differing in their features or locations, processes mediated by the frontal and parietal cortex, respectively.
Affiliation(s)
- Xilin Zhang
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Nicole Mlynaryk
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Shruti Japee
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Leslie G Ungerleider
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
13
The availability of attentional resources modulates the inhibitory strength related to weakly activated priming. Atten Percept Psychophys 2016; 78:1655-1664. PMID: 27198916; DOI: 10.3758/s13414-016-1131-6.
Abstract
The current study investigated the role of attention in inhibitory processes (here referring only to those associated with masked or flanked priming) using a mixed paradigm involving the negative compatibility effect (NCE) and object-based attention. Accumulating evidence suggests that attention spreads more easily within the same object than across different objects, which increases the availability of attentional resources within the attended object. Accordingly, we manipulated distractor location (primes presented in the same object versus in different objects) together with prime/target compatibility (compatible versus incompatible) and prime-distractor stimulus onset asynchrony (SOA; 23 ms vs. 70 ms). The aim was to investigate whether inhibitory processes related to weakly activated priming, previously assumed to be automatic, depend on the availability of attentional resources. The results of Experiment 1 showed a significant NCE for the 70-ms SOA when the prime and distractor were presented in the same object (greater attentional resource availability); reversed NCEs were obtained in all other conditions. Experiment 2 was designed to disentangle whether the results of Experiment 1 were affected by prime position, and the results indicated that prime position did not modulate the NCE in Experiment 1. Together, these results are consistent with the claim that the availability of attentional resources modulates the inhibitory strength related to weakly activated priming. Specifically, if attentional resources are assigned to the distractor when it is presented in the same object as the prime, the strength of the inhibition elicited by the distractor may increase and reverse the activation elicited by the prime, leading to a significant NCE.
14
Lin SY, Yeh SL. Interocular grouping without awareness. Vision Res 2016; 121:23-30. PMID: 26851342; DOI: 10.1016/j.visres.2016.01.004.
Abstract
Interocular grouping occurs when different parts of an image presented to each eye are bound into a coherent whole. Previous studies assumed that these parts are visible to both eyes simultaneously (i.e., the images alternated back and forth). Although this view is consistent with the general consensus on binocular rivalry (BR), namely that suppressed stimuli receive no processing beyond a rudimentary level (i.e., adaptation), it is inconsistent with studies that use continuous flash suppression (CFS). CFS is a form of interocular suppression that is more stable and causes stronger suppression of stimuli than BR. In the present study, we examined whether interocular grouping needs to occur at a conscious level, as prior studies suggested. We adopted the modified double-rectangle paradigm of Egly, Driver, and Rafal (1994), in which object-based attention is directed for successful grouping. To induce interocular grouping, we presented complementary parts of two rectangles dichoptically and a dynamic Mondrian in front of one eye (i.e., CFS). Two concurrent targets were presented after one of the visible parts of the rectangles was cued. Participants were asked to judge which target appeared first. We found that the target shown on the cued rectangle after interocular grouping was reported to appear first more frequently than the target on the uncued rectangle. This result held in the majority of trials, in which the suppressed parts of the objects remained invisible, indicating that interocular grouping can occur without all the to-be-grouped parts being visible and without awareness.
Affiliation(s)
- San-Yuan Lin
- Department of Psychology, National Taiwan University, Taipei, Taiwan
- Su-Ling Yeh
- Department of Psychology, National Taiwan University, Taipei, Taiwan; Graduate Institute of Brain and Mind Sciences, National Taiwan University, Taipei, Taiwan; Neurobiology and Cognitive Science Center, National Taiwan University, Taipei, Taiwan
15
Abstract
The attentional prioritization hypothesis of object-based attention (Shomstein & Yantis in Perception & Psychophysics, 64, 41-51, 2002) suggests a two-stage selection process comprising an automatic spatial gradient and flexible strategic (prioritization) selection. The combined attentional priorities of these two stages of object-based selection determine the order in which participants will search the display for the presence of a target. The strategic process has often been likened to a prioritized visual search. By modifying the double-rectangle cueing paradigm (Egly, Driver, & Rafal in Journal of Experimental Psychology: General, 123, 161-177, 1994) and placing it in the context of a larger-scale visual search, we examined how the prioritization search is affected by search efficiency. By probing both targets located on the cued object and targets external to the cued object, we found that the attentional priority surrounding a selected object is strongly modulated by search mode. However, the ordering of the prioritization search is unaffected by search mode. The data also provide evidence that standard spatial visual search and object-based prioritization search may rely on distinct mechanisms. These results provide insight into the interactions between the mode of visual search and object-based selection, and help define the modulatory consequences of search efficiency for object-based attention.
16
Schmid MC, Maier A. To see or not to see: thalamo-cortical networks during blindsight and perceptual suppression. Prog Neurobiol 2015; 126:36-48. PMID: 25661166; DOI: 10.1016/j.pneurobio.2015.01.001.
Abstract
Even during moments when we fail to be fully aware of our environment, our brains never go silent. Instead, it appears that the brain can also operate in an alternate, unconscious mode. Delineating unconscious from conscious neural processes is a promising first step toward investigating how awareness emerges from brain activity. Here we focus on recent insights into the neuronal processes that contribute to visual function in the absence of a conscious visual percept. Drawing on insights from findings on the phenomenon of blindsight that results from injury to primary visual cortex and the results of experimentally induced perceptual suppression, we describe what kind of visual information the visual system analyzes unconsciously and we discuss the neuronal routing and responses that accompany this process. We conclude that unconscious processing of certain visual stimulus attributes, such as the presence of visual motion or the emotional expression of a face can occur in a geniculo-cortical circuit that runs independent from and in parallel to the predominant route through primary visual cortex. We speculate that in contrast, bidirectional neuronal interactions between cortex and the thalamic pulvinar nucleus that support large-scale neuronal integration and visual awareness are impeded during blindsight and perceptual suppression.
Affiliation(s)
- Michael C Schmid
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Deutschordenstraße 46, Frankfurt a. M. 60528, Germany.
- Alexander Maier
- Vanderbilt University, Department of Psychology, 111 21st Avenue South, 301 Wilson Hall, Nashville, TN 37240, USA.
17
Chow HM, Tseng CH. Invisible collinear structures impair search. Conscious Cogn 2014; 31:46-59. [PMID: 25460240 DOI: 10.1016/j.concog.2014.10.004] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2013] [Revised: 09/09/2014] [Accepted: 10/10/2014] [Indexed: 11/30/2022]
Abstract
Visual attention and perceptual grouping both protect us from being overloaded by the vast amount of incoming information, and attentional search is delayed when a target overlaps with a snake-like collinear distractor (Jingling & Tseng, 2013). We assessed whether awareness of the collinear distractor is required for this modulation. We first established that a visible long (=9 elements), but not a short (=3 elements), collinear distractor slowed observers' detection of an overlapping target. We then masked part of a long distractor (=9 elements) with continuously flashing color patches (=6 elements), so that the combined dichoptic percept available to observers' awareness was a short collinear distractor (=3 elements). We found that the invisible collinear parts, like the visible ones, can form a continuous contour that impairs search, suggesting that conscious awareness is not a prerequisite for contour integration or for its interaction with selective attention.
Affiliation(s)
- Hiu Mei Chow
- Department of Psychology, The University of Hong Kong, Hong Kong
- Chia-huei Tseng
- Department of Psychology, The University of Hong Kong, Hong Kong; Department of Psychology, National Taiwan University, Taipei, Taiwan.
18
19
Abstract
Resolution of perceptual ambiguity is one function of cross-modal interactions. Here we investigate whether auditory and tactile stimuli can influence binocular rivalry generated by interocular temporal conflict in human subjects. Using dichoptic visual stimuli modulating at different temporal frequencies, we added modulating sounds or vibrations congruent with one or the other visual temporal frequency. Auditory and tactile stimulation both interacted with binocular rivalry by promoting dominance of the congruent visual stimulus. This effect depended on the cross-modal modulation strength and was absent when modulation depth declined to 33%. However, when auditory and tactile stimuli that were too weak on their own to bias binocular rivalry were combined, their influence over vision was very strong, suggesting the auditory and tactile temporal signals combined to influence vision. Similarly, interleaving discrete pulses of auditory and tactile stimuli also promoted dominance of the visual stimulus congruent with the supramodal frequency. When auditory and tactile stimuli were presented at maximum strength, but in antiphase, they had no influence over vision for low temporal frequencies, a null effect again suggesting audio-tactile combination. We also found that the cross-modal interaction was frequency-sensitive at low temporal frequencies, when information about temporal phase alignment can be perceptually tracked. These results show that auditory and tactile temporal processing is functionally linked, suggesting a common neural substrate for the two sensory modalities and that at low temporal frequencies visual activity can be synchronized by a congruent cross-modal signal in a frequency-selective way, suggesting the existence of a supramodal temporal binding mechanism.
20
Unmasking the dichoptic mask by sound: spatial congruency matters. Exp Brain Res 2014; 232:1109-16. [PMID: 24449005 DOI: 10.1007/s00221-014-3820-5] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2013] [Accepted: 01/02/2014] [Indexed: 10/25/2022]
Abstract
People tend to look toward where a sound occurs; however, the role of spatial congruency between sound and sight in the facilitating effect of sound on visual detection remains controversial. We propose that the role of spatial congruency depends on the reliability of the information provided by the facilitator; if that information is relatively unreliable, adding spatially congruent information can help to unify the different sensory inputs and compensate for the unreliability. To test this, we examined the influence of sound location on visual detection with a non-temporal task, presumably unfavorable for sound because audition excels at temporal resolution, and predicted that spatial congruency should matter in this situation. We used the continuous flash suppression paradigm, which renders the visual stimuli invisible and thereby keeps the relationship between sound and sight opaque to observers. The sound was presented either on the same depth plane as the visual stimulus (the congruent condition) or on a different plane (the incongruent condition). The target was presented to one eye with its luminance contrast gradually increasing, and was continuously masked by flashed Mondrian masks presented to the other eye until the target was released from suppression. We found that sound facilitated visual detection (measured by release-from-suppression time) in the spatially congruent condition but not in the spatially incongruent condition. Together with previous findings in the literature, this suggests that both task type and modality determine the reliability of the information available for multisensory integration, and thus determine whether spatial congruency is critical.
21
Lupyan G, Ward EJ. Language can boost otherwise unseen objects into visual awareness. Proc Natl Acad Sci U S A 2013; 110:14196-201. [PMID: 23940323 PMCID: PMC3761589 DOI: 10.1073/pnas.1303312110] [Citation(s) in RCA: 162] [Impact Index Per Article: 14.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Linguistic labels (e.g., "chair") seem to activate visual properties of the objects to which they refer. Here we investigated whether language-based activation of visual representations can affect the ability to simply detect the presence of an object. We used continuous flash suppression to suppress visual awareness of familiar objects while they were continuously presented to one eye. Participants made simple detection decisions, indicating whether they saw any image. Hearing a verbal label before the simple detection task changed performance relative to an uninformative cue baseline. Valid labels improved performance relative to no-label baseline trials. Invalid labels decreased performance. Labels affected both sensitivity (d') and response times. In addition, we found that the effectiveness of labels varied predictably as a function of the match between the shape of the stimulus and the shape denoted by the label. Together, the findings suggest that facilitated detection of invisible objects due to language occurs at a perceptual rather than semantic locus. We hypothesize that when information associated with verbal labels matches stimulus-driven activity, language can provide a boost to perception, propelling an otherwise invisible image into awareness.
Affiliation(s)
- Gary Lupyan
- University of Wisconsin-Madison, Madison, WI 53706, USA.
22
Zhang X, Fang F. Object-based attention guided by an invisible object. Exp Brain Res 2012; 223:397-404. [PMID: 22990295 DOI: 10.1007/s00221-012-3268-4] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2012] [Accepted: 09/09/2012] [Indexed: 11/25/2022]
Abstract
Evidence for object-based attention typically comes from studies using displays with visible objects, and little is known about whether object-based attention can occur with invisible objects. We investigated this issue with a modified double-rectangle cuing paradigm, originally developed by Egly et al. (J Exp Psychol Gen 123:161-177, 1994). In this study, low-contrast rectangles were presented very briefly, which rendered them invisible to subjects. With the invisible rectangles, we found a classical object-based attentional effect as indexed by the same-object effect. We also found the instantaneous object effect: object-based attention depended on the orientation of the rectangles presented with the target, providing evidence for the dynamic updating hypothesis (Ho and Yeh in Acta Psychol 132:31-39, 2009). These results suggest that object-based attention can be guided by an invisible object in an automatic way, with minimal influence from high-level top-down control.
Affiliation(s)
- Xilin Zhang
- Department of Psychology and Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, 100871, People's Republic of China