1
Schoffelen JM, Pesci UG, Noppeney U. Alpha Oscillations and Temporal Binding Windows in Perception: A Critical Review and Best Practice Guidelines. J Cogn Neurosci 2024; 36:655-690. [PMID: 38330177] [DOI: 10.1162/jocn_a_02118]
Abstract
An intriguing question in cognitive neuroscience is whether alpha oscillations shape how the brain transforms continuous sensory input into distinct percepts. According to the alpha temporal resolution hypothesis, sensory signals arriving within a single alpha cycle are integrated, whereas those in separate cycles are segregated. Consequently, shorter alpha cycles should be associated with smaller temporal binding windows and higher temporal resolution. However, the evidence supporting this hypothesis is contentious, and the neural mechanisms remain unclear. In this review, we first elucidate the alpha temporal resolution hypothesis and the neural circuitries that generate alpha oscillations. We then critically evaluate study designs, experimental paradigms, psychophysics, and neurophysiological analyses that have been employed to investigate the role of alpha frequency in temporal binding. Through the lens of this methodological framework, we then review evidence from between-subject, within-subject, and causal perturbation studies. Our review highlights the inherent interpretational ambiguities posed by previous study designs and experimental paradigms and the extensive variability in analysis choices across studies. We also suggest best practice recommendations that may help to guide future research. To establish a mechanistic role of alpha frequency in temporal parsing, future research is needed that demonstrates its causal effects on the temporal binding window with consistent, experimenter-independent methods.
Affiliation(s)
- Uta Noppeney
- Donders Institute for Brain, Cognition & Behaviour, Radboud University
2
White PA. The perceptual timescape: Perceptual history on the sub-second scale. Cogn Psychol 2024; 149:101643. [PMID: 38452720] [DOI: 10.1016/j.cogpsych.2024.101643]
Abstract
There is a high-capacity store with a brief time span (∼1000 ms), often called iconic memory or sensory memory, into which information enters from perceptual processing. It is proposed that a main function of this store is to hold recent perceptual information in a temporally segregated representation, named the perceptual timescape. The perceptual timescape is a continually active representation of change and continuity over time that endows the perceived present with a perceived history. This is accomplished primarily by two kinds of time marking information: time distance information, which marks all items of information in the perceptual timescape according to how far in the past they occurred, and ordinal temporal information, which organises items of information in terms of their temporal order. Added to that is information about connectivity of perceptual objects over time. These kinds of information connect individual items over a brief span of time so as to represent change, persistence, and continuity over time. It is argued that there is a one-way street of information flow from perceptual processing either to the perceived present or directly into the perceptual timescape, and thence to working memory. Consistent with that, the information structure of the perceptual timescape supports postdictive reinterpretations of recent perceptual information. Temporal integration on a time scale of hundreds of milliseconds takes place in perceptual processing and does not draw on information in the perceptual timescape, which is concerned with temporal segregation, not integration.
Affiliation(s)
- Peter A White
- School of Psychology, Cardiff University, Tower Building, Park Place, Cardiff, Wales CF10 3YG, United Kingdom.
3
White PA. Time marking in perception. Neurosci Biobehav Rev 2023; 146:105043. [PMID: 36642288] [DOI: 10.1016/j.neubiorev.2023.105043]
Abstract
Several authors have proposed that perceptual information carries labels that identify temporal features, including time of occurrence, ordinal temporal relations, and brief durations. These labels serve to locate and organise perceptual objects, features, and events in time. In some proposals time marking has local, specific functions such as synchronisation of different features in perceptual processing. In other proposals time marking has general significance and is responsible for rendering perceptual experience temporally coherent, just as various forms of spatial information render the visual environment spatially coherent. These proposals, which all concern time marking on the millisecond time scale, are reviewed. It is concluded that time marking is vital to the construction of a multisensory perceptual world in which things are orderly with respect to both space and time, but that much more research is needed to ascertain its functions in perception and its neurophysiological foundations.
Affiliation(s)
- Peter A White
- School of Psychology, Cardiff University, Tower Building, Park Place, Cardiff CF10 3YG, Wales, UK.
4
Ramstead MJD, Seth AK, Hesp C, Sandved-Smith L, Mago J, Lifshitz M, Pagnoni G, Smith R, Dumas G, Lutz A, Friston K, Constant A. From Generative Models to Generative Passages: A Computational Approach to (Neuro)Phenomenology. Rev Philos Psychol 2022; 13:829-857. [PMID: 35317021] [PMCID: PMC8932094] [DOI: 10.1007/s13164-021-00604-y]
Abstract
This paper presents a version of neurophenomenology based on generative modelling techniques developed in computational neuroscience and biology. Our approach can be described as computational phenomenology because it applies methods originally developed in computational modelling to provide a formal model of the descriptions of lived experience in the phenomenological tradition of philosophy (e.g., the work of Edmund Husserl, Maurice Merleau-Ponty, etc.). The first section presents a brief review of the overall project to naturalize phenomenology. The second section presents and evaluates philosophical objections to that project and situates our version of computational phenomenology with respect to these projects. The third section reviews the generative modelling framework. The final section presents our approach in detail. We conclude by discussing how our approach differs from previous attempts to use generative modelling to help understand consciousness. In summary, we describe a version of computational phenomenology which uses generative modelling to construct a computational model of the inferential or interpretive processes that best explain this or that kind of lived experience.
Affiliation(s)
- Maxwell J. D. Ramstead
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- VERSES Research Lab and Spatial Web Foundation, Los Angeles, California, USA
- Anil K. Seth
- School of Engineering and Informatics, University of Sussex, Brighton, BN1 9QJ, UK
- Canadian Institute for Advanced Research (CIFAR), Program on Brain, Mind, and Consciousness, Toronto, Ontario, M5G 1M1, Canada
- Casper Hesp
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Department of Psychology, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, Netherlands
- Amsterdam Brain and Cognition Centre, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, Netherlands
- Institute for Advanced Study, University of Amsterdam, Oude Turfmarkt 147, 1012 GC Amsterdam, Netherlands
- Lars Sandved-Smith
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Lyon Neuroscience Research Centre, INSERM U1028, CNRS UMR5292, Lyon 1 University, Lyon, France
- Jonas Mago
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Integrated Program in Neuroscience, Department of Neuroscience, McGill University, Montreal, Canada
- Division of Social and Transcultural Psychiatry, McGill University, Montreal, Canada
- Michael Lifshitz
- Division of Social and Transcultural Psychiatry, McGill University, Montreal, Canada
- Lady Davis Institute for Medical Research, Montreal Jewish General Hospital, Montreal, Canada
- Giuseppe Pagnoni
- Department of Biomedical, Metabolic and Neural Sciences, University of Modena and Reggio Emilia, Modena, Italy
- Center for Neuroscience and Neurotechnology, University of Modena and Reggio Emilia, Modena, Italy
- Ryan Smith
- Laureate Institute for Brain Research, Tulsa, Oklahoma, USA
- Guillaume Dumas
- CHU Sainte-Justine Research Center, Department of Psychiatry, University of Montreal, Montreal, Canada
- Mila – Quebec Artificial Intelligence Institute, University of Montreal, Montreal, Canada
- Antoine Lutz
- Lyon Neuroscience Research Centre, INSERM U1028, CNRS UMR5292, Lyon 1 University, Lyon, France
- Karl Friston
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- VERSES Research Lab and Spatial Web Foundation, Los Angeles, California, USA
- Axel Constant
- Charles Perkins Centre, The University of Sydney, Sydney, Australia
5
6
Garg U, Yang K, Sengupta A. Emulation of Astrocyte Induced Neural Phase Synchrony in Spin-Orbit Torque Oscillator Neurons. Front Neurosci 2021; 15:699632. [PMID: 34712110] [PMCID: PMC8546188] [DOI: 10.3389/fnins.2021.699632]
Abstract
Astrocytes play a central role in inducing concerted, phase-synchronized neural-wave patterns inside the brain. In this article, we demonstrate that an injected radio-frequency signal in the underlying heavy-metal layer of spin-orbit torque oscillator neurons mimics the neuron phase-synchronization effect realized by glial cells. A potential application of such phase-coupling effects is illustrated in the context of a temporal "binding problem." We also present the design of a coupled neuron-synapse-astrocyte network enabled by compact neuromimetic devices, combining the concepts of local spike-timing-dependent plasticity and astrocyte-induced neural phase synchrony.
Affiliation(s)
- Umang Garg
- School of Electrical Engineering and Computer Science, Department of Materials Science and Engineering, The Pennsylvania State University, University Park, PA, United States
- Department of Electronics and Instrumentation Engineering, Birla Institute of Technology and Science, Pilani, India
- Kezhou Yang
- School of Electrical Engineering and Computer Science, Department of Materials Science and Engineering, The Pennsylvania State University, University Park, PA, United States
- Abhronil Sengupta
- School of Electrical Engineering and Computer Science, Department of Materials Science and Engineering, The Pennsylvania State University, University Park, PA, United States
7
Vieweg P, Müller MM. Shifting Attention in Feature Space: Fast Facilitation of the To-Be-Attended Feature Is Followed by Slow Inhibition of the To-Be-Ignored Feature. J Cogn Neurosci 2020; 33:651-661. [PMID: 33378245] [DOI: 10.1162/jocn_a_01669]
Abstract
In an explorative study, we investigated the time course of attentional selection shifts in feature-based attention in early visual cortex by means of steady-state visual evoked potentials (SSVEPs). To this end, we presented four flickering random dot kinematograms with red/blue, horizontal/vertical bars, respectively. Given the oscillatory nature of SSVEPs, we were able to investigate neural temporal dynamics of facilitation and inhibition/suppression when participants shifted attention either within (i.e., color to color) or between feature dimensions (i.e., color to orientation). Extending a previous study of our laboratory [Müller, M. M., Trautmann, M., & Keitel, C. Early visual cortex dynamics during top-down modulated shifts of feature-selective attention. Journal of Cognitive Neuroscience, 28, 643-655, 2016] to a full factorial design, we replicated a critical finding of our previous study: Facilitation of color was quickest, regardless of the origin of the shift (from color or orientation). Furthermore, facilitation of the newly to-be-attended and inhibition/suppression of the then to-be-ignored feature is not a time-invariant process that occurs instantaneously, but a biphasic one with longer time delays between the two processes. Interestingly, inhibition/suppression of the to-be-ignored feature after the shifting cue had a much longer latency with between- compared to within-dimensional shifts (by about 130-150 msec). Two limiting factors account for the exploratory nature of our study: (a) Identical to our precursor study, we found no attentional SSVEP amplitude time course modulation for orientation, and (b) the signal-to-noise ratio for single trials was too poor to allow for reliable statistical testing of the latencies that were obtained with running t tests of averaged data.
8
Caplette L, Ince RAA, Jerbi K, Gosselin F. Disentangling presentation and processing times in the brain. Neuroimage 2020; 218:116994. [PMID: 32474082] [DOI: 10.1016/j.neuroimage.2020.116994]
Abstract
Visual object recognition seems to occur almost instantaneously. However, not only does it require hundreds of milliseconds of processing, but our eyes also typically fixate the object for hundreds of milliseconds. Consequently, information reaching our eyes at different moments is processed in the brain together. Moreover, information received at different moments during fixation is likely to be processed differently, notably because different features might be selectively attended at different moments. Here, we introduce a novel reverse correlation paradigm that allows us to uncover with millisecond precision the processing time course of specific information received on the retina at specific moments. Using faces as stimuli, we observed that processing at several electrodes and latencies was different depending on the moment at which information was received. Some of these variations were caused by a disruption occurring 160-200 ms after the face onset, suggesting a role of the N170 ERP component in gating information processing; others hinted at temporal compression and integration mechanisms. Importantly, the observed differences were not explained by simple adaptation or repetition priming, they were modulated by the task, and they were correlated with differences in behavior. These results suggest that top-down routines of information sampling are applied to the continuous visual input, even within a single eye fixation.
Affiliation(s)
- Laurent Caplette
- Department of Psychology, Université de Montréal, Montréal, QC, Canada
- Robin A. A. Ince
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Karim Jerbi
- Department of Psychology, Université de Montréal, Montréal, QC, Canada
- Frédéric Gosselin
- Department of Psychology, Université de Montréal, Montréal, QC, Canada
9
Attention Periodically Binds Visual Features As Single Events Depending on Neural Oscillations Phase-Locked to Action. J Neurosci 2019; 39:4153-4161. [PMID: 30886011] [DOI: 10.1523/jneurosci.2494-18.2019]
Abstract
Recent psychophysical studies have demonstrated that periodic attention in the 4-8 Hz range facilitates performance on visual detection. The present study examined the periodicity of feature binding, another major function of attention, in human observers (3 females and 5 males for behavior, with 7 males added for the EEG experiment). In a psychophysical task, observers reported a synchronous pair of brightness (light/dark) and orientation (clockwise/counterclockwise) patterns from two combined brightness-orientation pairs presented in rapid succession. We found that temporal binding performance exhibits periodic oscillations at ∼8 Hz as a function of stimulus onset delay from a self-initiated button press in conditions where brightness-orientation pairs were spatially separated. However, as one would expect from previous studies on pre-attentive binding, significant oscillations were not apparent in conditions where brightness-orientation pairs were spatially superimposed. EEG results, while fully compatible with behavioral oscillations, also revealed a significant dependence of binding performance across trials on prestimulus neural oscillatory phases within the corresponding band. The peak frequency of this dependence was found to be correlated with intertrial phase coherence (ITPC) around the timing of button press in parietal sensors. Moreover, the peak frequency of the ITPC was found to predict behavioral frequency in individual observers. Together, these results suggest that attention operates periodically (at ∼8 Hz) on the perceptual binding of multimodal visual information and is mediated by neural oscillations phase-locked to voluntary action.
SIGNIFICANCE STATEMENT: Recent studies in neuroscience suggest that the brain's attention network operates rhythmically at 4-8 Hz. The present behavioral task revealed that attentional binding of visual features is performed periodically at ∼8 Hz, and EEG analysis showed a dependence of binding performance on prestimulus neural oscillatory phase. Furthermore, this association between perceptual and neural oscillations is triggered by voluntary action. Periodic processes driven by attention appear to contribute not only to sensory processing but also to the temporal binding of diverse information into a conscious event synchronized with action.
10
Abstract
Simultaneity judgments were used to measure temporal binding windows (TBW) for brief binaural events (changes in interaural time and/or level differences [ITD and ILD]) and test the hypothesis that ITD and ILD contribute to perception via separate sensory dimensions subject to binding via slow (100+ ms)—presumably cortical—mechanisms as in multisensory TBW. Stimuli were continuous low-frequency noises that included two brief shifts of either type (ITD or ILD), both of which are heard as lateral position changes. TBW for judgments within a single cue dimension were narrower for ITD (mean = 444 ms) than ILD (807 ms). TBW for judgments across cue dimensions (i.e., one ITD shift and one ILD shift) were similar to within-cue ILD (778 ms). The results contradict the original hypothesis, in that cross-cue comparisons were no slower than within-cue ILD comparisons. Rather, the wide TBW values—consistent with previous estimates of multisensory TBW—suggest slow integrative processing for both types of judgments. Narrower TBW for ITD than ILD judgments suggests important cue-specific differences in the neural mechanisms or the perceptual correlates of integration across binaural-cue dimensions.
11
Montare A. The Simplest Chronoscope V: A Theory of Dual Primary and Secondary Reaction Time Systems. Percept Mot Skills 2016; 123:654-686. [PMID: 27555368] [DOI: 10.1177/0031512516664893]
Abstract
Extending work by Montare, visual simple reaction time, choice reaction time, discriminative reaction time, and overall reaction time scores obtained from college students by the simplest chronoscope (a falling meterstick) method were significantly faster as well as significantly less variable than scores of the same individuals from electromechanical reaction timers (machine method). Results supported the existence of dual reaction time systems: an ancient primary reaction time system theoretically activating the V5 parietal area of the dorsal visual stream that evolved to process significantly faster sensory-motor reactions to sudden stimulations arising from environmental objects in motion, and a secondary reaction time system theoretically activating the V4 temporal area of the ventral visual stream that subsequently evolved to process significantly slower sensory-perceptual-motor reactions to sudden stimulations arising from motionless colored objects.
Affiliation(s)
- Alberto Montare
- Human Learning and Cognition Laboratory, William Paterson University, Wayne, NJ, USA
12
Abstract
Recent findings in neuroscience strongly suggest that an object's features (e.g., its color, texture, shape, etc.) are represented in separate areas of the visual cortex. Although represented in separate neuronal areas, somehow the feature representations are brought together as a single, unified object of visual consciousness. This raises a question of binding: how do neural activities in separate areas of the visual cortex function to produce a feature-unified object of visual consciousness? Several prominent neuroscientists have adopted neural synchrony and attention-based approaches to explain object feature binding. I argue that although neural synchrony and/or attentional mechanisms might function to disambiguate an object's features, it is difficult to see how either of these mechanisms could fully explain the unity of an object's features at the level of visual consciousness. After presenting a detailed critique of neural synchrony and attention-based approaches to object feature binding, I propose interactive hierarchical structuralism (IHS). This view suggests that a unified percept (i.e., a feature-unified object of visual consciousness) is not reducible to the activity of any cognitive capacity or to any localized neural area, but emerges out of the interaction of visual information organized by spatial structuring capacities correlated with lower, higher, and intermediate levels of the visual hierarchy. After clarifying different notions of emergence and elaborating evidence for IHS, I discuss how IHS can be tested through transcranial magnetic stimulation and masking. In the final section I present some further implications/advantages of IHS.
13
Bob P, Pec O, Mishara AL, Touskova T, Lysaker PH. Conscious brain, metacognition and schizophrenia. Int J Psychophysiol 2016; 105:1-8. [DOI: 10.1016/j.ijpsycho.2016.05.003]
14
Zeki S. Multiple asynchronous stimulus- and task-dependent hierarchies (STDH) within the visual brain's parallel processing systems. Eur J Neurosci 2016; 44:2515-2527. [DOI: 10.1111/ejn.13270]
Affiliation(s)
- Semir Zeki
- Wellcome Laboratory of Neurobiology, University College London, London WC1E 6BT, UK
15
Müller MM, Trautmann M, Keitel C. Early Visual Cortex Dynamics during Top-Down Modulated Shifts of Feature-Selective Attention. J Cogn Neurosci 2016; 28:643-655. [DOI: 10.1162/jocn_a_00912]
Abstract
Shifting attention from one color to another color or from color to another feature dimension such as shape or orientation is imperative when searching for a certain object in a cluttered scene. Most attention models that emphasize feature-based selection implicitly assume that all shifts in feature-selective attention underlie identical temporal dynamics. Here, we recorded time courses of behavioral data and steady-state visual evoked potentials (SSVEPs), an objective electrophysiological measure of neural dynamics in early visual cortex to investigate temporal dynamics when participants shifted attention from color or orientation toward color or orientation, respectively. SSVEPs were elicited by four random dot kinematograms that flickered at different frequencies. Each random dot kinematogram was composed of dashes that uniquely combined two features from the dimensions color (red or blue) and orientation (slash or backslash). Participants were cued to attend to one feature (such as color or orientation) and respond to coherent motion targets of the to-be-attended feature. We found that shifts toward color occurred earlier after the shifting cue compared with shifts toward orientation, regardless of the original feature (i.e., color or orientation). This was paralleled in SSVEP amplitude modulations as well as in the time course of behavioral data. Overall, our results suggest different neural dynamics during shifts of attention from color and orientation and the respective shifting destinations, namely, either toward color or toward orientation.
16
Shevell SK, Wang W. Color-motion feature-binding errors are mediated by a higher-order chromatic representation. J Opt Soc Am A Opt Image Sci Vis 2016; 33:A85-A92. [PMID: 26974945] [PMCID: PMC5588901] [DOI: 10.1364/josaa.33.000a85]
Abstract
Peripheral and central moving objects of the same color may be perceived to move in the same direction even though peripheral objects have a different true direction of motion [Nature 429, 262 (2004); 10.1038/429262a]. The perceived, illusory direction of peripheral motion is a color-motion feature-binding error. Recent work shows that such binding errors occur even without an exact color match between central and peripheral objects, and, moreover, the frequency of the binding errors in the periphery declines as the chromatic difference increases between the central and peripheral objects [J. Opt. Soc. Am. A 31, A60 (2014); 10.1364/JOSAA.31.000A60]. This change in the frequency of binding errors with the chromatic difference raises the general question of the chromatic representation from which the difference is determined. Here, basic properties of the chromatic representation are tested to discover whether it depends on independent chromatic differences on the l and the s cardinal axes or, alternatively, on a more specific higher-order chromatic representation. Experimental tests compared the rate of feature-binding errors when the central and peripheral colors had the identical s chromaticity (so zero difference in s) and a fixed magnitude of l difference, while varying the identical s level in center and periphery (thus always keeping the s difference at zero). A chromatic representation based on independent l and s differences would result in the same frequency of color-motion binding errors at every s level. The results are contrary to this prediction, thus showing that the chromatic representation at the level of color-motion feature binding depends on a higher-order chromatic mechanism.
Affiliation(s)
- Steven K. Shevell
- Institute for Mind and Biology, The University of Chicago, 940 East 57th Street, Chicago, Illinois 60637, USA
- Department of Psychology, The University of Chicago, 940 East 57th Street, Chicago, Illinois 60637, USA
- Department of Ophthalmology & Visual Science, The University of Chicago, 940 East 57th Street, Chicago, Illinois 60637, USA
- Wei Wang
- Institute for Mind and Biology, The University of Chicago, 940 East 57th Street, Chicago, Illinois 60637, USA
- Department of Psychology, The University of Chicago, 940 East 57th Street, Chicago, Illinois 60637, USA
17
Abstract
Whether the visual brain uses a parallel or a serial, hierarchical, strategy to process visual signals, the end result appears to be that different attributes of the visual scene are perceived asynchronously, with colour leading form (orientation) by 40 ms and direction of motion by about 80 ms. Whatever the neural root of this asynchrony, it creates a problem that has not been properly addressed, namely how visual attributes that are perceived asynchronously over brief time windows after stimulus onset are bound together in the longer term to give us a unified experience of the visual world, in which all attributes are apparently seen in perfect registration. In this review, I suggest that there is no central neural clock in the (visual) brain that synchronizes the activity of different processing systems. More likely, activity in each of the parallel processing-perceptual systems of the visual brain is reset independently, making of the brain a massively asynchronous organ, just like the new generation of more efficient computers promise to be. Given the asynchronous operations of the brain, it is likely that the results of activities in the different processing-perceptual systems are not bound by physiological interactions between cells in the specialized visual areas, but post-perceptually, outside the visual brain.
Affiliation(s)
- Semir Zeki
- Laboratory of Neurobiology, University College London, London WC1E 6BT, UK
18
Abstract
Area V5 of the visual brain, first identified anatomically in 1969 as a separate visual area, is critical for the perception of visual motion. As one of the most intensively studied parts of the visual brain, it has yielded many insights into how the visual brain operates. Among these are: the diversity of signals that determine the functional capacities of a visual area; the relationship between single cell activity in a specialized visual area and perception of, and preference for, attributes of a visual stimulus; the multiple asynchronous inputs into, and outputs from, an area as well as the multiple operations that it undertakes asynchronously; the relationship between activity at given, specialized, areas of the visual brain and conscious awareness; and the mechanisms used to “bind” signals from one area with those from another, with a different specialization, to give us our unitary perception of the visual world. Hence V5 is, in a sense, a microcosm of the visual world and its study gives important insights into how the whole visual brain is organized—anatomically, functionally and perceptually.
Affiliation(s)
- Semir Zeki
- Wellcome Laboratory of Neurobiology, Cell and Developmental Biology, University College London, London, UK
19
Kanaya S, Fujisaki W, Nishida S, Furukawa S, Yokosawa K. Effects of Frequency Separation and Diotic/Dichotic Presentations on the Alternation Frequency Limits in Audition Derived from a Temporal Phase Discrimination Task. Perception 2015; 44:198-214. [DOI: 10.1068/p7753]
Abstract
Temporal phase discrimination is a useful psychophysical task to evaluate how sensory signals, synchronously detected in parallel, are perceptually bound by human observers. In this task two stimulus sequences synchronously alternate between two states (say, A-B-A-B and X-Y-X-Y) in either of two temporal phases (i.e., A and B are respectively paired with X and Y, or vice versa). The critical alternation frequency beyond which participants cannot discriminate the temporal phase is measured as an index characterizing the temporal property of the underlying binding process. This task has been used to reveal the mechanisms underlying visual and cross-modal bindings. To directly compare these binding mechanisms with those in another modality, this study used the temporal phase discrimination task to reveal the processes underlying auditory bindings. The two sequences were alternations between two pitches. We manipulated the distance between the two sequences by changing the intersequence frequency separation or the presentation ears (diotic vs dichotic). Results showed that the alternation frequency limit ranged from 7 to 30 Hz, becoming higher as the intersequence distance decreased, as is the case with vision. However, unlike vision, auditory phase discrimination limits were higher and more variable across participants.
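The alternating-sequence design described in this abstract can be sketched as a stimulus schedule. The code below is a hypothetical illustration of the paradigm, not code from the cited study; the function name, frame rate, and state labels are assumptions made for the example.

```python
# Sketch of the stimulus schedule for a temporal phase discrimination task
# (hypothetical illustration, not code from the cited study). Two sequences
# alternate between two states at the same rate; on "in-phase" trials state
# A always co-occurs with X, and on "anti-phase" trials A co-occurs with Y.

def phase_schedule(alternation_hz, duration_s, in_phase=True, frame_hz=60):
    """Return per-frame (sequence-1 state, sequence-2 state) pairs."""
    n_frames = int(duration_s * frame_hz)
    half_period = frame_hz / (2.0 * alternation_hz)  # frames per state
    pairs = []
    for f in range(n_frames):
        k = int(f // half_period) % 2          # which half-cycle this frame is in
        s1 = ("A", "B")[k]
        s2 = (("X", "Y") if in_phase else ("Y", "X"))[k]
        pairs.append((s1, s2))
    return pairs

# At a 3 Hz alternation rate on a 60 Hz display, each state lasts 10 frames;
# the observer's task is to report which of the two phases was presented.
trial = phase_schedule(alternation_hz=3, duration_s=1.0, in_phase=False)
```

Raising `alternation_hz` toward the 7-30 Hz limits reported above shortens each state until the two phases become indiscriminable, which is the quantity the task measures.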
Affiliation(s)
- Shoko Kanaya
- Department of Psychology, Graduate School of Humanities and Sociology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
- National Institute of Advanced Industrial Science and Technology (AIST), Japan
- Japan Society for the Promotion of Science
- Waka Fujisaki
- National Institute of Advanced Industrial Science and Technology (AIST), Japan
- Shin'ya Nishida
- NTT Communication Science Laboratories, NTT Corporation, Japan
- Kazuhiko Yokosawa
- Department of Psychology, Graduate School of Humanities and Sociology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
20
Ruhnau P, Hauswald A, Weisz N. Investigating ongoing brain oscillations and their influence on conscious perception - network states and the window to consciousness. Front Psychol 2014; 5:1230. [PMID: 25400608] [PMCID: PMC4214190] [DOI: 10.3389/fpsyg.2014.01230]
Abstract
In cognitive neuroscience, prerequisites of consciousness are of high interest. Within recent years it has become more commonly understood that ongoing brain activity, mainly measured with electrophysiology, can predict whether an upcoming stimulus is consciously perceived. One approach to investigate the relationship between ongoing brain activity and conscious perception is to conduct near-threshold (NT) experiments and focus on the pre-stimulus period. The current review will, in the first part, summarize the main findings of pre-stimulus research from NT experiments, mainly focusing on the alpha band (8–14 Hz). Probably the most prominent finding is that local (mostly sensory) areas show enhanced excitatory states prior to detection of upcoming NT stimuli, as putatively reflected by decreased alpha band power. However, the view of a solely local excitability change seems to be too narrow. In a recent paper, using a somatosensory NT task, Weisz et al. (2014) replicated the common alpha finding and, furthermore, conceptually embedded this finding into a more global framework called “Windows to Consciousness” (Win2Con). In this review, we want to further elaborate on the crucial assumption of “open windows” to conscious perception, determined by pre-established pathways connecting sensory and higher order areas. Methodologically, connectivity and graph theoretical analyses are applied to source-imaging magnetoencephalographic data to uncover brain regions with strong network integration as well as their connection patterns. Sensory regions with stronger network integration will more likely distribute information when confronted with weak NT stimuli, favoring subsequent conscious perception. First experimental evidence confirms our aforementioned “open window” hypothesis. We therefore emphasize that future research on prerequisites of consciousness needs to move on from investigating solely local excitability to a more global view of network connectivity.
Affiliation(s)
- Philipp Ruhnau
- Center for Mind/Brain Sciences, University of Trento, Trento, Italy
- Anne Hauswald
- Center for Mind/Brain Sciences, University of Trento, Trento, Italy
- Nathan Weisz
- Center for Mind/Brain Sciences, University of Trento, Trento, Italy
21
Abstract
Psychophysical experiments show that two different visual attributes, color and motion, processed in different areas of the visual brain, are perceived at different times relative to each other (Moutoussis and Zeki, 1997a). Here we demonstrate psychophysically that two variants of the same attribute, motion, which have the same temporal structure and are processed in the same visual areas, are also processed asynchronously. When subjects were asked to pair up-down motion of dots in one half of their hemifield with up-right motion in the other, they perceived the two directions of motion asynchronously, with the advantage in favor of up-right motion; when they were asked to pair the motion of white dots moving against a black background with that of red dots moving against an equiluminant green background, they perceived the luminant motion first, thus demonstrating a perceptual advantage of luminant over equiluminant motion. These results were not affected by motion speed or perceived motion “streaks.” We thus interpret these results to reflect the different processing times produced by luminant and equiluminant motion stimuli or by different degrees of motion direction change, thus adding to the evidence that processing time within the visual system is a major determinant of perceptual time.
Affiliation(s)
- Yu Tung Lo
- Wellcome Laboratory of Neurobiology, University College London, London, UK
- Semir Zeki
- Wellcome Laboratory of Neurobiology, University College London, London, UK
22
Bob P. Psychophysiology of dissociated consciousness. Curr Top Behav Neurosci 2014; 21:3-21. [PMID: 24850082] [DOI: 10.1007/7854_2014_320]
Abstract
Recent studies of consciousness provide evidence that there is a limit of consciousness, which presents a barrier between conscious and unconscious processes. This barrier is likely manifested as a disturbance of the neural mechanisms of consciousness that, through distributed brain processing, attentional mechanisms, and memory processes, constitute integrative conscious experience. According to recent findings, the level of conscious integration may change under certain conditions related to experimental cognitive manipulations, hypnosis, or stressful experiences, which can lead to dissociation of consciousness. In psychopathological research, the term dissociation was proposed by Pierre Janet to explain processes related to the splitting of consciousness due to traumatic events or during hypnosis. Several recent findings suggest that dissociation of consciousness is likely related to deficits in the global distribution of information and may lead to heightened levels of “neural complexity”, a measure of brain integration or differentiation based on the number of independent neural processes in the brain, which may in turn be related to various mental disorders.
Affiliation(s)
- Petr Bob
- Center for Neuropsychiatric Research of Traumatic Stress, Department of Psychiatry and UHSL, 1st Faculty of Medicine, Charles University, Ke Karlovu 11, 128 00, Prague, Czech Republic
23
Kalbfleisch ML, Debettencourt MT, Kopperman R, Banasiak M, Roberts JM, Halavi M. Environmental influences on neural systems of relational complexity. Front Psychol 2013; 4:631. [PMID: 24133465] [PMCID: PMC3783983] [DOI: 10.3389/fpsyg.2013.00631]
Abstract
Constructivist learning theory contends that we construct knowledge by experience and that environmental context influences learning. To explore this principle, we examined the cognitive process relational complexity (RC), defined as the number of visual dimensions considered during problem solving on a matrix reasoning task and a well-documented measure of mature reasoning capacity. We sought to determine how the visual environment influences RC by examining the influence of color and visual contrast on RC in a neuroimaging task. To specify the contributions of sensory demand and relational integration to reasoning, our participants performed a non-verbal matrix task comprised of color, no-color line, or black-white visual contrast conditions parametrically varied by complexity (relations 0, 1, 2). The use of matrix reasoning is ecologically valid for its psychometric relevance and for its potential to link the processing of psychophysically specific visual properties with various levels of RC during reasoning. The role of these elements is important because matrix tests assess intellectual aptitude based on these seemingly context-less exercises. This experiment is a first step toward examining the psychophysical underpinnings of performance on these types of problems. The importance of this is increased in light of recent evidence that intelligence can be linked to visual discrimination. We submit three main findings. First, color and black-white visual contrast (BWVC) add demand at a basic sensory level, but contributions from color and from BWVC are dissociable in cortex such that color engages a “reasoning heuristic” and BWVC engages a “sensory heuristic.” Second, color supports contextual sense-making by boosting salience resulting in faster problem solving. Lastly, when visual complexity reaches 2-relations, color and visual contrast relinquish salience to other dimensions of problem solving.
Affiliation(s)
- M Layne Kalbfleisch
- KIDLAB, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA; Graduate Neuroscience, College of Science, George Mason University, Fairfax, VA, USA; College of Education and Human Development, George Mason University, Fairfax, VA, USA; Department of Pediatrics, The George Washington School of Medicine and Health Sciences, Washington, DC, USA
24
Vrečko A, Leonardis A, Skočaj D. Modeling binding and cross-modal learning in Markov logic networks. Neurocomputing 2012. [DOI: 10.1016/j.neucom.2012.01.037]
25
26
Cheadle SW, Zeki S. Masking within and across visual dimensions: psychophysical evidence for perceptual segregation of color and motion. Vis Neurosci 2011; 28:445-51. [PMID: 21835096] [PMCID: PMC3472342] [DOI: 10.1017/s0952523811000228]
Abstract
Visual masking can result from the interference of perceptual signals. According to the principle of functional specialization, interference should be greatest when signal and mask belong to the same visual attribute (e.g., color or motion) and least when they belong to different ones. We provide evidence to support this view and show that the time course of masking is visual attribute specific. First, we show that a color target is masked most effectively by color (homogeneous target-mask pair) and least effectively by motion (heterogeneous pair) and vice versa for a motion target. Second, we show that the time at which the mask is most effective depends strongly on the target-mask pairing. Heterogeneous masking is strongest when the mask is presented before the target (forward masking) but this is not true of homogeneous masking. This finding supports a delayed cross-feature interaction due to segregated processing sites. Third, lengthening the stimulus onset asynchrony between target and mask leads to a faster improvement in color than in motion detectability, lending support for a faster color processing system and consistent with reports of perceptual asynchrony in vision. In summary, we present three lines of psychophysical evidence, all of which support a segregated neural coding scheme for color and motion in the human brain.
Affiliation(s)
- Samuel W Cheadle
- Wellcome Laboratory of Neurobiology, Anatomy Department, University College London, London, UK.
27
Kukleta M, Bob P, Brázdil M, Roman R, Rektor I. The level of frontal-temporal beta-2 band EEG synchronization distinguishes anterior cingulate cortex from other frontal regions. Conscious Cogn 2010; 19:879-86. [DOI: 10.1016/j.concog.2010.04.007]
28
Knyazeva MG, Carmeli C, Fornari E, Meuli R, Small M, Frackowiak RS, Maeder P. Binding under conflict conditions: state-space analysis of multivariate EEG synchronization. J Cogn Neurosci 2010; 23:2363-75. [PMID: 20946055] [DOI: 10.1162/jocn.2010.21588]
Abstract
Real-world objects are often endowed with features that violate Gestalt principles. In our experiment, we examined the neural correlates of binding under conflict conditions in terms of the binding-by-synchronization hypothesis. We presented an ambiguous stimulus ("diamond illusion") to 12 observers. The display consisted of four oblique gratings drifting within circular apertures. Its interpretation fluctuates between bound ("diamond") and unbound (component gratings) percepts. To model a situation in which Gestalt-driven analysis contradicts the perceptually explicit bound interpretation, we modified the original diamond (OD) stimulus by speeding up one grating. Using OD and modified diamond (MD) stimuli, we managed to dissociate the neural correlates of Gestalt-related (OD vs. MD) and perception-related (bound vs. unbound) factors. Their interaction was expected to reveal the neural networks synchronized specifically in the conflict situation. The synchronization topography of EEG was analyzed with the multivariate S-estimator technique. We found that good Gestalt (OD vs. MD) was associated with a higher posterior synchronization in the beta-gamma band. The effect of perception manifested itself as reciprocal modulations over the posterior and anterior regions (theta/beta-gamma bands). Specifically, higher posterior and lower anterior synchronization supported the bound percept, and the opposite was true for the unbound percept. The interaction showed that binding under challenging perceptual conditions is sustained by enhanced parietal synchronization. We argue that this distributed pattern of synchronization relates to the processes of multistage integration ranging from early grouping operations in the visual areas to maintaining representations in the frontal networks of sensory memory.
Affiliation(s)
- Maria G Knyazeva
- Centre Hospitalier Universitaire Vaudois (CHUV) and University of Lausanne, Lausanne, Switzerland.
29
Fujisaki W, Nishida S. A common perceptual temporal limit of binding synchronous inputs across different sensory attributes and modalities. Proc Biol Sci 2010; 277:2281-90. [PMID: 20335212] [DOI: 10.1098/rspb.2010.0243]
Abstract
The human brain processes different aspects of the surrounding environment through multiple sensory modalities, and each modality can be subdivided into multiple attribute-specific channels. When the brain rebinds sensory content information ('what') across different channels, temporal coincidence ('when') along with spatial coincidence ('where') provides a critical clue. It however remains unknown whether neural mechanisms for binding synchronous attributes are specific to each attribute combination, or universal and central. In human psychophysical experiments, we examined how combinations of visual, auditory and tactile attributes affect the temporal frequency limit of synchrony-based binding. The results indicated that the upper limits of cross-attribute binding were lower than those of within-attribute binding, and surprisingly similar for any combination of visual, auditory and tactile attributes (2-3 Hz). They are unlikely to be the limits for judging synchrony, since the temporal limit of a cross-attribute synchrony judgement was higher and varied with the modality combination (4-9 Hz). These findings suggest that cross-attribute temporal binding is mediated by a slow central process that combines separately processed 'what' and 'when' properties of a single event. While the synchrony performance reflects temporal bottlenecks existing in 'when' processing, the binding performance reflects the central temporal limit of integrating 'when' and 'what' properties.
Affiliation(s)
- Waka Fujisaki
- National Institute of Advanced Industrial Science and Technology, Tsukuba Central 6, 1-1-1, Higashi, Tsukuba, Ibaraki 305-8566, Japan.
30
Tyler CW, Likova LT. An algebra for the analysis of object encoding. Neuroimage 2009; 50:1243-50. [PMID: 20025978] [DOI: 10.1016/j.neuroimage.2009.10.091]
Abstract
The encoding of objects from the world around us is one of the major topics of cognitive psychology, yet the principles of object coding in the human brain remain unresolved. Beyond referring to the particular features commonly associated with objects, our ability to categorize and discuss objects in detailed linguistic propositions implies that we have access to generic concepts of each object category with well-specified boundaries between them. Consideration of the nature of generic object concepts reveals that they must have the structure of a probabilistic list array specifying the Bayesian prior on all possible features that the object can possess, together with mutual covariance matrices among the features. Generic object concepts must also be largely context independent for propositions to have communicable meaning. Although there is good evidence for local feature processing in the occipital lobe and specific responses for a few basic object categories in the posterior temporal lobe, the encoding of the generic object concepts remains obscure. We analyze the conceptual underpinnings of the study of object encoding, draw some necessary clarifications in relation to its modality-specific and amodal aspects, and propose an analytic algebra with specific reference to functional Magnetic Resonance Imaging approaches to the issue of how generic (amodal) object concepts are encoded in the human brain.
31
Seymour KJ, Scott McDonald J, Clifford CWG. Failure of colour and contrast polarity identification at threshold for detection of motion and global form. Vision Res 2009; 49:1592-8. [PMID: 19341760] [DOI: 10.1016/j.visres.2009.03.022]
Abstract
We used identification at threshold to systematically measure binding costs in two visual modalities. We presented a conjunction of two features as a signal stimulus and concurrently measured detection and identification performance as a function of three threshold variables: duration, contrast and coherence. Discrepancies between detection and identification sensitivity functions demonstrated a consistent processing cost to visual feature binding. Our findings suggest that feature binding is indeed a genuine problem for the brain to solve. This simple paradigm can transfer across arbitrary feature combinations and is therefore suitable to use in experiments addressing mechanisms of sensory integration.
Affiliation(s)
- Kiley J Seymour
- School of Psychology, Colour Form Motion Lab, University of Sydney, Sydney, NSW, Australia.
32
Seymour K, Clifford CW, Logothetis NK, Bartels A. The Coding of Color, Motion, and Their Conjunction in the Human Visual Cortex. Curr Biol 2009; 19:177-83. [PMID: 19185496] [DOI: 10.1016/j.cub.2008.12.050]
33
Holcombe AO. Temporal binding favours the early phase of colour changes, but not of motion changes, yielding the colour–motion asynchrony illusion. Vis Cogn 2009. [DOI: 10.1080/13506280802340653]
34
Cavanagh P, Holcombe AO, Chou W. Mobile computation: spatiotemporal integration of the properties of objects in motion. J Vis 2008; 8:1.1-23. [PMID: 18831615] [DOI: 10.1167/8.12.1]
Abstract
We demonstrate that, as an object moves, color and motion signals from successive, widely spaced locations are integrated, but letter and digit shapes are not. The features that integrate as an object moves match those that integrate when the eyes move but the object is stationary (spatiotopic integration). We suggest that this integration is mediated by large receptive fields gated by attention and that it occurs for surface features (motion and color) that can be summed without precise alignment but not shape features (letters or digits) that require such alignment. Rapidly alternating pairs of colors and motions were presented at several locations around a circle centered at fixation. The same two stimuli alternated at each location with the phase of the alternation reversing from one location to the next. When observers attended to only one location, the stimuli alternated in both retinal coordinates and in the attended stream: feature identification was poor. When the observer's attention shifted around the circle in synchrony with the alternation, the stimuli still alternated at each location in retinal coordinates, but now attention always selected the same color and motion, with the stimulus appearing as a single unchanging object stepping across the locations. The maximum presentation rate at which the color and motion could be reported was twice that for stationary attention, suggesting (as control experiments confirmed) object-based integration of these features. In contrast, the identification of a letter or digit alternating with a mask showed no advantage for moving attention despite the fact that moving attention accessed (within the limits of precision for attentional selection) only the target and never the mask. The masking apparently leaves partial information that cannot be integrated across locations, and we speculate that for spatially defined patterns like letters, integration across large shifts in location may be limited by problems in aligning successive samples. Our results also suggest that as attention moves, the selection of any given location (dwell time) can be as short as 50 ms, far shorter than the typical dwell time for stationary attention. Moving attention can therefore sample a brief instant of a rapidly changing stream if it passes quickly through, giving access to events that are otherwise not seen.
Affiliation(s)
- Patrick Cavanagh
- Department of Psychology, Harvard University, Cambridge, MA, USA.
35
Hocking J, Price CJ. The influence of colour and sound on neuronal activation during visual object naming. Brain Res 2008; 1241:92-102. [PMID: 18789907] [PMCID: PMC2693529] [DOI: 10.1016/j.brainres.2008.08.037]
Abstract
This paper investigates how neuronal activation for naming photographs of objects is influenced by the addition of appropriate colour or sound. Behaviourally, both colour and sound are known to facilitate object recognition from visual form. However, previous functional imaging studies have shown inconsistent effects. For example, the addition of appropriate colour has been shown to reduce antero-medial temporal activation whereas the addition of sound has been shown to increase posterior superior temporal activation. Here we compared the effect of adding colour or sound cues in the same experiment. We found that the addition of either the appropriate colour or sound increased activation for naming photographs of objects in bilateral occipital regions and the right anterior fusiform. Moreover, the addition of colour reduced left antero-medial temporal activation but this effect was not observed for the addition of object sound. We propose that activation in bilateral occipital and right fusiform areas precedes the integration of visual form with either its colour or associated sound. In contrast, left antero-medial temporal activation is reduced because object recognition is facilitated after colour and form have been integrated.
Affiliation(s)
- Julia Hocking
- Centre for Magnetic Resonance, The University of Queensland, Brisbane, Australia.
36
Kent C, Lamberts K. The encoding-retrieval relationship: retrieval as mental simulation. Trends Cogn Sci 2008; 12:92-8. [PMID: 18262827] [DOI: 10.1016/j.tics.2007.12.004]
Abstract
There is increasing evidence to suggest that mental simulations underlie many cognitive processes. We review results from three rapidly developing research areas suggesting that simulations underlie information retrieval. First, neuroimaging work indicates that cortical circuits that were activated during encoding are reactivated during retrieval. Second, retrieval is aided by behavioural re-enactment of processes involved in encoding, including re-enactment of encoding eye movements. Third, the time courses of encoding of visual features and the retrieval of information about those features are related. Overall, the evidence suggests that the often observed interactions between encoding and retrieval result from a cognitive system that, at least partially, reactivates processes that were involved in encoding to retrieve information.
Affiliation(s)
- Christopher Kent
- Department of Experimental Psychology, University of Bristol, 12a Priory Road, Bristol, BS8 1TU, UK.
37
38
Benjamins JS, van der Smagt MJ, Verstraten FAJ. Matching Auditory and Visual Signals: Is Sensory Modality Just Another Feature? Perception 2008; 37:848-58. [DOI: 10.1068/p5783]
Abstract
In order to perceive the world coherently, we need to integrate features of objects and events that are presented to our senses. Here we investigated the temporal limit of integration in unimodal visual and auditory as well as crossmodal auditory–visual conditions. Participants were presented with alternating visual and auditory stimuli and were asked to match them either within or between modalities. At alternation rates of about 4 Hz and higher, participants were no longer able to match visual and auditory stimuli across modalities correctly, while matching within either modality showed higher temporal limits. Manipulating different temporal stimulus characteristics (stimulus offsets and/or auditory–visual SOAs) did not change performance. Interestingly, the difference in temporal limits between crossmodal and unimodal conditions appears strikingly similar to temporal limit differences between unimodal conditions when additional features have to be integrated. We suggest that adding a modality across which sensory input is integrated has the same effect as adding an extra feature to be integrated within a single modality.
Affiliation(s)
- Jeroen S Benjamins
- Experimental Psychology Division, Helmholtz Institute, Utrecht University, Heidelberglaan 2, NL-3584 CS Utrecht, The Netherlands
- Maarten J van der Smagt
- Experimental Psychology Division, Helmholtz Institute, Utrecht University, Heidelberglaan 2, NL-3584 CS Utrecht, The Netherlands
- Frans A J Verstraten
- Experimental Psychology Division, Helmholtz Institute, Utrecht University, Heidelberglaan 2, NL-3584 CS Utrecht, The Netherlands