1
Ramezanpour H, Fallah M. The role of temporal cortex in the control of attention. Curr Res Neurobiol 2022; 3:100038. PMID: 36685758; PMCID: PMC9846471; DOI: 10.1016/j.crneur.2022.100038.
Abstract
Attention is an indispensable component of active vision. Contrary to the widely accepted notion that temporal cortex processing primarily focuses on passive object recognition, a series of recent studies emphasizes the role of temporal cortex structures, specifically the superior temporal sulcus (STS) and inferotemporal (IT) cortex, in guiding attention and implementing cognitive programs relevant for behavioral tasks. The goal of this theoretical paper is to advance the hypothesis that the temporal cortex attention network (TAN) contains the components necessary to participate actively in attentional control in a flexible, task-dependent manner. First, we briefly discuss the general architecture of the temporal cortex, with a focus on the STS and IT cortex of monkeys and their modulation by attention. We then review evidence from behavioral and neurophysiological studies that supports their role in guiding attention in the presence of cognitive control signals. Next, we propose a mechanistic framework for executive control of attention in the temporal cortex. Finally, we summarize the role of the temporal cortex in implementing cognitive programs and discuss how these contribute to the dynamic nature of visual attention to ensure flexible behavior.
Affiliation(s)
- Hamidreza Ramezanpour
- Centre for Vision Research, York University, Toronto, Ontario, Canada; School of Kinesiology and Health Science, Faculty of Health, York University, Toronto, Ontario, Canada; VISTA: Vision Science to Application, York University, Toronto, Ontario, Canada. Corresponding author.
- Mazyar Fallah
- Centre for Vision Research, York University, Toronto, Ontario, Canada; School of Kinesiology and Health Science, Faculty of Health, York University, Toronto, Ontario, Canada; VISTA: Vision Science to Application, York University, Toronto, Ontario, Canada; Department of Psychology, Faculty of Health, York University, Toronto, Ontario, Canada; Department of Human Health and Nutritional Sciences, College of Biological Science, University of Guelph, Guelph, Ontario, Canada. Corresponding author.
2
Chaplin TA, Rosa MGP, Lui LL. Auditory and Visual Motion Processing and Integration in the Primate Cerebral Cortex. Front Neural Circuits 2018; 12:93. PMID: 30416431; PMCID: PMC6212655; DOI: 10.3389/fncir.2018.00093.
Abstract
The ability of animals to detect motion is critical for survival, and errors or even delays in motion perception may prove costly. In the natural world, moving objects in the visual field often produce concurrent sounds. Thus, it can be highly advantageous to detect motion from sensory signals of either modality, and to integrate them to produce more reliable motion perception. A great deal of progress has been made in understanding how visual motion perception is governed by the activity of single neurons in the primate cerebral cortex, but far less progress has been made in understanding auditory motion and audiovisual motion integration. Here, we review the key cortical regions for motion processing, focusing on translational motion. We compare the representations of space and motion in the visual and auditory systems, and examine how single neurons in these two sensory systems encode the direction of motion. We also discuss the way in which humans integrate auditory and visual motion cues, and the regions of the cortex that may mediate this process.
Affiliation(s)
- Tristan A Chaplin
- Neuroscience Program, Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council (ARC) Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, VIC, Australia
- Marcello G P Rosa
- Neuroscience Program, Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council (ARC) Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, VIC, Australia
- Leo L Lui
- Neuroscience Program, Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council (ARC) Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, VIC, Australia
3
Elmer S, Meyer M, Jäncke L. The spatiotemporal characteristics of elementary audiovisual speech and music processing in musically untrained subjects. Int J Psychophysiol 2012; 83:259-68. DOI: 10.1016/j.ijpsycho.2011.09.011.
4
Kajikawa Y, Falchier A, Musacchia G, Lakatos P, Schroeder C. Audiovisual Integration in Nonhuman Primates. Front Neurosci 2011. DOI: 10.1201/9781439812174-8.
6
Kaposvári P, Csibri P, Csete G, Tompa T, Sáry G. Auditory modulation of the inferior temporal cortex neurons in rhesus monkey. Physiol Res 2011; 60:S93-9. PMID: 21777030; DOI: 10.33549/physiolres.932172.
Abstract
We performed a systematic study to check whether neurons in area TE (the anterior part of the inferotemporal cortex) of the rhesus monkey, regarded as the last stage of the ventral visual pathway, can be modulated by auditory stimuli. Two fixating rhesus monkeys were presented with visual, auditory or combined audiovisual stimuli while neuronal responses were recorded. We found that the visually sensitive neurons are also modulated by audiovisual stimuli, a modulation manifested as a change in response rate. Our results also showed that the visual neurons responded to auditory stimuli alone. The concept of unimodal information processing in the inferotemporal cortex should therefore be re-evaluated.
Affiliation(s)
- P Kaposvári
- Department of Physiology, University of Szeged, Szeged, Hungary
7
Tompa T, Sáry G. A review on the inferior temporal cortex of the macaque. Brain Res Rev 2010; 62:165-82. PMID: 19853626; DOI: 10.1016/j.brainresrev.2009.10.001.
8
Cappe C, Rouiller EM, Barone P. Multisensory anatomical pathways. Hear Res 2009; 258:28-36. PMID: 19410641; DOI: 10.1016/j.heares.2009.04.017.
Affiliation(s)
- C Cappe
- The Functional Electrical Neuroimaging Laboratory, Neuropsychology and Neurorehabilitation Service and Radiology Service, Centre Hospitalier Universitaire Vaudois and University of Lausanne, rue du Bugnon 46, 1011 Lausanne, Switzerland.
9
van Wassenhove V, Grant KW, Poeppel D. Visual speech speeds up the neural processing of auditory speech. Proc Natl Acad Sci U S A 2005; 102:1181-6. PMID: 15647358; PMCID: PMC545853; DOI: 10.1073/pnas.0408949102.
Abstract
Synchronous presentation of stimuli to the auditory and visual systems can modify the formation of a percept in either modality. For example, perception of auditory speech is improved when the speaker's facial articulatory movements are visible. Neural convergence onto multisensory sites exhibiting supra-additivity has been proposed as the principal mechanism for integration. Recent findings, however, have suggested that putative sensory-specific cortices are responsive to inputs presented through a different modality. Consequently, when and where audiovisual representations emerge remain unsettled. In combined psychophysical and electroencephalography experiments we show that visual speech speeds up the cortical processing of auditory signals early (within 100 ms of signal onset). The auditory-visual interaction is reflected as an articulator-specific temporal facilitation (as well as a nonspecific amplitude reduction). The latency facilitation systematically depends on the degree to which the visual signal predicts possible auditory targets. The observed auditory-visual data support the view that there exist abstract internal representations that constrain the analysis of subsequent speech inputs. This is evidence for the existence of an "analysis-by-synthesis" mechanism in auditory-visual speech perception.
Affiliation(s)
- Virginie van Wassenhove
- Neuroscience and Cognitive Science Program and Department of Biology, University of Maryland, College Park, MD 20742, USA
10
Calvert GA, Thesen T. Multisensory integration: methodological approaches and emerging principles in the human brain. J Physiol Paris 2005; 98:191-205. PMID: 15477032; DOI: 10.1016/j.jphysparis.2004.03.018.
Abstract
Understanding the conditions under which the brain integrates the different sensory streams, and the mechanisms supporting this phenomenon, is now a question at the forefront of neuroscience. In this paper, we discuss the opportunities for investigating these multisensory processes using modern imaging techniques, the nature of the information obtainable from each method, and their benefits and limitations. Despite considerable variability in terms of paradigm design and analysis, some consistent findings are beginning to emerge. The detection of brain activity in human neuroimaging studies that resembles multisensory integration responses at the cellular level in other species suggests that similar crossmodal binding mechanisms may be operational in the human brain. These mechanisms appear to be distributed across distinct neuronal networks that vary depending on the nature of the shared information between different sensory cues. For example, differing extents of correspondence in time, space or content seem to reliably bias the involvement of the different integrative networks that code for these cues. A combination of data obtained from haemodynamic and electromagnetic methods, which offer high spatial or temporal resolution respectively, is providing converging evidence of multisensory interactions at both "early" and "late" stages of processing, suggesting a cascade of synergistic processes operating in parallel at different levels of the cortex.
Affiliation(s)
- Gemma A Calvert
- University Laboratory of Physiology, University of Oxford, Parks Road, Oxford OX1 3PT, UK.
11
Arndt PA, Colonius H. Two stages in crossmodal saccadic integration: evidence from a visual-auditory focused attention task. Exp Brain Res 2003; 150:417-26. PMID: 12728291; DOI: 10.1007/s00221-003-1424-6.
Abstract
Saccadic reaction time (SRT) toward a visual target stimulus was measured under simultaneous presentation of an auditory non-target (accessory stimulus). Horizontal position of the target was varied (25 degrees left and right of fixation) as well as position and intensity of the auditory accessory. SRT was reduced under the presence of the accessory, and it decreased both with increasing intensity of the auditory accessory and with decreasing distance between target and accessory. The absence of a significant interaction between distance and auditory intensity suggests (1) that the intensity of the accessory stimulus has no direct influence on the process of crossmodal integration, and (2) that spatial position and intensity of the accessory are processed in separate stages. This was supported by a probability inequality test showing that the amount of neural coactivation depends on spatial distance but not on auditory intensity. The results are discussed in the framework of a two-stage model assuming separate processing of unimodal and bimodal characteristics of the stimuli. These results are related to several recent neurophysiological findings.
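The "probability inequality test" referred to above is, presumably, the race-model inequality (Miller, 1982): if the bimodal cumulative RT distribution exceeds the sum of the two unimodal distributions at any probe time, a race between independent unimodal detectors cannot account for the speed-up, and neural coactivation is inferred. A minimal sketch with hypothetical saccadic RT data (not the authors' code or data):

```python
import numpy as np

def race_model_violation(rt_av, rt_a, rt_v, qs=np.linspace(0.05, 0.95, 19)):
    """Test Miller's race-model inequality on reaction times.

    At each probe time t, an independent race of unimodal detectors
    requires P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t).
    Returns True if the empirical bimodal CDF exceeds that bound
    anywhere, i.e. evidence for neural coactivation.
    """
    probes = np.quantile(np.concatenate([rt_av, rt_a, rt_v]), qs)
    cdf = lambda rts, t: np.mean(np.asarray(rts) <= t)
    return any(cdf(rt_av, t) > cdf(rt_a, t) + cdf(rt_v, t) for t in probes)

# Hypothetical RTs (ms): bimodal trials markedly faster than either unimodal condition
rng = np.random.default_rng(0)
rt_a = rng.normal(260, 30, 500)
rt_v = rng.normal(250, 30, 500)
rt_av = rng.normal(190, 25, 500)

print(race_model_violation(rt_av, rt_a, rt_v))
```

A bimodal distribution this much faster than both unimodal ones violates the bound; if the bimodal RTs merely matched the faster unimodal condition, the inequality would hold and no coactivation would be inferred.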
Affiliation(s)
- Petra A Arndt
- Institut für Kognitionsforschung, Universität Oldenburg, FB 5-A6, 26111 Oldenburg, Germany.
12
Keele SW, Ivry R, Mayr U, Hazeltine E, Heuer H. The cognitive and neural architecture of sequence representation. Psychol Rev 2003; 110:316-39. PMID: 12747526; DOI: 10.1037/0033-295x.110.2.316.
Abstract
The authors theorize that 2 neurocognitive sequence-learning systems can be distinguished in serial reaction time experiments, one dorsal (parietal and supplementary motor cortex) and the other ventral (temporal and lateral prefrontal cortex). Dorsal-system learning is implicit and associates noncategorized stimuli within dimensional modules. Ventral-system learning can be implicit or explicit. It also allows associating events across dimensions and therefore is the basis of cross-task integration or interference, depending on the degree of cross-task correlation of signals. Accordingly, lack of correlation rather than limited capacity is responsible for dual-task effects on learning. The theory is relevant to issues of attentional effects on learning; the representational basis of complex, sequential skills; hippocampal- versus basal ganglia-based learning; procedural versus declarative memory; and implicit versus explicit memory.
13
Calvert GA, Hansen PC, Iversen SD, Brammer MJ. Detection of audio-visual integration sites in humans by application of electrophysiological criteria to the BOLD effect. Neuroimage 2001; 14:427-38. PMID: 11467916; DOI: 10.1006/nimg.2001.0812.
Abstract
Electrophysiological studies in nonhuman primates and other mammals have shown that sensory cues from different modalities that appear at the same time and in the same location can increase the firing rate of multisensory cells in the superior colliculus to a level exceeding that predicted by summing the responses to the unimodal inputs. In contrast, spatially disparate multisensory cues can induce a profound response depression. We have previously demonstrated using functional magnetic resonance imaging (fMRI) that similar indices of crossmodal facilitation and inhibition are detectable in human cortex when subjects listen to speech while viewing visually congruent and incongruent lip and mouth movements. Here, we have used fMRI to investigate whether similar BOLD signal changes are observable during the crossmodal integration of nonspeech auditory and visual stimuli, matched or mismatched solely on the basis of their temporal synchrony, and if so, whether these crossmodal effects occur in similar brain areas to those identified during the integration of audio-visual speech. Subjects were exposed to synchronous and asynchronous auditory (white noise bursts) and visual (B/W alternating checkerboard) stimuli and to each modality in isolation. Synchronous and asynchronous bimodal inputs produced superadditive BOLD response enhancement and response depression across a large network of polysensory areas. The most highly significant of these crossmodal gains and decrements were observed in the superior colliculi. Other regions exhibiting these crossmodal interactions included cortex within the superior temporal sulcus, intraparietal sulcus, insula, and several foci in the frontal lobe, including within the superior and ventromedial frontal gyri. These data demonstrate the efficacy of using an analytic approach informed by electrophysiology to identify multisensory integration sites in humans and suggest that the particular network of brain areas implicated in these crossmodal integrative processes is dependent on the nature of the correspondence between the different sensory inputs (e.g. space, time, and/or form).
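The superadditivity criterion described above can be made concrete: a site is flagged when its response to the bimodal stimulus exceeds the sum of its responses to the unimodal stimuli. A sketch of that criterion applied voxel-wise to hypothetical percent-signal-change values (an illustration only, not the authors' analysis pipeline):

```python
import numpy as np

def crossmodal_index(bold_av, bold_a, bold_v):
    """Per-voxel superadditivity index: AV - (A + V).

    Positive values flag response enhancement beyond the summed
    unimodal responses (superadditive); negative values flag
    response depression. Inputs: percent BOLD signal change.
    """
    return np.asarray(bold_av) - (np.asarray(bold_a) + np.asarray(bold_v))

# Hypothetical percent-signal-change values for three voxels:
# one superadditive, one depressed, one exactly additive
idx = crossmodal_index([1.2, 0.4, 0.9], [0.5, 0.4, 0.5], [0.4, 0.3, 0.4])
print(idx)
```

In practice, the index would be computed per voxel from condition-wise GLM estimates and thresholded statistically rather than inspected raw.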
Affiliation(s)
- G A Calvert
- Oxford Centre for Functional Magnetic Resonance Imaging of the Brain (FMRIB), Oxford, OX3 1DU, UK
14
Kerkhoff G, Artinger F, Ziegler W. Contrasting spatial hearing deficits in hemianopia and spatial neglect. Neuroreport 1999; 10:3555-60. PMID: 10619643; DOI: 10.1097/00001756-199911260-00017.
Abstract
Spatial hearing deficits have been described in widely differing pathologies, including bilateral temporal or unilateral parietal lesions, hemispherectomy, spatial neglect and right-sided cortical lesions without neglect. However, the topography of spatial hearing deficits after cortical lesions is only poorly understood, unlike that of vision and touch. We investigated the auditory subjective straight ahead (SSA) with a new technique of binaural sound source simulation using broad-band single pulses, which were filtered with head-related transfer functions and delivered over headphones with a 5 degree resolution in frontal space. Normal subjects showed quite accurate judgments of the SSA, with a small but significant shift to the left of centre (-1.7 degrees) in the horizontal plane. Hemineglect without a scotoma produced a large ipsilesional deviation of the auditory SSA (+22 degrees), while two hemianopic subjects, both without neglect, showed the opposite deviation of their perceived auditory SSA towards their contralesional, blind hemifield (+10 vs -28 degrees). Two control patients with unilateral lesions, both without neglect and without hemianopia, produced normal judgments of their auditory SSA (-3.0 degrees, +3.8 degrees). These results suggest at least two contrasting influences on directional spatial hearing after unilateral cortical lesions: hemianopia vs hemispatial neglect. The results are interpreted in favour of multisensory convergence of visual and auditory information in directional spatial hearing.
Affiliation(s)
- G Kerkhoff
- EKN-Clinical Neuropsychology Research Group, Department of Neuropsychology, City Hospital Bogenhausen, Munich, Germany
15
Giard MH, Peronnet F. Auditory-visual integration during multimodal object recognition in humans: a behavioral and electrophysiological study. J Cogn Neurosci 1999; 11:473-90. PMID: 10511637; DOI: 10.1162/089892999563544.
Abstract
The aim of this study was (1) to provide behavioral evidence for multimodal feature integration in an object recognition task in humans and (2) to characterize the processing stages and the neural structures where multisensory interactions take place. Event-related potentials (ERPs) were recorded from 30 scalp electrodes while subjects performed a forced-choice reaction-time categorization task: at each trial, the subjects had to indicate which of two objects was presented by pressing one of two keys. The two objects were defined by auditory features alone, visual features alone, or the combination of auditory and visual features. Subjects were more accurate and rapid at identifying multimodal than unimodal objects. Spatiotemporal analysis of ERPs and scalp current densities revealed several auditory-visual interaction components that were temporally, spatially, and functionally distinct before 200 msec poststimulus. The effects observed were (1) in visual areas, new neural activities (as early as 40 msec poststimulus) and modulation (amplitude decrease) of the N185 wave to the unimodal visual stimulus, (2) in the auditory cortex, modulation (amplitude increase) of subcomponents of the unimodal auditory N1 wave around 90 to 110 msec, and (3) new neural activity over the right fronto-temporal area (140 to 165 msec). Furthermore, when the subjects were separated into two groups according to their dominant modality for performing the task in unimodal conditions (shortest reaction time criterion), the integration effects were found to be similar for the two groups over the nonspecific fronto-temporal areas, but they clearly differed in the sensory-specific cortices, affecting predominantly the sensory areas of the nondominant modality. Taken together, the results indicate that multisensory integration is mediated by flexible, highly adaptive physiological processes that can take place very early in the sensory processing chain and operate in both sensory-specific and nonspecific cortical structures in different ways.
Affiliation(s)
- M H Giard
- INSERM-U280, 151 Cours Albert Thomas, F-69424 Lyon Cedex 03, France.
16
Savaki HE, Dalezios Y. 14C-deoxyglucose mapping of the monkey brain during reaching to visual targets. Prog Neurobiol 1999; 58:473-540. PMID: 10408655; DOI: 10.1016/s0301-0082(98)00080-x.
Abstract
The strategies used by the macaque monkey brain in controlling the performance of a reaching movement to a visual target have been studied by the quantitative autoradiographic 14C-DG method. Experiments on visually intact monkeys reaching to a visual target indicate that V1 and V2 convey visuomotor information to the cortex of the superior temporal and parietoccipital sulci, which may encode the position of the moving forelimb, and to the cortex in the ventral part and lateral bank of the intraparietal sulcus, which may encode the location of the visual target. The involvement of the medial bank of the intraparietal sulcus in proprioceptive guidance of movement is also suggested on the basis of the parallel metabolic effects estimated in this region and in the forelimb representations of the primary somatosensory and motor cortices. The network including the inferior postarcuate skeletomotor and prearcuate oculomotor cortical fields and the caudal periprincipal area 46 may participate in sensory-to-motor and oculomotor-to-skeletomotor transformations, in parallel with the medial and lateral intraparietal cortices. Experiments on split-brain monkeys reaching to visual targets revealed that reaching is always controlled by the hemisphere contralateral to the moving forelimb, whether it is visually intact or 'blind'. Two supplementary mechanisms compensate for the 'blindness' of the hemisphere controlling the moving forelimb. First, the information about the location of the target is derived from head and eye movements and is sent to the 'blind' hemisphere via inferior parietal cortical areas, while the information about the forelimb position is derived from proprioceptive mechanisms and is sent via the somatosensory and superior parietal cortices. Second, the cerebellar hemispheric extensions of vermian lobules V, VI and VIII, ipsilateral to the moving forelimb, combine visual and oculomotor information about the target position, relayed by the 'seeing' cerebral hemisphere, with sensorimotor information concerning cortical intended and peripheral actual movements of the forelimb, and then send this integrated information back to the motor cortex of the 'blind' hemisphere, thus enabling it to guide the contralateral forelimb to the target.
Affiliation(s)
- H E Savaki
- Department of Basic Sciences, School of Health Sciences, University of Crete, Iraklion, Greece.
17
Stein BE, Wallace MT. Comparisons of cross-modality integration in midbrain and cortex. Prog Brain Res 1996; 112:289-99. PMID: 8979836; DOI: 10.1016/s0079-6123(08)63336-1.
Abstract
Multisensory neurons are abundant in the superior colliculus and anterior ectosylvian cortex of the cat. Despite the fact that these areas receive inputs from different regions, and are likely to be involved in different functional roles, these multisensory neurons have many fundamental similarities. They all have multiple receptive fields, one for each sensory input, and these receptive fields overlap one another. It is this spatial correspondence among receptive fields that determines the manner in which both populations of neurons integrate the inputs they receive from different sensory channels. Several principles of integration characterize both cortical and midbrain multisensory neurons, and these constancies in the fundamentals of cross-modality integration are likely to provide a basis for coherence at different levels of the neuraxis. Yet there are also obvious differences in these populations of multisensory neurons. Cortical receptive fields are significantly larger than those in the midbrain, have a lower incidence of suppressive surrounds, and exhibit less cross-modality inhibitory interaction than those in the midbrain. Presumably, these differences reflect a greater emphasis on non-spatial aspects of cross-modality integration in cortex than is required by the orientation and localization functions mediated by the superior colliculus.
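In this literature, the strength of cross-modality integration in a single neuron is conventionally quantified (following Meredith and Stein) as the percentage change of the combined-modality response relative to the strongest unimodal response. A minimal illustration with hypothetical spike counts, not data from this chapter:

```python
def enhancement_index(combined, best_unimodal):
    """Multisensory enhancement (%) as commonly defined in this
    literature: (CM - SMmax) / SMmax * 100, where CM is the response
    to combined-modality stimulation and SMmax is the strongest
    single-modality response.
    """
    return 100.0 * (combined - best_unimodal) / best_unimodal

# Hypothetical mean evoked spike counts per trial
print(enhancement_index(combined=18.0, best_unimodal=8.0))  # enhancement
print(enhancement_index(combined=4.0, best_unimodal=8.0))   # depression
```

Positive values indicate response enhancement (spatially aligned cues); negative values indicate the response depression seen with spatially disparate cues.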
Affiliation(s)
- B E Stein
- Department of Neurobiology and Anatomy, Bowman Gray School of Medicine/Wake Forest University, Winston-Salem, NC 27157-1010, USA
18
Watanabe J, Iwai E. Neuronal activity in monkey visual areas V1, V2, V4 and TEO during fixation task. Brain Res Bull 1996; 40:143-50. PMID: 8724433; DOI: 10.1016/0361-9230(95)02147-7.
Abstract
We analyzed 577 neurons recorded from visual areas V1, V2, V4, and the inferotemporal area (TEO) of macaque monkeys performing a visual fixation task and a spot-off-on (blink) test during the fixation period. Among these neurons, 35% were defined as task-related cells because they gave responses during the task-start, fixation, or task-end periods but were unresponsive to the spot blink, which was physically identical to these stimuli. Blink-responsive cells accounted for 29% and task-unresponsive cells for 30% of the neurons. The task-related response was large and frequent in V4 (34%) and TEO (41%), but small and less frequent in V1 (31%) and V2 (27%). Other observations further demonstrated nonsensory activities in these areas: in some cells, the response to the fixation spot was inhibitory, whereas light stimulation on the fovea was excitatory; some V1 and V2 cells had color-irrelevant responses; and some cells responded to the spot-off only when the monkey regarded it as a task-end cue.
Affiliation(s)
- J Watanabe
- Department of Behavioral Physiology, Tokyo Metropolitan Institute for Neuroscience, Japan