1. Direct Structural Connections between Auditory and Visual Motion-Selective Regions in Humans. J Neurosci 2021; 41:2393-2405. PMID: 33514674. DOI: 10.1523/jneurosci.1552-20.2021.
Abstract
In humans, the occipital middle-temporal region (hMT+/V5) specializes in the processing of visual motion, while the planum temporale (hPT) specializes in auditory motion processing. It has been hypothesized that these regions might communicate directly to achieve fast and optimal exchange of multisensory motion information. Here we investigated, for the first time in humans (male and female), the presence of direct white matter connections between visual and auditory motion-selective regions using a combined fMRI and diffusion MRI approach. We found evidence supporting the potential existence of direct white matter connections between individually and functionally defined hMT+/V5 and hPT. We show that projections between hMT+/V5 and hPT do not overlap with large white matter bundles such as the inferior longitudinal fasciculus and the inferior frontal occipital fasciculus. Moreover, we did not find evidence suggesting the presence of projections between the fusiform face area and hPT, supporting the functional specificity of hMT+/V5-hPT connections. Finally, the potential presence of hMT+/V5-hPT connections was corroborated in a large sample of participants (n = 114) from the Human Connectome Project. Together, this study provides a first indication of potential direct occipitotemporal projections between hMT+/V5 and hPT, which may support the exchange of motion information between functionally specialized auditory and visual regions.

Significance Statement: Perceiving and integrating moving signals across the senses is arguably one of the most important perceptual skills for the survival of living organisms. To create a unified representation of movement, the brain must integrate motion information from separate senses. Our study provides support for the potential existence of direct connections between motion-selective regions in the occipital/visual (hMT+/V5) and temporal/auditory (hPT) cortices in humans. This connection could represent the structural scaffolding for the rapid and optimal exchange and integration of multisensory motion information. These findings suggest the existence of computationally specific pathways that allow information flow between areas that share a similar computational goal.
2. Sharman RJ, Gheorghiu E. The role of motion and number of element locations in mirror symmetry perception. Sci Rep 2017; 7:45679. PMID: 28374760. PMCID: PMC5379492. DOI: 10.1038/srep45679.
Abstract
The human visual system has specialised mechanisms for encoding mirror symmetry and for detecting symmetric motion directions of objects that loom towards or recede from the observer. The contribution of motion to mirror-symmetry perception, however, has never been investigated. Here we examine symmetry detection thresholds for stationary (static and dynamic-flicker) and symmetrically moving patterns (inwards, outwards, random directions), with and without positional symmetry. We also measured motion detection and direction-discrimination thresholds for horizontally (left, right) and symmetrically moving patterns, with and without positional symmetry. We found that symmetry detection thresholds were (a) significantly higher for static patterns, with no difference between the dynamic-flicker and symmetrical-motion conditions, and (b) higher than motion detection and direction-discrimination thresholds for horizontal or symmetrical motion, with or without positional symmetry. In addition, symmetrical motion was as easy to detect or discriminate as horizontal motion. We conclude that, whilst symmetrical motion per se does not contribute to symmetry perception, limiting the lifetime of pattern elements does improve performance by increasing the number of element locations as elements move from one location to the next. This may be explained by a temporal integration process in which weak, noisy symmetry signals are combined to produce a stronger signal.
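The lifetime argument in the conclusion above, that limited-lifetime moving elements sample more element locations than static ones, can be illustrated with a toy simulation. This is not the paper's stimulus code; the grid size, element counts, and relocation rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def unique_locations(n_elements=100, n_frames=20, moving=True, grid=64):
    """Count the distinct grid locations occupied over a trial by elements
    that either stay put (static pattern) or are relocated every frame,
    a crude stand-in for limited-lifetime motion."""
    if moving:
        pos = rng.integers(0, grid, size=(n_frames, n_elements, 2))
    else:
        start = rng.integers(0, grid, size=(1, n_elements, 2))
        pos = np.broadcast_to(start, (n_frames, n_elements, 2))
    return len({(int(a), int(b)) for a, b in pos.reshape(-1, 2)})

static_n = unique_locations(moving=False)
dynamic_n = unique_locations(moving=True)
# Limited-lifetime motion visits many more distinct locations, giving a
# temporal-integration process more noisy symmetry samples to pool.
```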
Affiliation(s)
- Rebecca J Sharman, University of Stirling, Department of Psychology, Stirling, FK9 4LA, Scotland, United Kingdom
- Elena Gheorghiu, University of Stirling, Department of Psychology, Stirling, FK9 4LA, Scotland, United Kingdom
3. Tannazzo T, Kurylo DD, Bukhari F. Perceptual grouping across eccentricity. Vision Res 2014; 103:101-8. PMID: 25175117. DOI: 10.1016/j.visres.2014.08.011.
Abstract
Across the visual field, progressive differences exist in neural processing as well as in perceptual abilities. Expanding stimulus scale across eccentricity compensates for some basic visual capacities, but not for higher-order functions. It was therefore hypothesized that, as with many higher-order functions, perceptual grouping ability should decline with eccentricity. To test this prediction, psychophysical measurements of grouping were made across eccentricity. Participants indicated the dominant grouping of dot grids in which grouping was based upon luminance, motion, orientation, or proximity. Across trials, the organization of the stimuli was systematically decreased until perceived grouping became ambiguous. For all stimulus features, grouping ability remained relatively stable out to 40°, beyond which thresholds were significantly elevated. The pattern of change across eccentricity varied with stimulus feature, in that stimulus scale, dot size, or stimulus size interacted with eccentricity effects. These results demonstrate that perceptual grouping of such stimuli is not reliant upon foveal viewing, and suggest that selection of dominant grouping patterns from ambiguous displays operates similarly across much of the visual field.
Affiliation(s)
- Teresa Tannazzo, Psychology Department, St. Joseph's College, Patchogue, NY 11772, United States; Psychology Department, Brooklyn College, Brooklyn, NY 11210, United States
- Daniel D Kurylo, Psychology Department, Brooklyn College, Brooklyn, NY 11210, United States
- Farhan Bukhari, Department of Computer Science, The Graduate Center CUNY, New York, NY 10016, United States
4. Nygård GE, Van Looy T, Wagemans J. The influence of orientation jitter and motion on contour saliency and object identification. Vision Res 2009; 49:2475-84. PMID: 19665470. DOI: 10.1016/j.visres.2009.08.002.
Abstract
One of the ultimate goals of vision research is to understand how some elements are grouped together and differentiated from others to form object representations in a complex visual scene. There is an extensive literature on this grouping/segmentation problem, but most studies have used unrecognizable stimuli that have little to do with object recognition per se. We used Gabor-rendered outlines of real-world objects to study the relationships between bottom-up and top-down processes in form perception based on both spatial and motion cues. We manipulated low-level properties, such as element orientation and local motion, while incorporating higher-level properties, such as object complexity and identity, and found that adding local motion improved overall performance in both object detection and object identification tasks. Adding orientation jitter decreased object detection performance in both static and motion conditions, and increased reaction time for identification in the static condition. Orientation jitter had much less effect on reaction times for identification in the local-motion condition than in the static condition. Both contour properties ("good continuation") and object properties (identifiability) had a positive effect on detection and on reaction time for identification.
Affiliation(s)
- Geir Eliassen Nygård, Laboratory of Experimental Psychology, University of Leuven, B-3000 Leuven, Belgium
5. Segaert K, Nygård GE, Wagemans J. Identification of everyday objects on the basis of kinetic contours. Vision Res 2009; 49:417-28. DOI: 10.1016/j.visres.2008.11.012.
6. Caplovitz GP, Tse PU. Rotating dotted ellipses: motion perception driven by grouped figural rather than local dot motion signals. Vision Res 2007; 47:1979-91. PMID: 17548102. DOI: 10.1016/j.visres.2006.12.022.
Abstract
Unlike the motion of a continuous contour, the motion of a single dot is unambiguous and immune to the aperture problem. Here we exploit this fact to explore the conditions under which unambiguous local motion signals are used to drive global percepts of an ellipse undergoing rotation. In previous work, we have shown that a thin, high-aspect-ratio ellipse will appear to rotate faster than a lower-aspect-ratio ellipse even when the two in fact rotate at the same angular velocity [Caplovitz, G. P., Hsieh, P.-J., & Tse, P. U. (2006). Mechanisms underlying the perceived angular velocity of a rigidly rotating object. Vision Research, 46(18), 2877-2893]. In this study we examined the perceived speed of rotation of ellipses defined by a virtual contour made up of evenly spaced dots.

Results: Ellipses defined by closely spaced dots exhibit the speed illusion observed with continuous contours. That is, thin dotted ellipses appear to rotate faster than fat dotted ellipses when both rotate at the same angular velocity. This illusion is not observed if the dots defining the ellipse are spaced too widely apart. A control experiment ruled out low-spatial-frequency "blurring" as the source of the illusory percept.

Conclusion: Even in the presence of local motion signals that are immune to the aperture problem, the global percept of an ellipse undergoing rotation can be driven by potentially ambiguous motion signals arising from the non-local form of the grouped ellipse itself. Here motion perception is driven by emergent motion signals, such as those of virtual contours constructed by grouping procedures. Neither these contours nor their emergent motion signals are present in the image.
Affiliation(s)
- G P Caplovitz, Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
7. Ledgeway T, Hess RF, Geisler WS. Grouping local orientation and direction signals to extract spatial contours: empirical tests of "association field" models of contour integration. Vision Res 2005; 45:2511-22. PMID: 15890381. DOI: 10.1016/j.visres.2005.04.002.
Abstract
Over the last decade or so, a great deal of psychophysical research has attempted to delineate the principles by which local orientations and motions are combined across space to facilitate the detection of simple spatial contours. This has led to the development of "association field" models of contour detection, which suggest that the strength of linking between neighbouring elements in an image is determined by the degree to which they are aligned along smooth (first-order) curves. To test this assumption we used a path detection paradigm to compare the ability of observers to identify the presence of contours defined by spatial orientation, by motion direction, or by specific combinations of both visual attributes. The relative alignment of the local orientations and/or directions with respect to the axis of the depicted contour was systematically varied. For orientation-defined contours, detection was best when the elements were aligned along (parallel with) the contour axis, approached chance levels for obliquely oriented elements, and then improved for elements orthogonal to the contour axis (i.e., performance was a U-shaped function of the degree of orientation misalignment). This pattern of results was found for both straight and curved contours and is not readily explicable in terms of current association field theories. For motion-defined contours, however, performance simply deteriorated as the relative directions of the constituent path elements were progressively misaligned with respect to the contour. Thus the rules by which local orientations are linked to define spatial contours are qualitatively different from those used for linking local directions, and each may be mediated by distinct visual mechanisms. When both orientation and motion cues were simultaneously available, contour detection performance was generally enhanced, in a manner consistent with probability summation. We suggest that association field models of orientation linking may need to be extended in light of the present findings.
Affiliation(s)
- Timothy Ledgeway, School of Psychology, University of Nottingham, University Park, Nottingham, Nottinghamshire NG7 2RD, UK
8. Rainville SJM, Wilson HR. Global shape coding for motion-defined radial-frequency contours. Vision Res 2005; 45:3189-201. PMID: 16099014. DOI: 10.1016/j.visres.2005.06.033.
Abstract
The visual system is highly skilled at recovering the shape of complex objects defined exclusively by motion cues. But while the low-level and high-level mechanisms involved in shape-from-motion have been studied extensively, intermediate computational stages remain poorly understood. In the present study, we used motion-defined radial-frequency contours ("motion RFs") to probe the intermediate stages involved in the computation of motion-defined shape. Motion RFs consisted of a virtual circle of Gabor elements whose carriers drifted at speeds determined by a sinusoidal function of polar angle. Motion RFs elicited vivid percepts of shape, and observers could detect and discriminate radial frequencies up to approximately five cycles. Randomizing Gabor speeds over a small contour segment impaired detection and discrimination performance significantly more than predicted by probability summation. Threshold comparisons between spatial-RF and motion-RF contours ruled out the possibility that motion-induced shifts in perceived position (i.e., the DeValois effect) determine shape perception in motion RFs. Together, the results indicate that the shape of motion RFs is processed by synergistic mechanisms that perform a global analysis of motion cues over space. These results are integrated with data on perceptual interactions between motion RFs and spatial RFs, and are discussed in terms of cue-specific and cue-invariant representations of object shape in human vision.
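The stimulus construction described in this abstract can be sketched as follows: a conventional radial-frequency contour modulates radius sinusoidally with polar angle, whereas a motion RF keeps the elements on a circle and modulates carrier drift speed sinusoidally with polar angle. The function below is a minimal sketch under those assumptions; parameter names and values are illustrative, not the paper's.

```python
import numpy as np

def motion_rf_elements(n_elements=32, radius=1.0, rf_freq=3,
                       base_speed=0.0, speed_amp=1.0, phase=0.0):
    """Place Gabor-element centres on a virtual circle and assign each a
    carrier drift speed that is a sinusoidal function of polar angle,
    as in a motion-defined radial-frequency (RF) contour."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_elements, endpoint=False)
    x = radius * np.cos(theta)
    y = radius * np.sin(theta)
    # The speed profile completes rf_freq cycles around the contour;
    # this modulation frequency is the "radial frequency" of the percept.
    speeds = base_speed + speed_amp * np.sin(rf_freq * theta + phase)
    return x, y, speeds

x, y, v = motion_rf_elements()
```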
Affiliation(s)
- Stéphane J M Rainville, Center for Visual Neuroscience, Department of Psychology, North Dakota State University, Fargo, ND 58105-5075, USA
9. Dakin SC, Mareschal I, Bex PJ. Local and global limitations on direction integration assessed using equivalent noise analysis. Vision Res 2005; 45:3027-49. PMID: 16171844. DOI: 10.1016/j.visres.2005.07.037.
Abstract
We used an equivalent noise (EN) paradigm to examine how the human visual system pools local estimates of direction across space in order to encode global direction. Observers estimated the mean direction (clockwise or counter-clockwise of vertical) of a field of moving band-pass elements whose directions were drawn from a wrapped normal distribution. By measuring discrimination thresholds for mean direction as a function of directional variance, we were able to infer both the precision of observers' representation of each element's direction (i.e., local noise) and how many of these estimates they were averaging (i.e., global pooling). We estimated EN for various numbers of moving elements occupying regions of various sizes. We report that both local and global limits on direction integration are determined by the number of elements present in the display (irrespective of their density or the size of the region they occupy), and we go on to show how this dependence can be understood in terms of neural noise. Specifically, we use Monte Carlo simulations to show that a maximum-likelihood operator, operating on pooled directional signals from visual cortex corrupted by Poisson noise, accounts for the psychophysical data across all conditions tested, as well as for motion coherence thresholds (collected under similar experimental conditions). A population vector-averaging scheme (essentially a special case of ML estimation) produces similar predictions but outperforms subjects at high levels of directional variability and fails to predict motion coherence thresholds.
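The equivalent-noise logic in this abstract can be made concrete. In the standard EN model, the discrimination threshold for mean direction follows threshold = sqrt((sigma_int^2 + sigma_ext^2) / n), where sigma_int is the local (internal) noise, sigma_ext the directional standard deviation of the stimulus, and n the number of pooled samples. The simulation below is a generic sketch of that model, not the authors' code; all parameter values are illustrative.

```python
import numpy as np

def en_threshold(sigma_ext, sigma_int, n_pooled):
    """Equivalent-noise prediction: threshold for judging mean direction,
    flat at low external variance (local-noise floor) and rising once
    stimulus variance dominates; n_pooled sets the overall efficiency."""
    return np.sqrt((sigma_int ** 2 + sigma_ext ** 2) / n_pooled)

rng = np.random.default_rng(0)

def percent_correct(mean_dir, sigma_ext, sigma_int, n_pooled, trials=5000):
    """Simulated observer: each element direction is drawn from a normal
    distribution (approximating a wrapped normal for modest variance),
    corrupted by local noise, then averaged; the sign of the average is
    the clockwise/counter-clockwise response."""
    dirs = rng.normal(mean_dir, sigma_ext, size=(trials, n_pooled))
    noisy = dirs + rng.normal(0.0, sigma_int, size=dirs.shape)
    return float(np.mean(np.sign(noisy.mean(axis=1)) == np.sign(mean_dir)))

# Accuracy falls as directional variance rises, tracing out the EN curve.
pc_low = percent_correct(mean_dir=2.0, sigma_ext=1.0, sigma_int=4.0, n_pooled=8)
pc_high = percent_correct(mean_dir=2.0, sigma_ext=20.0, sigma_int=4.0, n_pooled=8)
```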
Affiliation(s)
- Steven C Dakin, Department of Visual Science, Institute of Ophthalmology, University College London, 11-43 Bath Street, London EC1V 9EL, UK
10. Ledgeway T, Hess RF. The spatial frequency and orientation selectivity of the mechanisms that extract motion-defined contours. Vision Res 2005; 46:568-78. PMID: 16182334. DOI: 10.1016/j.visres.2005.08.010.
Abstract
The human visual system can undertake a specialized form of motion integration, one that enables the presence of extended spatial contours to be disambiguated from their backgrounds. We have shown previously that the visual system can selectively integrate local motion signals when their directions are aligned along spatial contours, and that the efficiency of this integration is inversely related to the curvature of the contour involved (Ledgeway, T., & Hess, R. F. (2002). Vision Research, 42, 653-659). This integration primarily involves the direction, rather than the speed, of local motion signals. In the present study, we investigated both the spatial frequency and the orientation tuning of this specialized contour integration process, using a path detection paradigm. The results show that the tuning for spatial frequency is very broad, in line with previous studies that have examined this issue. In contrast, the orientation selectivity of the mechanism mediating contour extraction under these conditions is relatively narrowband. Thus, spatial frequency pooling, but not orientation pooling, appears to take place prior to the extraction of motion-defined contours, a situation different from that previously shown for spatial contours composed of static, oriented elements.
Affiliation(s)
- Timothy Ledgeway, School of Psychology, University of Nottingham, University Park, Nottingham NG7 2RD, UK
11. Peterhans E, Heider B, Baumann R. Neurons in monkey visual cortex detect lines defined by coherent motion of dots. Eur J Neurosci 2005; 21:1091-100. PMID: 15787714. DOI: 10.1111/j.1460-9568.2005.03919.x.
Abstract
Form perception from coherent motion is an important aspect of vision. Representations of one-, two- and three-dimensional forms have been found at various stages of cortical processing using random-dot stimuli, whereas representations of biological objects, such as a walking human being, are concentrated at higher stages of processing. The perception of biological objects can be induced by sparse dot stimuli consisting of a few dots that mark the joints of the human body [G. Johansson (1973) Percept. Psychophys., 14, 201-211]. In the present study, we investigated whether neurons in early visual areas that respond to bars and edges defined by luminance contrast also signal bar-like objects from sparse dot stimuli. We studied single neurons with rows of 3-24 dots that were either collinear or scattered within a rectangular form. These dots were moved coherently on a uniform or dotted background, and human observers perceived them as rigid rods or other bar-like objects. We found neurons in the visual cortex of the awake, behaving monkey that responded to these stimuli and were sensitive to the orientation of these objects, just as they are for conventional bars or edges. Stimulus conditions that failed to induce these percepts in human observers also evoked weaker or no responses in these neurons. We found such neurons with increasing frequency in areas V1, V2 and V3/V3A. The results suggest that the visual cortex detects not only biological objects but also lines and other bar-like objects from sparse dot stimuli, and that this function emerges at an early stage of processing.
12.
Abstract
Our ability to identify alphanumeric characters can be impaired by the presence of nearby features, especially when the target is presented in the peripheral visual field, a phenomenon known as crowding. We measured the effects of motion on acuity and on the spatial extent of crowding. In line with many previous studies, acuity decreased and crowding increased with eccentricity. Acuity also decreased for moving targets, but the absolute size of crowding zones remained relatively invariant with speed at each eccentricity. The two-dimensional shape of crowding zones was measured with a single flanking element on each side of the target. Crowding zones were elongated radially about central vision, relative to tangential zones, and were also asymmetrical: a more peripheral flanking element crowded more effectively than a more foveal one, and a flanking element that moved ahead of the target crowded more effectively than one that trailed behind it. These results reveal asymmetrical, space-time-dependent regions of visual integration that are radially organised about central vision.
Affiliation(s)
- Peter J Bex, Institute of Ophthalmology, University College London, 11-43 Bath Street, London EC1V 9EL, UK