26. Gepner R, Wolk J, Wadekar DS, Dvali S, Gershow M. Variance adaptation in navigational decision making. eLife 2018; 7:e37945. [PMID: 30480547] [PMCID: PMC6257812] [DOI: 10.7554/elife.37945]
Abstract
Sensory systems relay information about the world to the brain, which enacts behaviors through motor outputs. To maximize information transmission, sensory systems discard redundant information through adaptation to the mean and variance of the environment. The behavioral consequences of sensory adaptation to environmental variance have been largely unexplored. Here, we study how larval fruit flies adapt sensory-motor computations underlying navigation to changes in the variance of visual and olfactory inputs. We show that variance adaptation can be characterized by rescaling of the sensory input and that for both visual and olfactory inputs, the temporal dynamics of adaptation are consistent with optimal variance estimation. In multisensory contexts, larvae adapt independently to variance in each sense, and portions of the navigational pathway encoding mixed odor and light signals are also capable of variance adaptation. Our results suggest multiplication as a mechanism for odor-light integration.
27. Jalali S, Martin SE, Murphy CP, Solomon JA, Yarrow K. Classification Videos Reveal the Visual Information Driving Complex Real-World Speeded Decisions. Front Psychol 2018; 9:2229. [PMID: 30524338] [PMCID: PMC6256113] [DOI: 10.3389/fpsyg.2018.02229]
Abstract
Humans can rapidly discriminate complex scenarios as they unfold in real time, for example during law enforcement or, more prosaically, driving and sport. Such decision-making improves with experience, as new sources of information are exploited. For example, sports experts are able to predict the outcome of their opponent's next action (e.g., a tennis stroke) based on kinematic cues "read" from preparatory body movements. Here, we explore the use of psychophysical classification-image techniques to reveal how participants interpret complex scenarios. We used sport as a test case, filming tennis players serving and hitting ground strokes, each with two possible directions. These videos were presented to novices and club-level amateurs, running from 0.8 s before to 0.2 s after racquet-ball contact. During practice, participants anticipated shot direction under a time limit targeting 90% accuracy. Participants then viewed videos through Gaussian windows ("bubbles") placed at random in the temporal, spatial or spatiotemporal domains. Comparing bubbles from correct and incorrect trials revealed how information from different regions contributed toward a correct response. Temporally, only later frames of the videos supported accurate responding (from ~0.05 s before ball contact to 0.1 s afterwards). Spatially, information was accrued from the ball's trajectory and from the opponent's head. Spatiotemporal bubbles again highlighted ball trajectory information, but seemed susceptible to an attentional cuing artifact, which may caution against their wider use. Overall, bubbles proved effective in revealing regions of information accrual, and could thus be applied to help understand choice behavior in a range of ecologically valid situations.
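The temporal-bubbles logic described here — comparing where randomly placed Gaussian windows fell on correct versus incorrect trials — can be illustrated with a minimal temporal-domain sketch (a toy simulated observer in Python; all names, parameters, and the observer model are illustrative assumptions, not the authors' stimuli or code):

```python
import numpy as np

rng = np.random.default_rng(0)

def temporal_bubbles_ci(n_trials=2000, n_frames=50, informative=slice(40, 50)):
    """Toy simulation of temporal bubbles: accuracy rises when a random
    Gaussian window exposes the informative late frames, and the
    classification image (correct minus incorrect mask average)
    recovers that informative epoch."""
    frames = np.arange(n_frames)
    correct_sum = np.zeros(n_frames)
    incorrect_sum = np.zeros(n_frames)
    n_correct = n_incorrect = 0
    for _ in range(n_trials):
        centre = rng.uniform(0, n_frames)
        mask = np.exp(-0.5 * ((frames - centre) / 3.0) ** 2)  # one temporal bubble
        # visibility of the informative epoch drives accuracy (toy observer)
        p_correct = 0.5 + 0.45 * mask[informative].mean()
        if rng.random() < p_correct:
            correct_sum += mask
            n_correct += 1
        else:
            incorrect_sum += mask
            n_incorrect += 1
    return correct_sum / n_correct - incorrect_sum / n_incorrect

ci = temporal_bubbles_ci()
# ci is elevated over the informative late frames and flat elsewhere
```

The same subtraction generalizes to spatial and spatiotemporal bubbles by replacing the 1-D frame index with pixel or pixel-by-frame coordinates.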
28. Strategic and Dynamic Temporal Weighting for Perceptual Decisions in Humans and Macaques. eNeuro 2018; 5:ENEURO.0169-18.2018. [PMID: 30406190] [PMCID: PMC6220584] [DOI: 10.1523/eneuro.0169-18.2018]
Abstract
Perceptual decision-making is often modeled as the accumulation of sensory evidence over time. Recent studies using psychophysical reverse correlation have shown that even though the sensory evidence is stationary over time, subjects may exhibit a time-varying weighting strategy, weighting some stimulus epochs more heavily than others. While previous work has explained time-varying weighting as a consequence of static decision mechanisms (e.g., decision bound or leak), here we show that time-varying weighting can reflect strategic adaptation to stimulus statistics, and thus can readily take a number of forms. We characterized the temporal weighting strategies of humans and macaques performing a motion discrimination task in which the amount of information carried by the motion stimulus was manipulated over time. Both species could adapt their temporal weighting strategy to match the time-varying statistics of the sensory stimulus. When early stimulus epochs had higher mean motion strength than late, subjects adopted a pronounced early weighting strategy, where early information was weighted more heavily in guiding perceptual decisions. When the mean motion strength was greater in later stimulus epochs, in contrast, subjects shifted to a marked late weighting strategy. These results demonstrate that perceptual decisions involve a temporally flexible weighting process in both humans and monkeys, and introduce a paradigm with which to manipulate sensory weighting in decision-making tasks.
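Psychophysical reverse correlation of the kind used in this study can be sketched as follows (Python; the linear observer and its early-weighting profile are hypothetical, chosen only to show how a temporal kernel is computed from choice-sorted stimulus fluctuations):

```python
import numpy as np

rng = np.random.default_rng(1)

def psychophysical_kernel(weights, n_trials=20000):
    """Recover an observer's temporal weighting profile by psychophysical
    reverse correlation: average the stimulus fluctuations separately for
    each choice and take the difference."""
    n_t = len(weights)
    stim = rng.normal(0.0, 1.0, size=(n_trials, n_t))  # motion-energy fluctuations
    evidence = stim @ weights                          # weighted accumulation
    choice = evidence > 0
    kernel = stim[choice].mean(0) - stim[~choice].mean(0)
    return kernel / np.abs(kernel).max()               # normalised shape

true_w = np.linspace(1.0, 0.2, 10)  # hypothetical early-weighting observer
k = psychophysical_kernel(true_w)
# k decreases over stimulus epochs, matching the early-weighting strategy
```

For a Gaussian stimulus and a linear observer the expected kernel is proportional to the true weights, which is why the recovered shape tracks the weighting strategy.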
29. Pleskac TJ, Yu S, Hopwood C, Liu T. Mechanisms of deliberation during preferential choice: Perspectives from computational modeling and individual differences. Decision 2018; 6:77-107. [PMID: 30643838] [DOI: 10.1037/dec0000092]
Abstract
Computational models of decision making typically assume that as people deliberate between options they mentally simulate outcomes from each one and integrate valuations of these outcomes to form a preference. In two studies, we investigated this deliberation process using a task in which participants made a series of decisions between a certain and an uncertain option, shown as dynamic visual samples that represented possible payoffs. We developed and validated a method of reverse correlational analysis for the task that measures how this time-varying signal was used to make a choice. The first study used this method to examine how information processing during deliberation differed from a perceptual analog of the task. We found participants were less sensitive to each sample of information during preferential choice. In a second study, we investigated how these different measures of deliberation were related to impulsivity and drug and alcohol use. We found that while properties of the deliberation process were not related to impulsivity, some aspects of the process may be related to substance use. In particular, alcohol abuse was related to diminished sensitivity to the payoff information, and drug use was related to the initial starting point of evidence accumulation. We synthesized our results with a rank-dependent sequential sampling model, which suggests that participants allocated more attentional weight to larger potential payoffs during preferential choice.
30. Liu M, Sharma AK, Shaevitz JW, Leifer AM. Temporal processing and context dependency in Caenorhabditis elegans response to mechanosensation. eLife 2018; 7:e36419. [PMID: 29943731] [PMCID: PMC6054533] [DOI: 10.7554/elife.36419]
Abstract
A quantitative understanding of how sensory signals are transformed into motor outputs places useful constraints on brain function and helps to reveal the brain's underlying computations. We investigate how the nematode Caenorhabditis elegans responds to time-varying mechanosensory signals using a high-throughput optogenetic assay and automated behavior quantification. We find that the behavioral response is tuned to temporal properties of mechanosensory signals, such as their integral and derivative, that extend over many seconds. Mechanosensory signals, even in the same neurons, can be tailored to elicit different behavioral responses. Moreover, we find that the animal's response also depends on its behavioral context. Most dramatically, the animal ignores all tested mechanosensory stimuli during turns. Finally, we present a linear-nonlinear model that predicts the animal's behavioral response to stimulus.
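A minimal linear-nonlinear (LN) sketch of the kind this abstract describes — a seconds-long filter followed by a static nonlinearity mapping filtered drive to response probability — might look like this (Python; the biphasic filter shape and logistic gain are illustrative assumptions, not the paper's fitted model):

```python
import numpy as np

def ln_response(stimulus, dt=0.1):
    """Minimal linear-nonlinear model: a biphasic filter (sensitive to both
    the stimulus integral and its derivative, extending over seconds)
    followed by a saturating nonlinearity giving response probability."""
    t = np.arange(0, 10, dt)                                 # 10 s filter support
    lin_filter = np.exp(-t / 2.0) - 0.5 * np.exp(-t / 4.0)   # biphasic kernel
    drive = np.convolve(stimulus, lin_filter, mode="full")[:len(stimulus)] * dt
    return 1.0 / (1.0 + np.exp(-4.0 * drive))                # logistic nonlinearity

# a step increase in mechanosensory input transiently raises response probability
step = np.concatenate([np.zeros(100), np.ones(200)])
p = ln_response(step)
```

Because the toy filter integrates to zero, the predicted response adapts back toward baseline under sustained stimulation, one simple way to capture sensitivity to temporal derivatives.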
31. Rychlowska M, Jack RE, Garrod OGB, Schyns PG, Martin JD, Niedenthal PM. Functional Smiles: Tools for Love, Sympathy, and War. Psychol Sci 2017; 28:1259-1270. [PMID: 28741981] [DOI: 10.1177/0956797617706082]
Abstract
A smile is the most frequent facial expression, but not all smiles are equal. A social-functional account holds that smiles of reward, affiliation, and dominance serve basic social functions, including rewarding behavior, bonding socially, and negotiating hierarchy. Here, we characterize the facial-expression patterns associated with these three types of smiles. Specifically, we modeled the facial expressions using a data-driven approach and showed that reward smiles are symmetrical and accompanied by eyebrow raising, affiliative smiles involve lip pressing, and dominance smiles are asymmetrical and contain nose wrinkling and upper-lip raising. A Bayesian-classifier analysis and a detection task revealed that the three smile types are highly distinct. Finally, social judgments made by a separate participant group showed that the different smile types convey different social messages. Our results provide the first detailed description of the physical form and social messages conveyed by these three types of functional smiles and document the versatility of these facial expressions.
32. Crosse MJ, Di Liberto GM, Bednar A, Lalor EC. The Multivariate Temporal Response Function (mTRF) Toolbox: A MATLAB Toolbox for Relating Neural Signals to Continuous Stimuli. Front Hum Neurosci 2016; 10:604. [PMID: 27965557] [PMCID: PMC5127806] [DOI: 10.3389/fnhum.2016.00604]
Abstract
Understanding how brains process sensory signals in natural environments is one of the key goals of twenty-first century neuroscience. While brain imaging and invasive electrophysiology will play key roles in this endeavor, there is also an important role to be played by noninvasive, macroscopic techniques with high temporal resolution such as electro- and magnetoencephalography. But challenges exist in determining how best to analyze such complex, time-varying neural responses to complex, time-varying and multivariate natural sensory stimuli. There has been a long history of applying system identification techniques to relate the firing activity of neurons to complex sensory stimuli and such techniques are now seeing increased application to EEG and MEG data. One particular example involves fitting a filter—often referred to as a temporal response function—that describes a mapping between some feature(s) of a sensory stimulus and the neural response. Here, we first briefly review the history of these system identification approaches and describe a specific technique for deriving temporal response functions known as regularized linear regression. We then introduce a new open-source toolbox for performing this analysis. We describe how it can be used to derive (multivariate) temporal response functions describing a mapping between stimulus and response in both directions. We also explain the importance of regularizing the analysis and how this regularization can be optimized for a particular dataset. We then outline specifically how the toolbox implements these analyses and provide several examples of the types of results that the toolbox can produce. Finally, we consider some of the limitations of the toolbox and opportunities for future development and application.
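The core analysis the toolbox implements — regularized (ridge) linear regression on a time-lagged stimulus matrix — can be sketched outside MATLAB as follows (Python; a minimal illustration of the method, not the mTRF Toolbox API; all variable names are assumptions):

```python
import numpy as np

def fit_trf(stimulus, response, n_lags=20, lam=1.0):
    """Ridge-regularised estimate of a temporal response function (TRF):
    build a lagged design matrix X from the stimulus, then solve the
    regularised least-squares problem w = (X'X + lam*I)^-1 X'y."""
    n = len(stimulus)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:n - lag]        # stimulus at time t - lag
    XtX = X.T @ X + lam * np.eye(n_lags)
    return np.linalg.solve(XtX, X.T @ response)

# simulated recovery: white stimulus filtered by a known TRF plus noise
rng = np.random.default_rng(2)
stim = rng.normal(size=5000)
true_trf = np.exp(-np.arange(20) / 5.0) * np.sin(np.arange(20) / 2.0)
resp = np.convolve(stim, true_trf, mode="full")[:5000] + rng.normal(scale=0.1, size=5000)
w = fit_trf(stim, resp)
# w closely recovers true_trf when the stimulus is white and the noise modest
```

In practice the regularization parameter `lam` is chosen by cross-validation, and the same machinery runs "backward" (response predicting stimulus) for decoding.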
33. Ince RAA, Jaworska K, Gross J, Panzeri S, van Rijsbergen NJ, Rousselet GA, Schyns PG. The Deceptively Simple N170 Reflects Network Information Processing Mechanisms Involving Visual Feature Coding and Transfer Across Hemispheres. Cereb Cortex 2016; 26:4123-4135. [PMID: 27550865] [PMCID: PMC5066825] [DOI: 10.1093/cercor/bhw196]
Abstract
A key to understanding visual cognition is to determine “where”, “when”, and “how” brain responses reflect the processing of the specific visual features that modulate categorization behavior—the “what”. The N170 is the earliest Event-Related Potential (ERP) that preferentially responds to faces. Here, we demonstrate that a paradigmatic shift is necessary to interpret the N170 as the product of an information processing network that dynamically codes and transfers face features across hemispheres, rather than as a local stimulus-driven event. Reverse-correlation methods coupled with information-theoretic analyses revealed that visibility of the eyes influences face detection behavior. The N170 initially reflects coding of the behaviorally relevant eye contralateral to the sensor, followed by a causal communication of the other eye from the other hemisphere. These findings demonstrate that the deceptively simple N170 ERP hides a complex network information processing mechanism involving initial coding and subsequent cross-hemispheric transfer of visual features.
34. Inagaki M, Sasaki KS, Hashimoto H, Ohzawa I. Subspace mapping of the three-dimensional spectral receptive field of macaque MT neurons. J Neurophysiol 2016; 116:784-795. [PMID: 27193321] [DOI: 10.1152/jn.00934.2015]
Abstract
Neurons in the middle temporal (MT) visual area are thought to represent the velocity (direction and speed) of motion. Previous studies suggest the importance of both excitation and suppression for creating velocity representation in MT; however, details of the organization of excitation and suppression at the MT stage are not understood fully. In this article, we examine how excitatory and suppressive inputs are pooled in individual MT neurons by measuring their receptive fields in a three-dimensional (3-D) spatiotemporal frequency domain. We recorded the activity of single MT neurons from anesthetized macaque monkeys. To achieve both quality and resolution of the receptive field estimations, we applied a subspace reverse correlation technique in which a stimulus sequence of superimposed multiple drifting gratings was cross-correlated with the spiking activity of neurons. Excitatory responses tended to be organized in a manner representing a specific velocity independent of the spatial pattern of the stimuli. Conversely, suppressive responses tended to be distributed broadly over the 3-D frequency domain, supporting a hypothesis of response normalization. Despite the nonspecific distributed profile, the total summed strength of suppression was comparable to that of excitation in many MT neurons. Furthermore, suppressive responses reduced the bandwidth of velocity tuning, indicating that suppression improves the reliability of velocity representation. Our results suggest that both well-organized excitatory inputs and broad suppressive inputs contribute significantly to the invariant and reliable representation of velocity in MT.
35.
Abstract
As information flows through the brain, neuronal firing progresses from encoding the world as sensed by the animal to driving the motor output of subsequent behavior. One of the more tractable goals of quantitative neuroscience is to develop predictive models that relate the sensory or motor streams with neuronal firing. Here we review and contrast analytical tools used to accomplish this task. We focus on classes of models in which the external variable is compared with one or more feature vectors to extract a low-dimensional representation, the history of spiking and other variables are potentially incorporated, and these factors are nonlinearly transformed to predict the occurrences of spikes. We illustrate these techniques in application to datasets of different degrees of complexity. In particular, we address the fitting of models in the presence of strong correlations in the external variable, as occurs in natural sensory stimuli and in movement. Spectral correlation between predicted and measured spike trains is introduced to contrast the relative success of different methods.
36. Feature-based face representations and image reconstruction from behavioral and neural data. Proc Natl Acad Sci U S A 2015; 113:416-421. [PMID: 26711997] [DOI: 10.1073/pnas.1514551112]
Abstract
The reconstruction of images from neural data can provide a unique window into the content of human perceptual representations. Although recent efforts have established the viability of this enterprise using functional magnetic resonance imaging (MRI) patterns, these efforts have relied on a variety of prespecified image features. Here, we take on the twofold task of deriving features directly from empirical data and of using these features for facial image reconstruction. First, we use a method akin to reverse correlation to derive visual features from functional MRI patterns elicited by a large set of homogeneous face exemplars. Then, we combine these features to reconstruct novel face images from the corresponding neural patterns. This approach allows us to estimate collections of features associated with different cortical areas as well as to successfully match image reconstructions to corresponding face exemplars. Furthermore, we establish the robustness and the utility of this approach by reconstructing images from patterns of behavioral data. From a theoretical perspective, the current results provide key insights into the nature of high-level visual representations, and from a practical perspective, these findings make possible a broad range of image-reconstruction applications via a straightforward methodological approach.
37. Elijah DH, Samengo I, Montemurro MA. Thalamic neuron models encode stimulus information by burst-size modulation. Front Comput Neurosci 2015; 9:113. [PMID: 26441623] [PMCID: PMC4585143] [DOI: 10.3389/fncom.2015.00113]
Abstract
Thalamic neurons have been long assumed to fire in tonic mode during perceptive states, and in burst mode during sleep and unconsciousness. However, recent evidence suggests that bursts may also be relevant in the encoding of sensory information. Here, we explore the neural code of such thalamic bursts. In order to assess whether the burst code is generic or whether it depends on the detailed properties of each bursting neuron, we analyzed two neuron models incorporating different levels of biological detail. One of the models contained no information of the biophysical processes entailed in spike generation, and described neuron activity at a phenomenological level. The second model represented the evolution of the individual ionic conductances involved in spiking and bursting, and required a large number of parameters. We analyzed the models' input selectivity using reverse correlation methods and information theory. We found that n-spike bursts from both models transmit information by modulating their spike count in response to changes to instantaneous input features, such as slope, phase, amplitude, etc. The stimulus feature that is most efficiently encoded by bursts, however, need not coincide with one of such classical features. We therefore searched for the optimal feature among all those that could be expressed as a linear transformation of the time-dependent input current. We found that bursting neurons transmitted 6 times more information about such more general features. The relevant events in the stimulus were located in a time window spanning ~100 ms before and ~20 ms after burst onset. Most importantly, the neural code employed by the simple and the biologically realistic models was largely the same, implying that the simple thalamic neuron model contains the essential ingredients that account for the computational properties of the thalamic burst code. Thus, our results suggest the n-spike burst code is a general property of thalamic neurons.
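The reverse-correlation step described here can be illustrated with a generic event-triggered average (Python; the ramp-shaped preferred feature and the threshold-crossing event model are toy assumptions, not the thalamic neuron models analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def event_triggered_average(stimulus, events, window=40):
    """Reverse correlation: average the stimulus in a window preceding
    each event (spike or burst onset) to estimate the feature the
    cell is selective for."""
    times = np.flatnonzero(events)
    segs = [stimulus[t - window:t] for t in times if t >= window]
    return np.mean(segs, axis=0)

# toy bursting cell: an event occurs when a linearly filtered stimulus
# crosses a high threshold; the filter (a ramp) is the preferred feature
stim = rng.normal(size=100000)
feature = np.linspace(0.0, 1.0, 40)                  # hypothetical preferred ramp
drive = np.convolve(stim, feature[::-1], mode="full")[:100000]
events = drive > np.quantile(drive, 0.98)
sta = event_triggered_average(stim, events, window=40)
# sta rises toward event time, recovering the ramp shape up to scale
```

Extending this from spike-triggered to n-spike-burst-triggered averages, one average per burst size, is the kind of analysis that exposes burst-size modulation.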
38. Piché M, Thomas S, Casanova C. Spatiotemporal profiles of receptive fields of neurons in the lateral posterior nucleus of the cat LP-pulvinar complex. J Neurophysiol 2015; 114:2390-2403. [PMID: 26289469] [DOI: 10.1152/jn.00649.2015]
Abstract
The pulvinar is the largest extrageniculate thalamic visual nucleus in mammals. It establishes reciprocal connections with virtually all visual cortexes and likely plays a role in transthalamic cortico-cortical communication. In cats, the lateral posterior nucleus (LP) of the LP-pulvinar complex can be subdivided in two subregions, the lateral (LPl) and medial (LPm) parts, which receive a predominant input from the striate cortex and the superior colliculus, respectively. Here, we revisit the receptive field structure of LPl and LPm cells in anesthetized cats by determining their first-order spatiotemporal profiles through reverse correlation analysis following sparse noise stimulation. Our data reveal the existence of previously unidentified receptive field profiles in the LP nucleus both in space and time domains. While some cells responded to only one stimulus polarity, the majority of neurons had receptive fields comprised of bright and dark responsive subfields. For these neurons, dark subfields' size was larger than that of bright subfields. A variety of receptive field spatial organization types were identified, ranging from totally overlapped to segregated bright and dark subfields. In the time domain, a large spectrum of activity overlap was found, from cells with temporally coinciding subfield activity to neurons with distinct, time-dissociated subfield peak activity windows. We also found LP neurons with space-time inseparable receptive fields and neurons with multiple activity periods. Finally, a substantial degree of homology was found between LPl and LPm first-order receptive field spatiotemporal profiles, suggesting a high integration of cortical and subcortical inputs within the LP-pulvinar complex.
39. Encoding of yaw in the presence of distractor motion: studies in a fly motion sensitive neuron. J Neurosci 2015; 35:6481-6494. [PMID: 25904799] [DOI: 10.1523/jneurosci.4256-14.2015]
Abstract
Motion estimation is crucial for aerial animals such as the fly, which perform fast and complex maneuvers while flying through a 3-D environment. Motion-sensitive neurons in the lobula plate, a part of the visual brain, of the fly have been studied extensively for their specialized role in motion encoding. However, the visual stimuli used in such studies are typically highly simplified, often move in restricted ways, and do not represent the complexities of optic flow generated during actual flight. Here, we use combined rotations about different axes to study how H1, a wide-field motion-sensitive neuron, encodes preferred yaw motion in the presence of stimuli not aligned with its preferred direction. Our approach is an extension of "white noise" methods, providing a framework that is readily adaptable to quantitative studies into the coding of mixed dynamic stimuli in other systems. We find that the presence of a roll or pitch ("distractor") stimulus reduces information transmitted by H1 about yaw, with the amount of this reduction depending on the variance of the distractor. Spike generation is influenced by features of both yaw and the distractor, where the degree of influence is determined by their relative strengths. Certain distractor features may induce bidirectional responses, which are indicative of an imbalance between global excitation and inhibition resulting from complex optic flow. Further, the response is shaped by the dynamics of the combined stimulus. Our results provide intuition for plausible strategies involved in efficient coding of preferred motion from complex stimuli having multiple motion components.
40. Jones PR, Moore DR, Amitay S. Development of auditory selective attention: why children struggle to hear in noisy environments. Dev Psychol 2015; 51:353-369. [PMID: 25706591] [PMCID: PMC4337492] [DOI: 10.1037/a0038570]
Abstract
Children's hearing deteriorates markedly in the presence of unpredictable noise. To explore why, 187 school-age children (4-11 years) and 15 adults performed a tone-in-noise detection task, in which the masking noise varied randomly between every presentation. Selective attention was evaluated by measuring the degree to which listeners were influenced by (i.e., gave weight to) each spectral region of the stimulus. Psychometric fits were also used to estimate levels of internal noise and bias. Levels of masking were found to decrease with age, becoming adult-like by 9-11 years. This change was explained by improvements in selective attention alone, with older listeners better able to ignore noise similar in frequency to the target. Consistent with this, age-related differences in masking were abolished when the noise was made more distant in frequency to the target. This work offers novel evidence that improvements in selective attention are critical for the normal development of auditory judgments.
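The weighting analysis described here — measuring how much each spectral region of the stimulus influenced responses — can be sketched as a trial-by-trial correlation between band-level fluctuations and the listener's responses (Python; the simulated listener and its weights are hypothetical, and real analyses typically use probit or logistic regression rather than raw correlation):

```python
import numpy as np

rng = np.random.default_rng(4)

def spectral_weights(band_levels, responses):
    """Estimate decision weights per spectral band: correlate the
    trial-to-trial level fluctuations in each band with the binary
    responses (point-biserial correlation per band)."""
    r = responses - responses.mean()
    z = (band_levels - band_levels.mean(0)) / band_levels.std(0)
    return (z * r[:, None]).mean(0) / r.std()

# toy listener attending mostly to the target band (band 2 of 5)
n_trials, n_bands = 5000, 5
levels = rng.normal(size=(n_trials, n_bands))   # random per-band noise levels
true_w = np.array([0.1, 0.3, 1.0, 0.3, 0.1])    # hypothetical attention weights
resp = (levels @ true_w + rng.normal(size=n_trials) > 0).astype(float)
w_hat = spectral_weights(levels, resp)
# w_hat peaks at the attended band and falls off for remote bands
```

A listener with poor selective attention would show a flat weight profile, giving weight to off-frequency bands; a selective listener shows a sharp peak at the target band.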
41.
Abstract
Complex animal behaviors are built from dynamical relationships between sensory inputs, neuronal activity, and motor outputs in patterns with strategic value. Connecting these patterns illuminates how nervous systems compute behavior. Here, we study Drosophila larva navigation up temperature gradients toward preferred temperatures (positive thermotaxis). By tracking the movements of animals responding to fixed spatial temperature gradients or random temperature fluctuations, we calculate the sensitivity and dynamics of the conversion of thermosensory inputs into motor responses. We discover three thermosensory neurons in each dorsal organ ganglion (DOG) that are required for positive thermotaxis. Random optogenetic stimulation of the DOG thermosensory neurons evokes behavioral patterns that mimic the response to temperature variations. In vivo calcium and voltage imaging reveals that the DOG thermosensory neurons exhibit activity patterns with sensitivity and dynamics matched to the behavioral response. Temporal processing of temperature variations carried out by the DOG thermosensory neurons emerges in distinct motor responses during thermotaxis.
42. Meso AI, Chemla S. Perceptual fields reveal previously hidden dynamics of human visual motion sensitivity. J Neurophysiol 2014; 114:1360-1363. [PMID: 25339713] [DOI: 10.1152/jn.00698.2014]
Abstract
Motion sensitivity is a fundamental property of human vision. Although its neural correlates are normally only directly accessible with neurophysiological approaches, Neri (Neri P. J Neurosci 34: 8449-8491, 2014) proposed psychophysical reverse correlation to derive perceptual fields, revealing previously unseen dynamics of human motion detection. This Neuro Forum discusses these key findings, places them in a broader context, and points out how spatial-scale considerations may bear on the interpretation of the findings and of the proposed dynamic model.
43.
Abstract
In the early stages of image analysis, visual cortex represents scenes as spatially organized maps of locally defined features (e.g., edge orientation). As image reconstruction unfolds and features are assembled into larger constructs, cortex attempts to recover semantic content for object recognition. It is conceivable that higher level representations may feed back onto early processes and retune their properties to align with the semantic structure projected by the scene; however, there is no clear evidence to either support or discard the applicability of this notion to the human visual system. Obtaining such evidence is challenging because low and higher level processes must be probed simultaneously within the same experimental paradigm. We developed a methodology that targets both levels of analysis by embedding low-level probes within natural scenes. Human observers were required to discriminate probe orientation while semantic interpretation of the scene was selectively disrupted via stimulus inversion or reversed playback. We characterized the orientation tuning properties of the perceptual process supporting probe discrimination; tuning was substantially reshaped by semantic manipulation, demonstrating that low-level feature detectors operate under partial control from higher level modules. The manner in which such control was exerted may be interpreted as a top-down predictive strategy whereby global semantic content guides and refines local image reconstruction. We exploit the novel information gained from data to develop mechanistic accounts of unexplained phenomena such as the classic face inversion effect.
|
44
|
Pernet CR, Belin P, Jones A. Behavioral evidence of a dissociation between voice gender categorization and phoneme categorization using auditory morphed stimuli. Front Psychol 2014; 4:1018. [PMID: 24474943 PMCID: PMC3893619 DOI: 10.3389/fpsyg.2013.01018] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2013] [Accepted: 12/23/2013] [Indexed: 11/29/2022] Open
Abstract
Both voice gender perception and speech perception rely on neuronal populations located in the peri-sylvian areas. However, whilst functional imaging studies suggest a left vs. right hemisphere and anterior vs. posterior dissociation between voice and speech categorization, psycholinguistic studies on talker variability suggest that these two processes share common mechanisms. In this study, we investigated the categorical perception of voice gender (male vs. female) and phonemes (/pa/ vs. /ta/) using the same stimulus continua generated by morphing. This allowed the investigation of behavioral differences while controlling acoustic characteristics, since the same stimuli were used in both tasks. Despite a higher acoustic dissimilarity between items in the phoneme categorization task (a male and a female voice producing the same phonemes) than in the gender task (the same person producing two phonemes), results showed that speech information is processed much faster than voice information. In addition, f0 or timbre equalization did not affect reaction times (RTs), which disagrees with classical psycholinguistic models in which voice information is stripped away or normalized to access the phonetic content. Also, despite similar average response (percentage) and perceptual (d') curves, a reverse correlation analysis of acoustic features revealed that only the vowel formant frequencies distinguished stimuli in the gender task, whilst, as expected, the consonant formant frequencies distinguished stimuli in the phoneme task. This second set of results thus also disagrees with models postulating that the same acoustic information is used for voice and speech. Altogether, these results suggest that voice gender categorization and phoneme categorization are dissociated at an early stage on the basis of different enhanced acoustic features that are diagnostic to the task at hand.
|
45
|
Abstract
To understand how different spatial frequencies contribute to the overall perceived contrast of complex, broadband photographic images, we adapted the classification image paradigm. Using natural images as stimuli, we randomly varied relative contrast amplitude at different spatial frequencies and had human subjects determine which images had higher contrast. Then, we determined how the random variations corresponded with the human judgments. We found that the overall contrast of an image is disproportionately determined by how much contrast is between 1 and 6 c/°, around the peak of the contrast sensitivity function (CSF). We then employed the basic components of contrast psychophysics modeling to show that the CSF alone is not enough to account for our results and that an increase in gain control strength toward low spatial frequencies is necessary. One important consequence of this is that contrast constancy, the apparent independence of suprathreshold perceived contrast and spatial frequency, will not hold during viewing of natural images. We also found that images with darker low-luminance regions tended to be judged as having higher overall contrast, which we interpret as the consequence of darker local backgrounds resulting in higher band-limited contrast response in the visual system.
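The paradigm described above — randomly jittering the contrast of individual spatial-frequency bands and relating the jitter to the observer's choices — can be sketched in a toy simulation. The band count, the Gaussian weighting profile standing in for the CSF, and the simulated observer's noise level below are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_bands = 5000, 8                    # illustrative values
# Hypothetical sensitivity profile peaking at a mid-frequency band (index 3).
true_weights = np.exp(-0.5 * ((np.arange(n_bands) - 3) / 1.5) ** 2)

# Each trial shows two stimuli whose per-band contrasts are randomly jittered.
jitter_a = rng.normal(0.0, 1.0, (n_trials, n_bands))
jitter_b = rng.normal(0.0, 1.0, (n_trials, n_bands))

# Simulated observer: chooses the stimulus with higher weighted contrast, plus
# internal decision noise.
chose_a = (jitter_a - jitter_b) @ true_weights + rng.normal(0.0, 1.4, n_trials) > 0

# Classification image: mean jitter difference, sorted by the observer's choice.
diff = jitter_a - jitter_b
ci = diff[chose_a].mean(axis=0) - diff[~chose_a].mean(axis=0)
# ci recovers the shape of true_weights: bands near the peak dominate judgments.
```

The same sorting-and-averaging logic generalizes to any stimulus dimension that can be perturbed independently across trials.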
|
46
|
Ethier-Majcher C, Joubert S, Gosselin F. Reverse correlating trustworthy faces in young and older adults. Front Psychol 2013; 4:592. [PMID: 24046755 PMCID: PMC3763214 DOI: 10.3389/fpsyg.2013.00592] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2013] [Accepted: 08/15/2013] [Indexed: 11/13/2022] Open
Abstract
Little is known about how older persons determine whether someone deserves their trust based on their facial appearance, a process referred to as “facial trustworthiness.” In the past few years, Todorov and colleagues have argued that, in young adults, trustworthiness judgments are an extension of emotional judgments, and therefore, that trust judgments are made based on a continuum between anger and happiness (Todorov, 2008; Engell et al., 2010). Evidence from the literature on emotion processing suggests that older adults tend to be less efficient than younger adults in the recognition of negative facial expressions (Calder et al., 2003; Firestone et al., 2007; Ruffman et al., 2008; Chaby and Narme, 2009). Based on Todorov's theory and the fact that older adults seem to be less efficient than younger adults in identifying emotional expressions, one could expect that older individuals would have different representations of trustworthy faces and that they would use different cues than younger adults in order to make such judgments. We verified this hypothesis using a variation of Mangini and Biederman's (2004) reverse correlation method in order to test and compare classification images resulting from trustworthiness (in the context of money investment), from happiness, and from anger judgments in two groups of participants: young adults and older healthy adults. Our results show that for elderly participants, both happy and angry representations are correlated with trustworthiness judgments. However, in young adults, trustworthiness judgments are mainly correlated with happiness representations. These results suggest that young and older adults differ in their way of judging trustworthiness.
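The Mangini and Biederman (2004) style of reverse correlation — superimposing random noise fields on a base face and averaging them according to the observer's judgments — can be sketched as follows. The image size, the "diagnostic" pixel locations, and the simulated observer are hypothetical stand-ins chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
size, n_trials = 16, 6000                       # tiny images, for illustration
template = np.zeros((size, size))
template[5, 4] = template[5, 11] = 1.0          # hypothetical diagnostic pixels

# Each trial presents the base face plus an independent Gaussian noise field.
noises = rng.normal(0.0, 1.0, (n_trials, size, size))

# Simulated observer: responds "trustworthy" when the noise happens to align
# with the internal template, up to decision noise.
scores = (noises * template).sum(axis=(1, 2)) + rng.normal(0.0, 2.0, n_trials)
judged_trustworthy = scores > 0

# Classification image: mean noise on "trustworthy" minus "untrustworthy" trials.
ci = noises[judged_trustworthy].mean(axis=0) - noises[~judged_trustworthy].mean(axis=0)
# The diagnostic pixels emerge as the brightest regions of the classification image.
```

Because no feature dimensions are specified in advance, the method reveals whichever regions the observer actually relies on — which is what allows the age-group comparison above.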
|
47
|
Nagai T, Ono Y, Tani Y, Koida K, Kitazaki M, Nakauchi S. Image regions contributing to perceptual translucency: A psychophysical reverse-correlation study. Iperception 2013; 4:407-28. [PMID: 24349699 PMCID: PMC3859557 DOI: 10.1068/i0576] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2012] [Revised: 07/26/2013] [Indexed: 11/21/2022] Open
Abstract
The spatial luminance relationship between shading patterns and specular highlight is suggested to be a cue for perceptual translucency (Motoyoshi, 2010). Although local image features are also important for translucency perception (Fleming & Bulthoff, 2005), they have rarely been investigated. Here, we aimed to extract spatial regions related to translucency perception from computer graphics (CG) images of objects using a psychophysical reverse-correlation method. From many trials in which the observer compared the perceptual translucency of two CG images, we obtained translucency-related patterns showing which image regions were related to perceptual translucency judgments. An analysis of the luminance statistics calculated within these image regions showed that (1) the global rms contrast within an entire CG image was not related to perceptual translucency and (2) the local mean luminance of specific image regions within the CG images correlated well with perceptual translucency. However, the image regions contributing to perceptual translucency differed greatly between observers. These results suggest that perceptual translucency does not rely on global luminance statistics such as global rms contrast, but rather depends on local image features within specific image regions. There may be some “hot spots” effective for perceptual translucency, although which of many hot spots are used in judging translucency may be observer dependent.
|
48
|
Imhoff R, Woelki J, Hanke S, Dotsch R. Warmth and competence in your face! Visual encoding of stereotype content. Front Psychol 2013; 4:386. [PMID: 23825468 PMCID: PMC3695562 DOI: 10.3389/fpsyg.2013.00386] [Citation(s) in RCA: 43] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2013] [Accepted: 06/10/2013] [Indexed: 11/13/2022] Open
Abstract
Previous research suggests that stereotypes about a group's warmth bias our visual representation of group members. Based on the stereotype content model (SCM), the current research explored whether the second major dimension of social perception, competence, is also reflected in visual stereotypes. To test this, participants created typical faces for groups either high in warmth and low in competence (male nursery teachers) or vice versa (managers) in a reverse correlation image classification task, which allows for the visualization of stereotypes without any a priori assumptions about relevant dimensions. In support of the independent encoding of both SCM dimensions, hypothesis-blind raters judged the resulting visualizations of nursery teachers as warmer but less competent than the resulting image for managers, even when statistically controlling for judgments on one dimension. People thus seem to use facial cues indicating both relevant dimensions to make sense of social groups in a parsimonious, non-verbal and spontaneous manner.
|
49
|
Wallis TSA, Bex PJ. Image correlates of crowding in natural scenes. J Vis 2012; 12:6. [PMID: 22798053 PMCID: PMC4503217 DOI: 10.1167/12.7.6] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2012] [Accepted: 05/29/2012] [Indexed: 11/24/2022] Open
Abstract
Visual crowding is the inability to identify visible features when they are surrounded by other structure in the peripheral field. Since natural environments are replete with structure and most of our visual field is peripheral, crowding represents the primary limit on vision in the real world. However, little is known about the characteristics of crowding under natural conditions. Here we examine where crowding occurs in natural images. Observers were required to identify which of four locations contained a patch of "dead leaves" (synthetic, naturalistic contour structure) embedded into natural images. Threshold size for the dead leaves patch scaled with eccentricity in a manner consistent with crowding. Reverse correlation at multiple scales was used to determine local image statistics that correlated with task performance. Stepwise model selection revealed that local RMS contrast and edge density at the site of the dead leaves patch were of primary importance in predicting the occurrence of crowding once patch size and eccentricity had been considered. The absolute magnitudes of the regression weights for RMS contrast at different spatial scales varied in a manner consistent with receptive field sizes measured in striate cortex of primate brains. Our results are consistent with crowding models that are based on spatial averaging of features in the early stages of the visual system, and allow the prediction of where crowding is likely to occur in natural images.
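The core of the analysis described above — correlating candidate local image statistics with trial-by-trial task performance — can be sketched in a minimal simulation. The statistic distributions and effect sizes are made-up assumptions (the actual study used stepwise model selection over statistics at multiple spatial scales):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000                                        # illustrative trial count

# Hypothetical local statistics measured at the target patch on each trial.
rms_contrast = rng.normal(0.5, 0.15, n)
edge_density = rng.normal(0.3, 0.10, n)
mean_lum     = rng.normal(0.5, 0.10, n)         # assumed to carry no effect

# Simulated performance: crowding (hence error rate) increases with local
# contrast and edge density, so the probability of a correct response falls.
logit = 2.0 - 3.0 * rms_contrast - 2.0 * edge_density
correct = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Reverse correlation: how strongly does each statistic predict performance?
stats = {"rms_contrast": rms_contrast, "edge_density": edge_density,
         "mean_lum": mean_lum}
r = {name: np.corrcoef(x, correct)[0, 1] for name, x in stats.items()}
```

The statistics with genuine influence show reliably negative correlations with accuracy, while the irrelevant one hovers near zero; a stepwise regression then formalizes which predictors survive once size and eccentricity are accounted for.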
|
50
|
Nestor A, Vettel JM, Tarr MJ. Internal representations for face detection: an application of noise-based image classification to BOLD responses. Hum Brain Mapp 2012; 34:3101-15. [PMID: 22711230 DOI: 10.1002/hbm.22128] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2011] [Revised: 04/22/2012] [Accepted: 04/23/2012] [Indexed: 11/10/2022] Open
Abstract
What basic visual structures underlie human face detection, and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally- and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face-selective regions in face detection and help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification, in conjunction with fMRI, to help uncover the structure of high-level perceptual representations.
|