76. Michel MM, Jacobs RA. Parameter learning but not structure learning: a Bayesian network model of constraints on early perceptual learning. J Vis 2007; 7:4. PMID: 17461672. DOI: 10.1167/7.1.4.
Abstract
Visual scientists have shown that people are capable of perceptual learning in a large variety of circumstances. Are there constraints on such learning? We propose a new constraint on early perceptual learning: people are capable of parameter learning (they can modify their knowledge of the prior probabilities of scene variables or of the statistical relationships among scene and perceptual variables that are already considered to be potentially dependent), but they are not capable of structure learning (they cannot learn new relationships among variables that are not considered to be potentially dependent, even when placed in novel environments in which these variables are strongly related). These ideas are formalized using the notation of Bayesian networks. We report the results of five experiments that evaluate whether subjects can demonstrate cue acquisition, which means that they can learn that a sensory signal is a cue to a perceptual judgment. In Experiment 1, subjects were placed in a novel environment that resembled natural environments in the sense that it contained systematic relationships among scene and perceptual variables that are normally dependent. In this case, cue acquisition requires parameter learning and, as predicted, subjects succeeded in learning a new cue. In Experiments 2-5, subjects were placed in novel environments that did not resemble natural environments: they contained systematic relationships among scene and perceptual variables that are not normally dependent. Cue acquisition requires structure learning in these cases. Consistent with our hypothesis, subjects failed to learn new cues in Experiments 2-5. Overall, the results suggest that the mechanisms of early perceptual learning are biased such that people can only learn new contingencies between scene and sensory variables that are considered to be potentially dependent.
77. Ivanchenko V, Jacobs RA. Visual learning by cue-dependent and cue-invariant mechanisms. Vision Res 2007; 47:145-56. PMID: 17150239. DOI: 10.1016/j.visres.2006.09.028.
Abstract
We examined learning at multiple levels of the visual system. Subjects were trained and tested on a same/different slant judgment task or a same/different curvature judgment task using simulated planar surfaces or curved surfaces defined by either stereo or monocular (texture and motion) cues. Taken as a whole, the results of four experiments are consistent with the hypothesis that learning takes place at both cue-dependent and cue-invariant levels, and that learning at these levels can have different generalization properties. If so, then cue-invariant mechanisms may mediate the transfer of learning from familiar cue conditions to novel cue conditions, thereby allowing perceptual learning to be robust and efficient. We claim that learning takes place at multiple levels of the visual system, and that a comprehensive understanding of visual perception requires a good understanding of learning at each of these levels.
78.
Abstract
A person learning to control a complex system needs to learn about both the dynamics and the noise of the system. We evaluated human subjects' abilities to learn to control a stochastic dynamic system under different noise conditions. These conditions were created by corrupting the forces applied to the system with noise whose magnitudes were either proportional or inversely proportional to the sizes of subjects' control signals. We also used dynamic programming to calculate the mathematically optimal control laws of an "ideal actor" for each noise condition. The results suggest that people learned control strategies tailored to the specific noise characteristics of their training conditions. In particular, as predicted by the ideal actors, they learned to use smaller control signals when forces were corrupted by proportional noise and to use larger signals when forces were corrupted by inversely proportional noise, thereby achieving levels of performance near the information-theoretic upper bounds. We conclude that subjects learned to behave in a near-optimal manner, meaning that they learned to efficiently use all available information to plan and execute control policies that maximized performances on their tasks.
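The ideal-actor analysis described in this abstract (dynamic programming under signal-dependent noise) can be sketched on a toy problem. The 1-D grid plant, the cost function, and all parameter values below are illustrative assumptions, not the study's actual task or model:

```python
import numpy as np

def ideal_actor(n_states=21, n_actions=9, horizon=10, noise_gain=0.5):
    """Finite-horizon dynamic programming for a toy 1-D plant whose
    transition noise grows with the magnitude of the control signal
    (proportional noise). Returns the optimal action index for each
    (time step, state); the goal is the center state."""
    goal = n_states // 2
    actions = np.arange(n_actions) - n_actions // 2  # signed control signals
    V = np.abs(np.arange(n_states) - goal).astype(float)  # terminal cost
    policy = np.zeros((horizon, n_states), dtype=int)
    for t in reversed(range(horizon)):
        Q = np.zeros((n_states, n_actions))
        for s in range(n_states):
            for a, u in enumerate(actions):
                # Proportional noise: larger |u| spreads the outcome wider.
                spread = int(round(noise_gain * abs(u)))
                outcomes = [min(max(s + u + d, 0), n_states - 1)
                            for d in range(-spread, spread + 1)]
                # Per-step cost penalizes distance from the goal.
                Q[s, a] = abs(s - goal) + np.mean([V[o] for o in outcomes])
        policy[t] = Q.argmin(axis=1)
        V = Q.min(axis=1)
    return policy, actions

policy, actions = ideal_actor()
# At the goal state the optimal control signal is zero.
```

Modeling inversely proportional noise would amount to making `spread` shrink as |u| grows; the same backward recursion then favors larger control signals, mirroring the difference between the two training conditions.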
79.
Abstract
We consider the properties of motor components, also known as synergies, arising from a computational theory (in the sense of Marr, 1982) of optimal motor behavior. An actor's goals were formalized as cost functions, and the optimal control signals minimizing the cost functions were calculated. Optimal synergies were derived from these optimal control signals using a variant of nonnegative matrix factorization. This was done using two different simulated two-joint arms (an arm controlled directly by torques applied at the joints and an arm in which forces were applied by muscles) and two types of motor tasks (reaching tasks and via-point tasks). Studies of the motor synergies reveal several interesting findings. First, optimal motor actions can be generated by summing a small number of scaled and time-shifted motor synergies, indicating that optimal movements can be planned in a low-dimensional space by using optimal motor synergies as motor primitives or building blocks. Second, some optimal synergies are task independent (they arise regardless of the task context), whereas other synergies are task dependent (they arise in the context of one task but not in the contexts of other tasks). Biological organisms use a combination of task-independent and task-dependent synergies. Our work suggests that this may be an efficient combination for generating optimal motor actions from motor primitives. Third, optimal motor actions can be rapidly acquired by learning new linear combinations of optimal motor synergies. This result provides further evidence that optimal motor synergies are useful motor primitives. Fourth, synergies with similar properties arise regardless of whether one uses an arm controlled by torques applied at the joints or an arm controlled by muscles, suggesting that synergies, when considered in "movement space," are more a reflection of task goals and constraints than of fine details of the underlying hardware.
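The factorization step can be illustrated with standard nonnegative matrix factorization using the Lee-Seung multiplicative updates; the paper's variant additionally allows time-shifted synergies, which this sketch omits, and the toy data below are invented for illustration:

```python
import numpy as np

def nmf_synergies(signals, n_synergies, n_iters=2000, seed=0):
    """Factor nonnegative control signals as signals ~= weights @ synergies
    using Lee-Seung multiplicative updates for squared error."""
    rng = np.random.default_rng(seed)
    V = np.asarray(signals, dtype=float)             # (movements, time points)
    W = rng.random((V.shape[0], n_synergies)) + 0.1  # per-movement weights
    H = rng.random((n_synergies, V.shape[1])) + 0.1  # synergy waveforms
    eps = 1e-9  # guards against division by zero
    for _ in range(n_iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy data: three "movements" built from two underlying synergies.
basis = np.array([[1.0, 0.5, 0.0, 0.0],
                  [0.0, 0.0, 0.5, 1.0]])
coeffs = np.array([[1.0, 0.0],
                   [0.0, 2.0],
                   [1.0, 1.0]])
W, H = nmf_synergies(coeffs @ basis, n_synergies=2)
# W @ H approximately reconstructs the original signals.
```

Because the updates preserve nonnegativity, each recovered synergy is an additive building block, which is what makes the low-dimensional "scaled combination of primitives" interpretation possible.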
80. Michel MM, Jacobs RA. The costs of ignoring high-order correlations in populations of model neurons. Neural Comput 2006; 18:660-82. PMID: 16483412. DOI: 10.1162/089976606775623298.
Abstract
Investigators debate the extent to which neural populations use pair-wise and higher-order statistical dependencies among neural responses to represent information about a visual stimulus. To study this issue, three statistical decoders were used to extract the information in the responses of model neurons about the binocular disparities present in simulated pairs of left-eye and right-eye images: (1) the full joint probability decoder considered all possible statistical relations among neural responses as potentially important; (2) the dependence tree decoder also considered all possible relations as potentially important, but it approximated high-order statistical correlations using a computationally tractable procedure; and (3) the independent response decoder assumed that neural responses are statistically independent, meaning that all correlations should be zero and thus can be ignored. Simulation results indicate that high-order correlations among model neuron responses contain significant information about binocular disparities and that the amount of this high-order information increases rapidly as a function of neural population size. Furthermore, the results highlight the potential importance of the dependence tree decoder to neuroscientists as a powerful but still practical way of approximating high-order correlations among neural responses.
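As a minimal illustration of the third decoder, here is an independent-response decoder for binary model neurons; the candidate stimuli, response probabilities, and population size are invented for the example:

```python
import numpy as np

def independent_decoder(responses, likelihoods, prior):
    """Posterior over stimuli assuming conditionally independent binary
    neural responses: p(s | r) is proportional to p(s) * prod_i p(r_i | s).
    likelihoods[s][i] is p(response_i = 1 | stimulus s); correlations
    among neurons are ignored by construction."""
    r = np.asarray(responses, dtype=float)
    log_post = np.log(np.asarray(prior, dtype=float))
    for s, p in enumerate(likelihoods):
        p = np.asarray(p, dtype=float)
        log_post[s] += np.sum(r * np.log(p) + (1.0 - r) * np.log(1.0 - p))
    post = np.exp(log_post - log_post.max())  # stabilize before normalizing
    return post / post.sum()

# Two candidate disparities, three binary model neurons (invented numbers).
post = independent_decoder(
    responses=[1, 1, 0],
    likelihoods=[[0.8, 0.7, 0.2], [0.2, 0.3, 0.6]],
    prior=[0.5, 0.5],
)
# post[0] > post[1]: the response pattern favors the first disparity.
```

The full joint probability decoder would instead tabulate p(r | s) over all 2^n response patterns, which is exactly the computation the dependence tree decoder approximates tractably.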
82.
Abstract
Variations in blur are present in retinal images of scenes containing objects at multiple depth planes. Here we examine whether neural representations of image blur can be recalibrated as a function of depth. Participants were exposed to textured images whose blur changed with depth in a novel manner. For one group of participants, image blur increased as the images moved closer; for the other group, blur increased as the images moved away. A comparison of post-test versus pre-test performances on a blur-matching task at near and far test positions revealed that both groups of participants showed significant experience-dependent recalibration of the relationship between depth and blur. These results demonstrate that blur adaptation is conditioned by 3D viewing contexts.
83.
Abstract
Contrast adaptation that was limited to a small region of the peripheral retina was induced as observers viewed a multiple depth-plane textured surface. The small region undergoing contrast adaptation was present only in one depth-plane to determine whether contrast gain-control is depth-dependent. After adaptation, observers performed a contrast-matching task in both the adapted and a non-adapted depth-plane to measure the magnitude and spatial specificity of contrast adaptation. Results indicated that contrast adaptation was depth-dependent under full-cue (disparity, linear perspective, texture gradient) conditions; there was a highly significant change in contrast gain in the depth-plane of adaptation and no significant gain change in the unadapted depth-plane. A second experiment showed that under some monocular viewing conditions a similar change in contrast gain was present in the adapted depth-plane despite the absence of disparity information for depth. Two control experiments with no-depth displays showed that contrast adaptation can also be texture- and location-dependent, but the magnitude of these effects was significantly smaller than the depth-dependent effect. These results demonstrate that mechanisms of contrast adaptation are conditioned by 3-D and 2-D viewing contexts.
84. Atkins JE, Jacobs RA, Knill DC. Experience-dependent visual cue recalibration based on discrepancies between visual and haptic percepts. Vision Res 2003; 43:2603-13. PMID: 14552802. DOI: 10.1016/s0042-6989(03)00470-x.
Abstract
We studied the hypothesis that observers can recalibrate their visual percepts when visual and haptic (touch) cues are discordant and the haptic information is judged to be reliable. Using a novel visuo-haptic virtual reality environment, we conducted a set of experiments in which subjects interacted with scenes consisting of two fronto-parallel surfaces. Subjects judged the distance between the two surfaces based on two perceptual cues: a visual stereo cue obtained when viewing the scene binocularly and a haptic cue obtained when subjects grasped the two surfaces between their thumb and index fingers. Visual and haptic cues regarding the scene were manipulated independently so that they could either be consistent or inconsistent. Experiment 1 explored the effect of visuo-haptic inconsistencies on depth-from-stereo estimates. Our findings suggest that when stereo and haptic cues are inconsistent, subjects recalibrate their interpretations of the visual stereo cue so that depth-from-stereo percepts are in greater agreement with depth-from-haptic percepts. In Experiment 2 the visuo-haptic discrepancy took a different form when the two surfaces were near the subject than when they were far from the subject. The results indicate that subjects recalibrated their interpretations of the stereo cue in a context-sensitive manner that depended on viewing distance, thereby making them more consistent with depth-from-haptic estimates at all viewing distances. Together these findings suggest that observers' visual and haptic percepts are tightly coupled in the sense that haptic percepts provide a standard to which visual percepts can be recalibrated when the visual percepts are deemed to be erroneous.
85.
Abstract
Bernstein (1967) suggested that people attempting to learn to perform a difficult motor task try to ameliorate the degrees-of-freedom problem through the use of a developmental progression. Early in training, people maintain a subset of their control parameters (e.g., joint positions) at constant settings and attempt to learn to perform the task by varying the values of the remaining parameters. With practice, people refine and improve this early-learned control strategy by also varying those parameters that were initially held constant. We evaluated Bernstein's proposed developmental progression using six neural network systems and found that a network whose training included developmental progressions of both its trajectory and its feedback gains outperformed all other systems. These progressions, however, yielded performance benefits only on motor tasks that were relatively difficult to learn. We conclude that development can indeed aid motor learning.
86. Battaglia PW, Jacobs RA, Aslin RN. Bayesian integration of visual and auditory signals for spatial localization. J Opt Soc Am A Opt Image Sci Vis 2003; 20:1391-1397. PMID: 12868643. DOI: 10.1364/josaa.20.001391.
Abstract
Human observers localize events in the world by using sensory signals from multiple modalities. We evaluated two theories of spatial localization that predict how visual and auditory information are weighted when these signals specify different locations in space. According to one theory (visual capture), the signal that is typically most reliable dominates in a winner-take-all competition, whereas the other theory (maximum-likelihood estimation) proposes that perceptual judgments are based on a weighted average of the sensory signals in proportion to each signal's relative reliability. Our results indicate that both theories are partially correct, in that relative signal reliability significantly altered judgments of spatial location, but these judgments were also characterized by an overall bias to rely on visual over auditory information. These results have important implications for the development of cue integration and for neural plasticity in the adult brain that enables humans to optimally integrate multimodal information.
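The maximum-likelihood estimation theory tested here has a simple closed form: under independent Gaussian noise, the optimal estimate is a reliability-weighted average of the single-cue estimates. A sketch, with hypothetical sigma values and locations:

```python
import numpy as np

def mle_combine(estimates, sigmas):
    """Combine single-cue estimates by weighting each cue with its
    reliability (inverse variance), the maximum-likelihood rule for
    cues corrupted by independent Gaussian noise."""
    estimates = np.asarray(estimates, dtype=float)
    reliabilities = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    weights = reliabilities / reliabilities.sum()
    return float(weights @ estimates), weights

# Hypothetical numbers: a visual cue (sigma = 1 deg) and an auditory cue
# (sigma = 3 deg) that indicate different azimuths.
estimate, weights = mle_combine([10.0, 14.0], [1.0, 3.0])
# Visual weight is 0.9, so the combined estimate is 10.4 deg.
```

The visual-capture alternative corresponds to setting the most reliable cue's weight to 1; the overall visual bias reported above would appear as visual weights consistently larger than this reliability-based prediction.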
87.
Abstract
We compared perceptual learning in 16 psychophysical studies, ranging from low-level spatial frequency and orientation discrimination tasks to high-level object and face-recognition tasks. All studies examined learning over at least four sessions and were carried out foveally or using free fixation. Comparison of learning effects across this wide range of tasks demonstrates that the amount of learning varies widely between different tasks. A variety of factors seems to affect learning, including the number of perceptual dimensions relevant to the task, external noise, familiarity, and task complexity.
88.
Abstract
We consider the hypothesis that systems learning aspects of visual perception may benefit from the use of suitably designed developmental progressions during training. Four models were trained to estimate motion velocities in sequences of visual images. Three of the models were developmental models in the sense that the nature of their visual input changed during the course of training. These models received a relatively impoverished visual input early in training, and the quality of this input improved as training progressed. One model used a coarse-to-multiscale developmental progression (it received coarse-scale motion features early in training and finer-scale features were added to its input as training progressed), another model used a fine-to-multiscale progression, and the third model used a random progression. The final model was nondevelopmental in the sense that the nature of its input remained the same throughout the training period. The simulation results show that the coarse-to-multiscale model performed best. Hypotheses are offered to account for this model's superior performance, and simulation results evaluating these hypotheses are reported. We conclude that suitably designed developmental sequences can be useful to systems learning to estimate motion velocities. The idea that visual development can aid visual learning is a viable hypothesis in need of further study.
89. Dominguez M, Jacobs RA. Developmental constraints aid the acquisition of binocular disparity sensitivities. Neural Comput 2003; 15:161-82. PMID: 12590824. DOI: 10.1162/089976603321043748.
Abstract
This article considers the hypothesis that systems learning aspects of visual perception may benefit from the use of suitably designed developmental progressions during training. We report the results of simulations in which four models were trained to detect binocular disparities in pairs of visual images. Three of the models were developmental models in the sense that the nature of their visual input changed during the course of training. These models received a relatively impoverished visual input early in training, and the quality of this input improved as training progressed. One model used a coarse-scale-to-multiscale developmental progression, another used a fine-scale-to-multiscale progression, and the third used a random progression. The final model was nondevelopmental in the sense that the nature of its input remained the same throughout the training period. The simulation results show that the two developmental models whose progressions were organized by spatial frequency content consistently outperformed the nondevelopmental and random developmental models. We speculate that the superior performance of these two models is due to two important features of their developmental progressions: (1) these models were exposed to visual inputs at a single scale early in training, and (2) the spatial scale of their inputs progressed in an orderly fashion from one scale to a neighboring scale during training. Simulation results consistent with these speculations are presented. We conclude that suitably designed developmental sequences can be useful to systems learning to detect binocular disparities. The idea that visual development can aid visual learning is a viable hypothesis in need of study.
90. Jacobs RA, Jiang W, Tanner MA. Factorial hidden Markov models and the generalized backfitting algorithm. Neural Comput 2002; 14:2415-37. PMID: 12396569. DOI: 10.1162/08997660260293283.
Abstract
Previous researchers developed new learning architectures for sequential data by extending conventional hidden Markov models through the use of distributed state representations. Although exact inference and parameter estimation in these architectures is computationally intractable, Ghahramani and Jordan (1997) showed that approximate inference and parameter estimation in one such architecture, factorial hidden Markov models (FHMMs), is feasible in certain circumstances. However, the learning algorithm proposed by these investigators, based on variational techniques, is difficult to understand and implement and is limited to the study of real-valued data sets. This chapter proposes an alternative method for approximate inference and parameter estimation in FHMMs based on the perspective that FHMMs are a generalization of a well-known class of statistical models known as generalized additive models (GAMs; Hastie & Tibshirani, 1990). Using existing statistical techniques for GAMs as a guide, we have developed the generalized backfitting algorithm. This algorithm computes customized error signals for each hidden Markov chain of an FHMM and then trains each chain one at a time using conventional techniques from the hidden Markov models literature. Relative to previous perspectives on FHMMs, we believe that the viewpoint taken here has a number of advantages. First, it places FHMMs on firm statistical foundations by relating them to a class of models that are well studied in the statistics community, yet it generalizes this class of models in an interesting way. Second, it leads to an understanding of how FHMMs can be applied to many different types of time-series data, including Bernoulli and multinomial data, not just data that are real valued. Finally, it leads to an effective learning procedure for FHMMs that is easier to understand and easier to implement than existing learning procedures. Simulation results suggest that FHMMs trained with the generalized backfitting algorithm are a practical and powerful tool for analyzing sequential data.
91.
Abstract
Visual environments contain many cues to properties of an observed scene. To integrate information provided by multiple cues in an efficient manner, observers must assess the degree to which each cue provides reliable versus unreliable information. Two hypotheses are reviewed regarding how observers estimate cue reliabilities, namely that the estimated reliability of a cue is related to the ambiguity of the cue, and that people use correlations among cues to estimate cue reliabilities. Cue reliabilities are shown to be important both for cue combination and for aspects of visual learning.
92.
Abstract
The integration of information from different sensors, cues, or modalities lies at the very heart of perception. We are studying adaptive phenomena in visual cue integration. To this end, we have designed a visual tracking task, where subjects track a target object among distractors and try to identify the target after an occlusion. Objects are defined by three different attributes (color, shape, size) which change randomly within a single trial. When the attributes differ in their reliability (two change frequently, one is stable), our results show that subjects dynamically adapt their processing. The results are consistent with the hypothesis that subjects rapidly re-weight the information provided by the different cues by emphasizing the information from the stable cue. This effect seems to be automatic, i.e., not requiring subjects' awareness of the differential reliabilities of the cues. The hypothesized re-weighting seems to take place in about 1 s. Our results suggest that cue integration can exhibit adaptive phenomena on a very fast time scale. We propose a probabilistic model with temporal dynamics that accounts for the observed effect.
93. Bradley SR, Pieribone VA, Wang W, Severson CA, Jacobs RA, Richerson GB. Chemosensitive serotonergic neurons are closely associated with large medullary arteries. Nat Neurosci 2002; 5:401-2. PMID: 11967547. DOI: 10.1038/nn848.
Abstract
We have previously shown that serotonergic neurons of the medulla are strongly stimulated by an increase in CO(2), suggesting that they are central respiratory chemoreceptors. Here we used confocal imaging and electron microscopy to show that neurons immunoreactive for tryptophan hydroxylase (TpOH) are tightly apposed to large arteries in the rat medulla. We used patch-clamp recordings from brain slices to confirm that neurons with this anatomical specialization are chemosensitive. Serotonergic neurons are ideally situated for sensing arterial blood CO(2), and may help maintain pH homeostasis via wide-ranging effects on brain function. The results reported here support a recent proposal that sudden infant death syndrome (SIDS) results from a developmental abnormality of medullary serotonergic neurons.
94. Carroll WF, Berger TC, Borrelli FE, Garrity PJ, Jacobs RA, Ledvina J, Lewis JW, McCreedy RL, Smith TP, Tuhovak DR, Weston AF. Characterization of emissions of dioxins and furans from ethylene dichloride, vinyl chloride monomer and polyvinyl chloride facilities in the United States. Consolidated report. Chemosphere 2001; 43:689-700. PMID: 11372854. DOI: 10.1016/s0045-6535(00)00422-7.
Abstract
This is the consolidated report of emissions of PCDD/F from facilities in the organic chemical manufacturing chain leading to polyvinyl chloride. Data have been gathered from facilities in the US and Canada from a number of manufacturers and at various steps in the manufacturing process. Estimates of US emissions or transfers of PCDD/F were generated on an "Upper Bound" and "Most Likely" basis. The Most Likely estimate of US emissions of PCDD/F to the open environment, that is, air, water and land surface by facilities in this chain, based on evaluation of non-detects at one-half the detection limit is about 12 g I-TEQ per year. On this same basis, an estimated 19 g is disposed of in secure landfills.
95.
Abstract
Our goal was to differentiate low and mid level perceptual learning. We used a complex grating discrimination task that required observers to combine information across wide ranges of spatial frequency and orientation. Stimuli were 'wicker'-like textures containing two orthogonal signal components of 3 and 9 c/deg. Observers discriminated a 15% spatial frequency shift in these components. Stimuli also contained four noise components, separated from the signal components by at least 45 degrees of orientation or approximately 2 octaves in spatial frequency. In Experiment 1 naive observers were trained for eight sessions with a four-alternative same-different forced choice judgment with feedback. Observers showed significant learning, thresholds dropped to approximately 1/3 of their original value. In Experiment 2 we found that observers showed far less learning when the noise components were not present. Experiment 3 found, unlike many other studies, almost complete transfer of learning across orientation. The results of Experiments 2 and 3 suggest that, unlike many other perceptual learning studies, most learning in Experiment 1 occurs at mid to high levels of processing rather than within low level analyzers tuned for spatial frequency and orientation. Experiment 4 found that performance was more severely impaired by spatial frequency shifts in noise components of the same spatial frequency or orientation as the signal components (though there was significant variability between observers). This suggests that after training observers based their responses on mechanisms tuned for selective regions of Fourier space. Experiment 5 examined transfer of learning from a same-sign task (the two signal components both increased/decreased in spatial frequency) to an opposite-sign task (signal components shifted in opposite directions in frequency space). Transfer of learning from same-sign to opposite-sign tasks and vice versa was complete suggesting that observers combined information from the two signal components independently.
96. Atkins JE, Fiser J, Jacobs RA. Experience-dependent visual cue integration based on consistencies between visual and haptic percepts. Vision Res 2001; 41:449-61. PMID: 11166048. DOI: 10.1016/s0042-6989(00)00254-6.
Abstract
We study the hypothesis that observers can use haptic percepts as a standard against which the relative reliabilities of visual cues can be judged, and that these reliabilities determine how observers combine depth information provided by these cues. Using a novel visuo-haptic virtual reality environment, subjects viewed and grasped virtual objects. In Experiment 1, subjects were trained under motion relevant conditions, during which haptic and visual motion cues were consistent whereas haptic and visual texture cues were uncorrelated, and texture relevant conditions, during which haptic and texture cues were consistent whereas haptic and motion cues were uncorrelated. Subjects relied more on the motion cue after motion relevant training than after texture relevant training, and more on the texture cue after texture relevant training than after motion relevant training. Experiment 2 studied whether or not subjects could adapt their visual cue combination strategies in a context-dependent manner based on context-dependent consistencies between haptic and visual cues. Subjects successfully learned two cue combination strategies in parallel, and correctly applied each strategy in its appropriate context. Experiment 3, which was similar to Experiment 1 except that it used a more naturalistic experimental task, yielded the same pattern of results as Experiment 1 indicating that the findings do not depend on the precise nature of the experimental task. Overall, the results suggest that observers can involuntarily compare visual and haptic percepts in order to evaluate the relative reliabilities of visual cues, and that these reliabilities determine how cues are combined during three-dimensional visual perception.
|
97
|
|
98
|
Burkiewicz JS, Kostiuk KA, Jacobs RA, Guglielmo BJ. Impact of an intravenous fluconazole restriction policy on patient outcomes. Ann Pharmacother 2001; 35:9-13. [PMID: 11197590 DOI: 10.1345/aph.10161] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022] Open
Abstract
OBJECTIVE To evaluate both the economic and clinical impact of an intravenous fluconazole restriction policy in a university teaching hospital. METHODS Intravenous fluconazole was restricted to patients unable to take oral medications due to significant nausea or to patients whose oral intake was restricted. A retrospective chart review and computerized record review were conducted in patients receiving intravenous or oral fluconazole from January 1 to June 30, 1997, and again from January 1 to June 30, 1998, after implementation of the policy. RESULTS Six-month institutional expenditures for intravenous fluconazole decreased following policy implementation, from $81,900 to $45,400, corresponding to an estimated annual institutional savings of $73,000. A 47% reduction in the number of patients treated with intravenous fluconazole was observed over the six-month period after policy implementation. During this time, the rate of successful clinical outcomes for documented or suspected disseminated Candida albicans infection or febrile neutropenia remained the same (66.6% prepolicy and 65.9% postpolicy; p = 0.95). Similarly, the number of deaths in patients receiving fluconazole remained unchanged (p = 0.31). CONCLUSIONS A restriction policy for intravenous fluconazole results in significant cost savings, with no significant decrease in successful outcomes or change in mortality.
|
99
|
Abstract
Improvements due to perceptual training are often specific to the trained task and do not generalize to similar perceptual tasks. Surprisingly, given this history of highly constrained, context-specific perceptual learning, we found that training on a perceptual task showed significant transfer to a motor task. This result provides evidence for a common neural architecture underlying analysis of sensory input and control of motor output, and suggests a potential role for perception in motor development and rehabilitation.
|
100
|
Guglielmo BJ, Luber AD, Paletta D, Jacobs RA. Ceftriaxone therapy for staphylococcal osteomyelitis: a review. Clin Infect Dis 2000; 30:205-7. [PMID: 10619757 DOI: 10.1086/313620] [Citation(s) in RCA: 40] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022] Open
Abstract
Ceftriaxone, although less active than standard antistaphylococcal agents, is potentially useful in the treatment of osteomyelitis. Thirty-one patients with osteomyelitis due to Staphylococcus aureus were identified, 22 of whom were treated with ceftriaxone and 9 with other agents. Of those patients treated with ceftriaxone, 17 were cured; all treatment failures were associated with chronic osteomyelitis and continued presence of necrotic bone or infected hardware. It is concluded that ceftriaxone is effective in the ambulatory treatment of S. aureus osteomyelitis.
|