102
Dinsmoor's selective observing hypothesis probably cannot account for a preference for unpredictable rewards: DMOD can. Behav Brain Sci 2010. [DOI: 10.1017/s0140525x00023074]
103
Abstract
We present a general framework for analyzing the contribution to reproductive success of a behavioural action. An action may make a direct contribution to reproductive success, but even in the absence of a direct contribution it may make an indirect contribution by changing the animal's state. We consider actions over a period of time, and define a reward function that characterizes the relationship between the animal's state at the end of the period and its future reproductive success. Working back from the end of the period using dynamic programming, the optimal action as a function of state and time can be found. The procedure also yields a measure of the cost, in terms of future reproductive success, of a suboptimal action. These costs provide us with a common currency for comparing activities such as eating and drinking, or eating and hiding from predators. The costs also give an indication of the robustness of the conclusions that can be drawn from a model. We review how our framework can be used to analyze optimal foraging decisions in a stochastic environment. We also discuss the modelling of optimal daily routines and provide an illustration based on singing to attract a mate. We use the model to investigate the features that can produce a dawn song burst in birds. State is defined very broadly so that it includes the information an animal has about its environment. Thus, exploration and learning can be included within the framework.
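The backward-induction procedure this abstract describes can be sketched in a few lines. The following is a minimal illustration, not the authors' model: the state is a hypothetical energy reserve, the two actions and all numeric parameters are invented for the example.

```python
# Backward induction over state x time, in the spirit of the framework above.
# Everything numeric here is an invented assumption for illustration.

X_MAX = 10      # highest energy reserve (assumed state space 0..X_MAX)
T = 50          # number of decision steps in the period (assumed)

def terminal_reward(x):
    """Reward function: future reproductive success given final state."""
    return float(x)  # assume success rises linearly with final reserves

# (predation risk, probability of finding food, energy cost) per action -- invented
ACTIONS = {
    "forage": (0.02, 0.6, 1),
    "rest":   (0.00, 0.0, 1),
}

def expected_value(V_next, x, action):
    """Expected future success of taking `action` in state x, given next-step values."""
    risk, p_food, cost = ACTIONS[action]
    gained = min(x - cost + 2, X_MAX)   # reserve if food is found (+2 units, assumed)
    lost = max(x - cost, 0)             # reserve if no food is found
    return (1.0 - risk) * (p_food * V_next[gained] + (1 - p_food) * V_next[lost])

def solve():
    """Work back from the end of the period; return optimal policy and values at t=0."""
    V = [terminal_reward(x) for x in range(X_MAX + 1)]
    policy = []
    for t in range(T - 1, -1, -1):
        Vt, Pt = [], []
        for x in range(X_MAX + 1):
            if x == 0:                  # dead: no future reproductive success
                Vt.append(0.0); Pt.append(None); continue
            q = {a: expected_value(V, x, a) for a in ACTIONS}
            best = max(q, key=q.get)
            # q[best] - q[other] is the cost of the suboptimal action,
            # in units of future reproductive success (the "common currency").
            Vt.append(q[best]); Pt.append(best)
        V = Vt
        policy.append(Pt)
    return policy[::-1], V
```

The cost of a suboptimal action falls out for free as the difference between the action values at each state, which is the comparison currency the abstract emphasizes.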
109
Abstract
Naming appears to be the source of the explosion in language development and involves the integration of the initially separate listener and speaker responses. This integration has a role in the development of reading, writing, and the following and construction of verbal algorithms that make types of complex human behavior possible. Considerable research has investigated the role of Naming in the emergence of derived relations. Recent research has also investigated the emergence of Naming itself. We describe these experiments and the experiences that function to induce Naming. We also describe evidence about preverbal developmental cusps that are foundational to the emergence of Naming and the evidence on its reinforcement sources. The isolation of the role of the environment in the emergence of Naming identifies stimuli that were said to be missing in accounts that were critical of Skinner's (1957) account of verbal behavior. These arguments purported that the phenomenon was not attributable to learning because of the "poverty of the stimulus." Some of the relevant stimuli now appear to be identified.
Affiliation(s)
- Jennifer Longano
- Correspondence should be addressed to R. Douglas Greer, Box 76, Teachers College, Columbia University, 525 West 120th Street, New York, New York 10027

110
Isaksen J, Holth P. An operant approach to teaching joint attention skills to children with autism. Behavioral Interventions 2009. [DOI: 10.1002/bin.292]
111
Bromberg-Martin ES, Hikosaka O. Midbrain dopamine neurons signal preference for advance information about upcoming rewards. Neuron 2009; 63:119-26. [PMID: 19607797] [DOI: 10.1016/j.neuron.2009.06.009]
Abstract
The desire to know what the future holds is a powerful motivator in everyday life, but it is unknown how this desire is created by neurons in the brain. Here we show that when macaque monkeys are offered a water reward of variable magnitude, they seek advance information about its size. Furthermore, the same midbrain dopamine neurons that signal the expected amount of water also signal the expectation of information, in a manner that is correlated with the strength of the animal's preference. Our data show that single dopamine neurons process both primitive and cognitive rewards, and suggest that current theories of reward-seeking must be revised to include information-seeking.
Affiliation(s)
- Ethan S Bromberg-Martin
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD 20892, USA

112
Raiff BR, Dallery J. The generality of nicotine as a reinforcer enhancer in rats: effects on responding maintained by primary and conditioned reinforcers and resistance to extinction. Psychopharmacology (Berl) 2008; 201:305-14. [PMID: 18695928] [DOI: 10.1007/s00213-008-1282-9]
Abstract
RATIONALE: Nicotine may enhance the reinforcing value of other reinforcers. It is unclear whether nicotine enhances responding maintained by all reinforcers or whether there are limits to this role.
OBJECTIVE: The objective of the study is to test the generality of nicotine-induced increases in reinforced responding by using an observing response procedure, which generated measures of responding maintained by food reinforcers, conditioned reinforcers, and responding during extinction. We also examined whether nicotine increased resistance to extinction and whether nicotine's effects could be characterized as rate-dependent.
MATERIALS AND METHODS: Rats received presession subcutaneous injections of vehicle (n=5), 0.3 (n=6), or 0.56 (n=6) mg/kg nicotine for 70 sessions. Resistance to extinction was also assessed by removing food for five sessions.
RESULTS: Nicotine did not consistently affect food or extinction responding. Both doses of nicotine produced increases in responding maintained by conditioned reinforcers, but did not increase resistance to extinction. Predrug response rates accounted for a small but significant percentage of the variance in the drug effect.
CONCLUSION: Although there was a tendency for nicotine to increase low predrug response rates (i.e., response rates just prior to nicotine administration), 0.3 and 0.56 mg/kg nicotine systematically increased responding maintained by conditioned reinforcers. The results are consistent with a reinforcer-enhancing role of nicotine. However, nicotine did not increase resistance to extinction, nor did it increase food-maintained responses. Nicotine may selectively increase responding maintained by moderately reinforcing stimuli, such as the conditioned reinforcers used in the present study.
Affiliation(s)
- Bethany R Raiff
- Department of Psychology, University of Florida, P.O. Box 112250, Gainesville, FL 32601, USA

113
DeFulio A, Hackenberg TD. Combinations of response-dependent and response-independent schedule-correlated stimulus presentation in an observing procedure. J Exp Anal Behav 2008; 89:299-309. [PMID: 18540216] [DOI: 10.1901/jeab.2008.89-299]
Abstract
Pigeons pecked a response key on a variable-interval (VI) schedule, in which responses produced food every 40 s, on average. These VI periods, or components, alternated in irregular fashion with extinction components in which food was unavailable. Pecks on a second (observing) key briefly produced exteroceptive stimuli (houselight flashes) correlated with the component schedule currently in effect. Across conditions within a phase, the dependency between observing and presentation of the stimuli was decreased systematically while the density of stimulus presentation was held constant. Across phases, the proportion of session time spent in the VI component was adjusted from 0.5 to 0.25, and then to 0.75. Results indicate that rate of observing decreased as the dependency between responses and stimulus presentations was decreased. Further, discriminative control by the schedule-correlated stimuli was systematically weakened as dependency was decreased. Increasing the proportion of session time spent in VI decreased the rate of observing. This effect was additive with the manipulation of the dependency between observing and presentation of the stimuli. Overall, these results show that conditioned reinforcers function similarly to unconditioned reinforcers with respect to response-consequence dependencies, and that stimulus control is enhanced under conditions in which the relevant stimuli are produced by an organism's behavior.
114
Abstract
Three experiments examined the effects of conditioned reinforcement value and primary reinforcement rate on resistance to change using a multiple schedule of observing-response procedures with pigeons. In the absence of observing responses in both components, unsignaled periods of variable-interval (VI) schedule food reinforcement alternated with extinction. Observing responses in both components intermittently produced 15 s of a stimulus associated with the VI schedule (i.e., S+). In the first experiment, a lower-valued conditioned reinforcer and a higher rate of primary reinforcement were arranged in one component by adding response-independent food deliveries uncorrelated with S+. In the second experiment, one component arranged a lower valued conditioned reinforcer but a higher rate of primary reinforcement by increasing the probability of VI schedule periods relative to extinction periods. In the third experiment, the two observing-response components provided similar rates of primary reinforcement but arranged different valued conditioned reinforcers. Across the three experiments, observing-response rates were typically higher in the component associated with the higher valued conditioned reinforcer. Resistance to change was not affected by conditioned reinforcement value, but was an orderly function of the rate of primary reinforcement obtained in the two components. One interpretation of these results is that S+ value does not affect response strength and that S+ deliveries increase response rates through a mechanism other than reinforcement. Alternatively, because resistance to change depends on the discriminative stimulus-reinforcer relation, the failure of S+ value to impact resistance to change could have resulted from a lack of transfer of S+ value to the broader discriminative context.
Affiliation(s)
- Timothy A Shahan
- Department of Psychology, Utah State University, Logan, UT 84322, USA.

115
Broomfield L, McHugh L, Reed P. The effect of observing response procedures on the reduction of over-selectivity in a match to sample task: immediate but not long term benefits. Res Dev Disabil 2008; 29:217-34. [PMID: 17512167] [DOI: 10.1016/j.ridd.2007.04.001]
Abstract
Stimulus over-selectivity occurs when only one of potentially many aspects of the environment comes to control behavior. In three experiments, adult participants with no developmental disabilities were trained and tested in a match-to-sample (MTS) paradigm. Participants in Experiment 1 were assigned to one of two conditions, which differed in whether an observing response procedure was in place. Findings indicated that an MTS procedure can induce over-selectivity in this population if a time delay is included between sample and comparison. Over-selectivity emerged significantly more in the group that did not use an observing response procedure. In Experiments 2 and 3, participants were exposed to a re-test phase, in which the initial stimuli were presented again, but without the use of an observing response in either group. The observing response procedure reduced over-selectivity only while it was in place; performance did not remain high following its withdrawal. This effect was noted regardless of the type of observing response procedure used (pointing versus naming). These findings suggest that an observing response procedure may be effective in reducing over-selectivity; however, these effects do not last beyond the intervention, which may limit the clinical usefulness of the technique.
116
Greer RD, Singer-Dudek J. The emergence of conditioned reinforcement from observation. J Exp Anal Behav 2008; 89:15-29. [PMID: 18338673] [DOI: 10.1901/jeab.2008.89-15]
Abstract
We report an experiment in which observations of peers by six 3-5-year-old participants under specific conditions functioned to convert a small plastic disc or, for one participant, a small piece of string, from a nonreinforcer to a reinforcer. Prior to the observational procedure, we compared each participant's responding on (a) previously acquired performance tasks in which the child received either a preferred food item or the disc (string) for correct responses, and (b) the acquisition of new repertoires in which the disc (string) was the consequence for correct responses. Verbal corrections followed incorrect responses in the latter tasks. The results showed that discs and strings did not reinforce correct responses in the performance tasks, but the food items did; nor did the discs and strings reinforce correct responses in learning new repertoires. We then introduced the peer observation condition in which participants engaged in a different performance task in the presence of a peer who also performed the task. A partition blocked the participants from seeing the peers' performance. However, participants could observe peers receiving discs or strings. Participants did not receive discs or strings regardless of their performance. Peer observation continued until the participants either requested discs or strings repeatedly, or attempted to take discs or strings from the peers. Following the peer observation condition, the same performance and acquisition tasks in which participants had engaged prior to observation were repeated. The results showed that the discs and strings now reinforced correct responding for both performance and acquisition for all participants. We discuss the results with reference to research involving nonhuman subjects that demonstrated the observational conditioning of reinforcers.
Affiliation(s)
- R Douglas Greer
- Box 76, Teachers College, Columbia University, New York, NY 10027, USA

117
Quantitative analyses of observing and attending. Behav Processes 2008; 78:145-57. [PMID: 18304761] [DOI: 10.1016/j.beproc.2008.01.012]
Abstract
We review recent experiments examining whether simple models of the allocation and persistence of operant behavior are applicable to attending. In one series of experiments, observing responses of pigeons were used as an analog of attending. Maintenance of observing is often attributed to the conditioned reinforcing effects of a food-correlated stimulus (i.e., S+), so these experiments also may inform our understanding of conditioned reinforcement. Rates and allocations of observing were governed by rates of food or S+ delivery in a manner consistent with the matching law. Resistance to change of observing was well described by behavioral momentum theory only when rates of primary reinforcement in the context were considered. Rate and value of S+ deliveries did not affect resistance to change. Thus, persistence of attending to stimuli appears to be governed by primary reinforcement rates in the training context rather than conditioned reinforcing effects of the stimuli. An additional implication of these findings is that conditioned "reinforcers" may affect response rates through some mechanism other than response-strengthening. In a second series of experiments, we examined the applicability of the matching law to the allocation of attending to the elements of compound stimuli in a divided-attention task. The generalized matching law described performance well, and sensitivity to relative reinforcement varied with sample duration. The bias and sensitivity terms of the generalized matching law may provide measures of stimulus-driven and goal-driven control of divided attention. Further application of theories of operant behavior to performance on attention tasks may provide insights into what is referred to variously as endogenous, top-down, or goal-directed control of attention.
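The behavioral momentum account invoked in this abstract can be illustrated numerically. The sketch below assumes Nevin's standard form of the model, in which the proportion of baseline responding under disruption follows log(Bx/B0) = -x / r^a; the reinforcement rates and disruptor values are invented for the example.

```python
# Toy illustration of behavioral momentum theory (assumed model form):
# persistence, as proportion of baseline, is 10 ** (-x / r**a), where
# r is the primary reinforcement rate in the training context, x scales
# the disruptor, and a is a sensitivity parameter (~0.5 by convention).
# All concrete numbers here are invented.

def proportion_of_baseline(x, r, a=0.5):
    """Response rate under disruption as a proportion of baseline."""
    return 10 ** (-x / r ** a)

rich, lean = 60.0, 15.0    # reinforcers per hour in two training contexts (assumed)
for x in (0.5, 1.0, 2.0):  # increasing disruptor impact
    print(x, proportion_of_baseline(x, rich), proportion_of_baseline(x, lean))
```

Behavior trained in the richer context declines proportionally less at every disruptor value, which is the sense in which persistence tracks primary reinforcement rate in the training context rather than conditioned-reinforcer value.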
118
Abstract
We review the nature of conditioned reinforcement, including evidence that conditioned reinforcers maintain choice behavior in concurrent schedules and that they elevate responding in the terminal links of concurrent-chains schedules. A question has resurfaced recently: Do theories of choice in concurrent-chains schedules need to include a term reflecting greater preference for higher rates of conditioned reinforcement? The review of several studies addressing this point suggests that such a term is inappropriate. Elevated rates of conditioned reinforcement (and responding) in the terminal links of concurrent-chains schedules do not lead to greater preference in the initial link leading to the higher rate of conditioned reinforcement. If anything, the opposite preference is likely to occur. This result is not surprising, since the additional putative conditioned reinforcers in the terminal link are correlated neither with a reduction in time to primary reinforcement nor with an increase in value.
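The delay-reduction logic underlying this argument can be made concrete. In Fantino's delay-reduction account, the conditioned reinforcing strength of a terminal-link stimulus is the reduction in expected time to primary reinforcement that it signals; the sketch below is a simplified version of that idea, with invented schedule values.

```python
# Simplified delay-reduction sketch (assumed form of Fantino's model):
# the strength of a terminal-link stimulus is T - t, its signaled
# reduction in expected delay to primary reinforcement, and choice is
# predicted to match the relative delay reductions.

def delay_reduction_choice(t_left, t_right, T):
    """Predicted choice proportion for the left alternative.

    T is the mean overall time to primary reinforcement measured from
    initial-link onset; t_left and t_right are mean terminal-link delays.
    """
    dr_left, dr_right = T - t_left, T - t_right
    if dr_left <= 0:    # left signals no delay reduction:
        return 0.0      # exclusive preference for the right alternative
    if dr_right <= 0:
        return 1.0
    return dr_left / (dr_left + dr_right)
```

Equal terminal-link delays predict indifference regardless of how many stimulus changes occur within a terminal link, which is why adding putative conditioned reinforcers that signal no further delay reduction should not (and, per the review, does not) increase initial-link preference.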
Affiliation(s)
- Edmund Fantino
- Department of Psychology-0109, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0109, USA.

119
Fantino E, Gaitan S, Kennelly A, Stolarz-Fantino S. How reinforcer type affects choice in economic games. Behav Processes 2007; 75:107-14. [PMID: 17353099] [DOI: 10.1016/j.beproc.2007.02.001]
Abstract
Behavioral economists stress that experiments on judgment and decision-making using economic games should be played with real money if the results are to have generality. Behavior analysts have sometimes disputed this contention and have reported results in which hypothetical rewards and real money have produced comparable outcomes. We review studies that have compared hypothetical and real money and discuss the results of two relevant experiments. In the first, using the Sharing Game developed in our laboratory, subjects' choices differed markedly depending on whether the rewards were real or hypothetical. In the second, using the Ultimatum and Dictator Games, we again found sharp differences between real and hypothetical rewards. However, this study also showed that time off from a tedious task could serve as a reinforcer every bit as potent as money. In addition to their empirical and theoretical contributions, these studies make the methodological point that meaningful studies may be conducted with economic games without spending money: time off from a tedious task can serve as a powerful reward.
Affiliation(s)
- Edmund Fantino
- Department of Psychology, University of California, San Diego, La Jolla, CA 92093-0109, USA.

120

Affiliation(s)
- William Timberlake
- Department of Psychology, Indiana University, 1101 E. 10th Street, Bloomington, IN 47405-7007, USA.

122
Shahan TA, Jimenez-Gomez C. Effects of self-administered alcohol concentration on the frequency and persistence of rats' attending to alcohol cues. Behav Pharmacol 2006; 17:201-11. [PMID: 16571998] [DOI: 10.1097/00008877-200605000-00001]
Abstract
Observing responses produce contact with stimuli that are to be discriminated and have been considered an animal model of attending. In the observing-response procedure, alternating periods of drug availability versus extinction for one response are not signaled, but a second response (i.e. the observing response) produces stimuli signaling whether drug is available or not. This experiment examined the effects of the concentration of self-administered alcohol and increases in observing-response requirement on rats' observing alcohol stimuli. In addition, the effects of alcohol concentration on the persistence of observing were examined when alcohol was no longer available. Results showed that observing tracked bitonic changes in the number of alcohol deliveries rather than monotonic increases in total alcohol consumption resulting from increases in alcohol concentration. Increasing the observing-response requirement decreased the number of stimulus presentations earned. The resultant decreases in time spent in the presence of the alcohol stimulus were associated with decreases in alcohol consumption. During extinction of alcohol responding, observing was more persistent when it produced a stimulus previously associated with a higher alcohol concentration. Finally, responding for alcohol was more resistant to extinction in the presence of an observing-response-produced alcohol stimulus than in its absence, but did not depend on alcohol concentration. These results suggest that increases in the difficulty of obtaining access to alcohol cues can decrease alcohol consumption by reducing contact with those cues. In addition, if observing behavior in the present procedure is analogous to attending to alcohol cues, the results suggest that attending to alcohol cues is more persistent with cues previously associated with higher doses, and that the persistence of attending to alcohol cues and their impact on drinking may be dissociable.
Affiliation(s)
- Timothy A Shahan
- Department of Psychology, Utah State University, Logan, Utah 84322, USA.

123
Abstract
Attempts to examine the effects of variations in relative conditioned reinforcement rate on choice have been confounded by changes in rates of primary reinforcement or changes in the value of the conditioned reinforcer. To avoid these problems, this experiment used concurrent observing responses to examine sensitivity of choice to relative conditioned reinforcement rate. In the absence of observing responses, unsignaled periods of food delivery on a variable-interval 90-s schedule alternated with extinction on a center key (i.e., a mixed schedule was in effect). Two concurrently available observing responses produced 15-s access to a stimulus differentially associated with the schedule of food delivery (S+). The relative rate of S+ deliveries arranged by independent variable-interval schedules for the two observing responses varied across conditions. The relation between the ratio of observing responses and the ratio of S+ deliveries was well described by the generalized matching law, despite the absence of changes in the rate of food delivery. In addition, the value of the S+ deliveries likely remained constant across conditions because the ratio of S+ to mixed schedule food deliveries remained constant. Assuming that S+ deliveries serve as conditioned reinforcers, these findings are consistent with the functional similarity between primary and conditioned reinforcers suggested by general choice theories based on the concatenated matching law (e.g., contextual choice and hyperbolic value-added models). These findings are inconsistent with delay reduction theory, which has no terms for the effects of rate of conditioned reinforcement in the absence of changes in rate of primary reinforcement.
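The generalized-matching-law description used in this abstract reduces to a straight line in log-log coordinates, log(B1/B2) = a·log(r1/r2) + log b, with sensitivity a and bias b. The following minimal fitting sketch uses ordinary least squares on invented ratio observations, not data from the experiment.

```python
import math

# Fit the generalized matching law, log(B1/B2) = a*log(r1/r2) + log(b),
# by ordinary least squares. The observations below are invented.

def fit_generalized_matching(behavior_ratios, reinforcer_ratios):
    """Return (sensitivity a, bias b) from paired ratio observations."""
    xs = [math.log10(r) for r in reinforcer_ratios]
    ys = [math.log10(b) for b in behavior_ratios]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))      # slope = sensitivity
    log_b = my - a * mx                          # intercept = log bias
    return a, 10 ** log_b

# Perfectly matching (a = 1, b = 1) invented observations:
a, b = fit_generalized_matching([0.25, 1.0, 4.0], [0.25, 1.0, 4.0])
```

Sensitivity a near 1 indicates strict matching of response ratios to conditioned-reinforcer (S+) ratios; a below 1 indicates the undermatching more typically observed.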
Affiliation(s)
- Timothy A Shahan
- Utah State University, Department of Psychology, 2810 Old Main Hill, Logan, Utah 84322, USA.

124
Raiff BR, Dallery J. Effects of acute and chronic nicotine on responses maintained by primary and conditioned reinforcers in rats. Exp Clin Psychopharmacol 2006; 14:296-305. [PMID: 16893272] [DOI: 10.1037/1064-1297.14.3.296]
Abstract
There is growing recognition that nonnicotine factors, such as the sensory stimuli associated with smoking, can play a critical role in the maintenance of cigarette smoking. However, little is known about the effects of nicotine on responding maintained by these stimuli, which are assumed to be conditioned reinforcers. The authors used an animal model to examine the acute and chronic effects of nicotine on responses maintained by food and conditioned reinforcers (i.e., lights) and responses in the absence of programmed consequences (i.e., extinction). During the acute phase, 4 male rats received 5 doses of subcutaneous nicotine. One dose of nicotine was then administered for a minimum of 60 days. Food-maintained and extinction responses did not significantly increase during the acute phase; however, food-maintained responses did increase during the chronic phase. Relative to vehicle, intermediate doses increased responses maintained by conditioned reinforcers during both phases. The results suggest that nicotine enhances responding maintained by conditioned reinforcers and possibly by food.
Affiliation(s)
- Bethany R Raiff
- Department of Psychology, University of Florida, Gainesville, FL 32601, USA.

125
Frank AJ, Wasserman EA. Response rate is not an effective mediator of learned stimulus equivalence in pigeons. Learn Behav 2006; 33:287-95. [PMID: 16396076] [DOI: 10.3758/bf03192858]
Abstract
We explored response rate as a possible mediator of learned stimulus equivalence. Five pigeons were trained to discriminate four clip art pictures presented during a 10-sec discrete-trial fixed-interval (FI) schedule: two paired with a one-pellet reinforcer, which supported a low rate of responding, and two paired with a nine-pellet reinforcer, which supported a high rate of responding. After the subjects associated one stimulus from each of these pairs with a discriminative choice response, we presented two new clip art stimuli during a 10-sec FI: one trained with a differential-reinforcement-of-low-rate (DRL) schedule after the FI and the other with a differential-reinforcement-of-high-rate (DRH) schedule after the FI. Each stimulus withheld during choice training was later shown to see whether the choice responses would transfer to it. The results suggest that response rate alone does not mediate learned stimulus equivalence.
Affiliation(s)
- Andrea J Frank
- University of Iowa, Department of Psychology, E11 Seashore Hall, Iowa City, IA 52242, USA.

126
Abstract
As instances of behavior, words interact with environments. But they also interact with each other and with other kinds of behavior. Because of the interlocking nature of the contingencies into which words enter, their behavioral properties may become increasingly removed from nonverbal contingencies, and their relationship to those contingencies may become distorted by the social contingencies that maintain verbal behavior. Verbal behavior is an exceedingly efficient way in which one organism can change the behavior of another. All other functions of verbal behavior derive from this most basic function, sometimes called verbal governance. Functional verbal antecedents in verbal governance may be extended across time and space when individuals replicate the verbal behavior of others or their own verbal behavior. Differential contact with different verbal antecedents may follow from differential attention to verbal stimuli correlated with consequential events. Once in place, verbal behavior can be shaped by (usually social) consequences. Because these four verbal processes (verbal governance, replication, differential attention, and verbal shaping) share common stimulus and response terms, they produce interlocking contingencies in which extensive classes of behavior come to be dominated by verbal antecedents. Very different consequences follow from verbal behavior depending on whether it is anchored to environmental events, as in scientific verbal practices, or becomes independent of them, as in religious fundamentalism.
127
Shahan TA, Podlesnik CA. Rate of conditioned reinforcement affects observing rate but not resistance to change. J Exp Anal Behav 2005; 84:1-17. [PMID: 16156134] [PMCID: PMC1243893] [DOI: 10.1901/jeab.2005.83-04]
Abstract
The effects of rate of conditioned reinforcement on the resistance to change of operant behavior have not been examined. In addition, the effects of rate of conditioned reinforcement on the rate of observing have not been adequately examined. In two experiments, a multiple schedule of observing-response procedures was used to examine the effects of rate of conditioned reinforcement on observing rates and resistance to change. In a rich component, observing responses produced a higher frequency of stimuli correlated with alternating periods of random-interval schedule primary reinforcement or extinction. In a lean component, observing responses produced similar schedule-correlated stimuli but at a lower frequency. The rate of primary reinforcement in both components was the same. In Experiment 1, a 4:1 ratio of stimulus production was arranged by the rich and lean components. In Experiment 2, the ratio of stimulus production rates was increased to 6:1. In both experiments, observing rates were higher in the rich component than in the lean component. Disruptions in observing produced by presession feeding, extinction of observing responses, and response-independent food deliveries during intercomponent intervals usually were similar in the rich and lean components. When differences in resistance to change did occur, observing tended to be more resistant to change in the lean component. If resistance to change is accepted as a more appropriate measure of response strength than absolute response rates, then the present results provide no evidence that higher rates of stimuli generally considered to function as conditioned reinforcers engender greater response strength.
Affiliation(s)
- Timothy A Shahan
- Utah State University, Department of Psychology, 2810 Old Main Hill, Logan, Utah 84322, USA.
128
Panlilio LV, Weiss SJ. Sensory modality and stimulus control in the pigeon: Cross-species generality of single-incentive selective-association effects. Learning and Motivation 2005. [DOI: 10.1016/j.lmot.2004.11.007] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Indexed: 10/25/2022]
129
Abstract
In Skinner's Reflex Reserve theory, reinforced responses added to a reserve depleted by responding. It could not handle the finding that partial reinforcement generated more responding than continuous reinforcement, but it would have worked if its growth had depended not just on the last response but also on earlier responses preceding a reinforcer, each weighted by delay. In that case, partial reinforcement generates steady states in which reserve decrements produced by responding balance increments produced when reinforcers follow responding. A computer simulation arranged schedules for responses produced with probabilities proportional to reserve size. Each response subtracted a fixed amount from the reserve and added an amount weighted by the reciprocal of the time to the next reinforcer. Simulated cumulative records and quantitative data for extinction, random-ratio, random-interval, and other schedules were consistent with those of real performances, including some effects of history. The model also simulated rapid performance transitions with changed contingencies that did not depend on molar variables or on differential reinforcement of inter-response times. The simulation can be extended to inhomogeneous contingencies by way of continua of reserves arrayed along response and time dimensions, and to concurrent performances and stimulus control by way of different reserves created for different response classes.
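The simulation the abstract describes can be sketched as follows; every numeric value here is a made-up illustration, not Catania's, and the function name is hypothetical:

```python
import random

def simulate_reserve(ri_interval, n_steps=10_000, seed=0,
                     decrement=0.001, weight=0.01, max_reserve=1.0):
    """Toy reserve model: a response occurs with probability equal to
    the current reserve; each response drains a fixed amount, and when
    a reinforcer follows, every response since the previous reinforcer
    adds to the reserve an amount weighted by the reciprocal of its
    delay to that reinforcer."""
    rng = random.Random(seed)
    reserve, pending, responses = 0.5, [], 0
    next_setup = rng.expovariate(1.0 / ri_interval)   # random-interval setup
    for t in range(n_steps):
        if rng.random() < reserve:                    # respond with p = reserve
            responses += 1
            reserve = max(0.0, reserve - decrement)
            pending.append(t)
            if t >= next_setup:                       # response collects reinforcer
                for t_r in pending:                   # increment weighted by 1/delay
                    reserve += weight / (t - t_r + 1)
                reserve = min(reserve, max_reserve)
                pending.clear()
                next_setup = t + rng.expovariate(1.0 / ri_interval)
    return responses
```

With these made-up values the reserve should reach a steady state under frequent reinforcement, where response-produced decrements balance reinforcer-produced increments, and should drain under sparse reinforcement, so a rich schedule sustains more responding than a lean one.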
Affiliation(s)
- A Charles Catania
- Department of Psychology, University of Maryland Baltimore County (UMBC), Baltimore, MD 21250, USA.
130
The two-alternative observing response procedure in rats: Preference for nondiscriminative stimuli and the effect of delay. Learning and Motivation 2004. [DOI: 10.1016/j.lmot.2004.04.004] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.2] [Indexed: 11/21/2022]
131
Abstract
Two experiments are reported in which the ratio of the average times spent in the terminal and initial links (Tt/Ti) in concurrent chains was varied. In Experiment 1, pigeons responded in a three-component procedure in which terminal-link variable-interval schedules were in constant ratio, but their average duration increased across components by a factor of two. The log initial-link response ratio was a negatively accelerated function of Tt/Ti. Overall, the data were well described by Grace's (1994) contextual choice model (CCM) with temporal context represented as (Tt/Ti)k or 2Tt/(Tt + Ti), and by Mazur's (2001) hyperbolic value-added model (HVA), with each model accounting for approximately 93% of the variance. In Experiment 2, fixed-parameter predictions for each model were generated, based on the data from Experiment 1, for conditions in which Tt/Ti was varied over a more extreme range. Data were consistent with the predictions of CCM with temporal context represented as 2Tt/(Tt + Ti) and to a lesser extent as (Tt/Ti)k, but not with HVA. Overall, these results suggest that preference increases as a hyperbolic function of Tt/Ti when terminal-link duration is increased relative to initial-link duration, with the terminal-link schedule ratio held constant.
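The two temporal-context terms the abstract compares slot directly into the log form of CCM's matching equation. The form below is the generalized-matching version written from memory, with illustrative parameter values, so treat it as a sketch rather than Grace's exact equation:

```python
import math

def ccm_log_response_ratio(r1, r2, v1, v2, Tt, Ti,
                           a1=1.0, a2=1.0, k=0.5, log_b=0.0,
                           context="power"):
    """log(B1/B2) = log b + a1*log(R1/R2) + ctx * a2*log(V1/V2),
    where R is initial-link reinforcement rate, V is terminal-link
    value, and ctx is one of the two temporal-context terms the
    abstract compares."""
    if context == "power":
        ctx = (Tt / Ti) ** k          # (Tt/Ti)^k representation
    else:
        ctx = 2 * Tt / (Tt + Ti)      # 2Tt/(Tt + Ti) representation
    return (log_b + a1 * math.log10(r1 / r2)
            + ctx * a2 * math.log10(v1 / v2))
```

Both context terms make the terminal-link value ratio weigh more heavily as Tt/Ti grows; they differ in that (Tt/Ti)^k is unbounded while 2Tt/(Tt + Ti) approaches an asymptote of 2, which is one way to produce the negatively accelerated preference function the data showed.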
Affiliation(s)
- Randolph C Grace
- University of Canterbury, Department of Psychology, Private Bag 4800, Christchurch, New Zealand.
132
Abstract
Observing responses produce contact with discriminative stimuli and have been considered analogous to attending. Many studies have examined the effects of reinforcement rate on the resistance to change of simple operant behavior, but nothing is known about the resistance to change of observing. Two experiments examined the effects of primary reinforcement rate on the resistance to change of observing behavior of pigeons. In Experiment 1, a multiple schedule of observing-response procedures was arranged. In a rich component, observing responses produced stimuli correlated with a high rate of random-interval (RI) reinforcement or extinction. In a lean component, observing responses produced stimuli correlated with a lower rate of RI reinforcement or extinction. In both components, observing responses produced the multiple-schedule stimuli on a fixed-interval 0.75-s schedule. In Experiment 2, a similar procedure was used, but observing in the rich and lean components produced schedule-correlated stimuli on an RI 15-s schedule. Observing in the rich component occurred at a higher rate and was more resistant to disruptions produced by presession feeding and response-independent food deliveries during intercomponent intervals. Despite more frequent observing during unsignaled periods of extinction than unsignaled periods of RI reinforcement, observing during extinction periods was less resistant to change. In addition, replicating the usual result, responding on the food key was generally more resistant to change in the presence of stimuli associated with higher reinforcement rates. These results suggest that quantitative descriptions of resistance to change derived with simple food-maintained responding may be applicable to observing, and perhaps by extension, to attending.
Affiliation(s)
- Timothy A Shahan
- Department of Psychology, Utah State University, Logan 84322, USA.
133
Shahan TA. Stimuli produced by observing responses make rats' ethanol self-administration more resistant to price increases. Psychopharmacology (Berl) 2003; 167:180-6. [PMID: 12632250] [DOI: 10.1007/s00213-003-1390-5] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.3] [Received: 08/21/2002] [Accepted: 12/14/2002] [Indexed: 10/20/2022]
Abstract
RATIONALE: Observing responses bring sensory receptors into contact with environmental stimuli. In the observing-response procedure, periods in which an operant response (e.g. pressing a lever) is reinforced by drug deliveries alternate with periods in which this response is never reinforced (i.e. extinction). These alternating periods of drug availability versus extinction are not signaled. Observing responses (i.e. presses on a second lever) produce brief stimuli signaling whether drug is available or not for responses on the first lever. Little is known about how parameters of the drug reinforcer affect drug-stimulus observing.
OBJECTIVES: The effects of changes in the unit price (responses/reinforcer magnitude) of self-administered ethanol on rats' observing were examined. Also, the effects of an observing-response-produced ethanol stimulus on ethanol consumption were examined by comparing consumption during signaled and unsignaled periods of ethanol availability.
METHODS: Rats self-administered oral ethanol in the observing-response procedure. The unit price of ethanol was increased by raising the response requirement for ethanol across conditions.
RESULTS: Observing and response rates on the ethanol lever increased and then decreased with increases in the unit price of ethanol. However, ethanol-lever responding and ethanol consumption during periods when ethanol was available were less sensitive to increases in price when the observing-response-produced ethanol stimulus was present.
CONCLUSIONS: Observing varies as an orderly function of the unit price of a drug reinforcer, and drug stimuli produced by observing responses can make drug consumption less sensitive to increases in price. This procedure may provide an animal model of both attending to drug stimuli and the resultant effects of these stimuli on drug taking.
Affiliation(s)
- Timothy A Shahan
- Department of Psychology, University of New Hampshire, Durham, NH 03824, USA.
134
Palya WL, Bowers MT. Stimulus control in fixed interfood intervals. Learn Behav 2003; 31:22-34. [PMID: 18450067] [DOI: 10.3758/bf03195968] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Received: 06/17/2002] [Accepted: 09/12/2002] [Indexed: 11/08/2022]
Abstract
The control exerted by various portions of fixed-time and fixed-interval schedules was assessed with a trace-conditioning procedure. The intervals were segmented into 10 bins. In all but 1 of those bins, the stimuli were presented in different random orders on each trial. In 1 bin, the stimulus was the same on each trial. The position of this trace stimulus was varied across phases. The results indicated that a trace stimulus can come to control behavior and that differential control can extend to even the second tenth of an interfood interval. The results were interpreted as indicating that traditional explanations of the rate loss in earlier portions of an interfood interval are inadequate and that models such as Palya's (1993) bipolar model or Miller and Schachtman's (1985) comparator model may provide a principled framework with which to understand within-trial effects.
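The bin arrangement is easy to make concrete. In this sketch the stimulus names and the fixed position are arbitrary choices, not Palya and Bowers's: nine stimuli are reshuffled on every trial while one trace stimulus always occupies the same bin.

```python
import random

def trace_trial_stimuli(n_bins=10, trace_bin=3, trace_stim="T",
                        pool="ABCDEFGHI", seed=None):
    """Return the stimulus sequence for one interfood interval: the
    interval is cut into n_bins bins; all bins but one get the pool
    stimuli in a fresh random order each trial, and the trace stimulus
    sits in the same bin on every trial."""
    rng = random.Random(seed)
    stims = rng.sample(pool, n_bins - 1)   # fresh random order each trial
    stims.insert(trace_bin, trace_stim)    # fixed trace-stimulus position
    return stims
```

Because only the trace bin is predictable across trials, any differential responding it comes to control isolates stimulus control by that element, which is the logic of the probe.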
Affiliation(s)
- William L Palya
- Department of Psychology, Jacksonville State University, Jacksonville, Alabama 36265, USA.
135
Abstract
Four experiments examined the free-operant observing behavior of rats. In Experiment 1, observing was a bitonic function of random-ratio schedule requirements for the primary reinforcer. In Experiment 2, decreases in the magnitude of the primary reinforcer decreased observing. Experiment 3 examined observing when a random-ratio schedule or a yoked random-time schedule of primary reinforcement was in effect across conditions. Removing the response requirement for the primary reinforcer increased observing, suggesting that the effects of the random-ratio schedule in Experiment 1 likely were due to an interaction between observing and responding for the primary reinforcer. In Experiment 4, decreasing the rate of primary reinforcement by increasing the duration of a random-time schedule decreased observing monotonically. Overall, these results suggest that observing decreases with decreases in the rate or magnitude of the primary reinforcer, but that behavior related to the primary reinforcer can affect observing and potentially affect measurement of conditioned reinforcing value.
136
Oliveira-Castro J, Faria J, Dias M, Coelho D. Effects of task complexity on learning to skip steps: an operant analysis. Behav Processes 2002; 59:101. [PMID: 12176178] [DOI: 10.1016/s0376-6357(02)00087-6] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.3] [Indexed: 10/27/2022]
Abstract
In most response sequences auxiliary responses stop occurring as training increases. Auxiliary responses are precurrent responses that increase the likelihood of reinforcement for subsequent responding, are not required by the programmed contingencies, and occur in situations in which transfer of stimulus control is not prevented. For example, when someone is learning to solve arithmetic problems, some steps, such as writing down intermediate calculations, are skipped as training increases. A paired-associates task was used to investigate the decrease of auxiliary responses: participants had to learn the second member (arbitrary characters) of each pair upon being presented with the first member (different shapes), and could look up an auxiliary screen (the auxiliary response) in order to do so. Task complexity was varied by changing the average programmed frequency of reinforcement for individual responses (Experiment 1) and response sequences (Experiment 3), the programmed probability of reinforcement for responses given a position (PPRPos) with a fixed (Experiment 2) or variable number of associated pairs (Experiment 4), and the programmed probability of reinforcement for responses given a shape with a fixed (Experiment 5) or variable (Experiment 6) number of characters per shape. Increases in these variables produced systematic decreases in the duration of auxiliary behavior necessary to learn the task. These results suggest that some aspects of task complexity can be measured based upon the quantification of the programmed contingencies of reinforcement.
Affiliation(s)
- Jorge Oliveira-Castro
- Universidade de Brasília, Instituto de Psicologia, Departamento de Processos Psicológicos Básicos, Campus Universitário Darcy Ribeiro, 70910-900, Brasília, DF, Brazil
137
Abstract
Four rats obtained food pellets by poking a key and 5-s presentations of the discriminative stimuli by pressing a lever. Every 1 or 2 min, the prevailing schedule of reinforcement for key poking alternated between rich (either variable-interval [VI] 30 s or VI 60 s) and lean (either VI 240 s, VI 480 s, or extinction) components. While the key was dark (mixed-schedule stimulus), no exteroceptive stimulus indicated the prevailing schedule. A lever press (i.e., an observing response), however, illuminated the key for 5 s with either a steady light (S+), signaling the rich reinforcement schedule, or a blinking light (S-), signaling the lean reinforcement schedule. One goal was to determine whether rats would engage in selective observing (i.e., a pattern of responding that maintains contact with S+ and decreases contact with S-). Such a pattern was found, in that a 5-s presentation of S+ was followed relatively quickly by another observing response (which likely produced another 5-s period of S+), whereas exposure to S- resulted in extended breaks from observing. Additional conditions demonstrated that the rate of observing remained high when lever presses were effective only when the rich reinforcement schedule was in effect (S+ only condition), but decreased to a low level when lever presses were effective only during the lean reinforcement component (S- only condition) or when lever presses had no effect (in removing the mixed stimulus or presenting the multiple-schedule stimuli). These findings are consistent with relativistic conceptualizations of conditioned reinforcement and extend the generality of selective observing to procedures in which the experimenter controls the duration of stimulus presentations, the schedule components both offer intermittent food reinforcement, and rats serve as subjects.
Affiliation(s)
- Scott T Gaynor
- Western Michigan University and University of North Carolina at Greensboro, USA.
138
Grimes JA, Shull RL. Response-independent milk delivery enhances persistence of pellet-reinforced lever pressing by rats. J Exp Anal Behav 2001; 76:179-94. [PMID: 11599638] [PMCID: PMC1284833] [DOI: 10.1901/jeab.2001.76-179] [Citation(s) in RCA: 44] [Impact Index Per Article: 1.9] [Indexed: 10/25/2022]
Abstract
If, during training, one stimulus is correlated with a higher rate of reinforcement than another, responding will be more resistant to extinction in the presence of that higher rate signal, even if many of the reinforcers have been presented independently of responding. For the present study we asked if the response-independent reinforcers must be the same as the response-dependent reinforcers to enhance the response's persistence. Twelve Long-Evans hooded rats obtained 45-mg food pellets by lever pressing (variable-interval 100-s schedules) in the presence of two discriminative stimuli (blinking vs. steady lights) that alternated every minute during daily sessions. Also, in the presence of one of the stimuli (counterbalanced across rats), the rats received additional response-independent deliveries of sweetened condensed milk (a variable-time schedule). Extinction sessions were exactly like training sessions except that neither pellets nor milk were presented. Lever pressing was more resistant to extinction in the presence of the milk-correlated stimulus when (a) the size of the milk deliveries during training (under a variable-time 30 s schedule) was 0.04 ml (vs. 0.01 ml) and (b) 120-s or 240-s blackouts separated components. Response-independent reinforcers do not have to be the same as the response-dependent reinforcers to enhance persistence.
Affiliation(s)
- J A Grimes
- Department of Psychology, University of North Carolina at Greensboro, 27402-6164, USA.
139
Differential Conditioning Based on a Difference in the Predictability of Reinforcement. Learning and Motivation 2000. [DOI: 10.1006/lmot.2000.1063] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/22/2022]
140
Urcuioli PJ, DeMarse TB, Lionello KM. Sample-duration effects on pigeons' delayed matching as a function of predictability of duration. J Exp Anal Behav 1999; 72:279-97. [PMID: 10605100] [PMCID: PMC1284741] [DOI: 10.1901/jeab.1999.72-279] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.3] [Indexed: 11/22/2022]
Abstract
Three experiments assessed the impact of sample duration on pigeons' delayed matching as a function of whether or not the samples themselves signaled how long they would remain on. When duration was uncorrelated with the sample appearing on each matching trial, the typical effect of duration was observed: Choice accuracy was higher with long (15-s) than with short (5-s) durations. By contrast, this difference either disappeared or reversed when the 5- and 15-s durations were correlated with the sample stimuli. Sample duration itself cued comparison choice by some birds in the latter (predictable) condition when duration was also correlated with the reinforced choice alternatives. However, even when duration could not provide a cue for choice, pigeons matched predictably short-duration samples as accurately as, or more accurately than, predictably long-duration samples. Moreover, this result was observed independently of whether the contextual conditions of the retention interval were the same as, or different from, those of the intertrial interval. These results strongly support the view that conditional stimulus control by the samples is partly a function of their conditioned reinforcing properties, as determined by the relative reduction in overall delay to reinforcement that they signal.
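The "relative reduction in overall delay to reinforcement" in the final sentence is the quantity that delay-reduction accounts compute. A minimal version is sketched below; the normalization is one common choice, not necessarily the authors':

```python
def delay_reduction(T, t_signaled):
    """Relative delay reduction signaled by a stimulus: T is the
    average time to reinforcement from trial onset, t_signaled the
    time remaining once the stimulus appears.  Larger values mean a
    stronger conditioned reinforcer on this account."""
    return (T - t_signaled) / T
```

On this account a sample's conditioned reinforcing strength depends on the reduction it signals relative to the whole trial, not on its raw duration, which is the sense in which short predictable samples can support accuracy as well as long ones.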
Affiliation(s)
- P J Urcuioli
- Department of Psychological Sciences, Purdue University, West Lafayette, Indiana 47907-1364, USA.
141
Observing Behavior in Pigeons: The Effect of Reinforcement Probability and Response Cost Using a Symmetrical Choice Procedure. Learning and Motivation 1999. [DOI: 10.1006/lmot.1999.1030] [Citation(s) in RCA: 38] [Impact Index Per Article: 1.5] [Indexed: 11/22/2022]
142
Clement TS, Weaver JE, Sherburne LM, Zentall TR. Simultaneous discrimination learning in pigeons: value of S- affects the relative value of its associated S+. The Quarterly Journal of Experimental Psychology B: Comparative and Physiological Psychology 1998; 51:363-78. [PMID: 9854439] [DOI: 10.1080/713932684] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.5] [Indexed: 10/21/2022]
Abstract
In a simple simultaneous discrimination involving a positive stimulus (S+) and a negative stimulus (S-), it has been hypothesized that positive value can transfer from the S+ to the S- (thus increasing the relative value of the S-) and also that negative value can transfer from the S- to the S+ (thus diminishing the relative value of the S+; Fersen, Wynne, Delius, & Staddon, 1991). Evidence for positive value transfer has been reported in pigeons (e.g. Zentall & Sherburne, 1994). The purpose of the present experiments was to determine, in a simultaneous discrimination, whether the S- diminishes the value of the S+ or the S- is contrasted with the S+ (thus enhancing the value of the S+). In two experiments, we found evidence for contrast, rather than value transfer, attributable to simultaneous discrimination training. Thus, not only does the S+ appear to enhance the value of the S-, but the S- appears to enhance rather than reduce the value of the S+.
Affiliation(s)
- T S Clement
- Department of Psychology, University of Kentucky, Lexington 40506-0044, USA
143
Pigeons' Observing Behavior and Response-Independent Food Presentations. Learning and Motivation 1998. [DOI: 10.1006/lmot.1998.1002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/22/2022]
144
Schaal DW, Shahan TA, Kovera CA, Reilly MP. Mechanisms underlying the effects of unsignaled delayed reinforcement on key pecking of pigeons under variable-interval schedules. J Exp Anal Behav 1998; 69:103-22. [PMID: 9540229] [PMCID: PMC1284652] [DOI: 10.1901/jeab.1998.69-103] [Citation(s) in RCA: 20] [Impact Index Per Article: 0.8] [Indexed: 11/22/2022]
Abstract
Three experiments were conducted to test an interpretation of the response-rate-reducing effects of unsignaled nonresetting delays to reinforcement in pigeons. According to this interpretation, rates of key pecking decrease under these conditions because key pecks alternate with hopper-observing behavior. In Experiment 1, 4 pigeons pecked a food key that raised the hopper provided that pecks on a different variable-interval-schedule key met the requirements of a variable-interval 60-s schedule. The stimuli associated with the availability of the hopper (i.e., houselight and keylight off, food key illuminated, feedback following food-key pecks) were gradually removed across phases while the dependent relation between hopper availability and variable-interval-schedule key pecks was maintained. Rates of pecking the variable-interval-schedule key decreased to low levels and rates of food-key pecks increased when variable-interval-schedule key pecks did not produce hopper-correlated stimuli. In Experiment 2, pigeons initially pecked a single key under a variable-interval 60-s schedule. Then the dependent relation between hopper presentation and key pecks was eliminated by arranging a variable-time 60-s schedule. When rates of pecking had decreased to low levels, conditions were changed so that pecks during the final 5 s of each interval changed the keylight color from green to amber. When pecking produced these hopper-correlated stimuli, pecking occurred at high rates, despite the absence of a peck-food dependency. When peck-produced changes in keylight color were uncorrelated with food, rates of pecking fell to low levels. In Experiment 3, details (obtained delays, interresponse-time distributions, eating times) of the transition from high to low response rates produced by the introduction of a 3-s unsignaled delay were tracked from session to session in 3 pigeons that had been initially trained to peck under a conventional variable-interval 60-s schedule. 
Decreases in response rates soon after the transition to delayed reinforcement were accompanied by decreases in eating times and alterations in interresponse-time distributions. As response rates decreased and became stable, eating times increased and their variability decreased. These findings support an interpretation of the effects of delayed reinforcement that emphasizes the importance of hopper-observing behavior.
Affiliation(s)
- D W Schaal
- Department of Psychology, West Virginia University, Morgantown 26506-6040, USA
145
Abstract
Stimulus control was evaluated in 3 individuals with moderate to severe mental retardation by delayed identity matching-to-sample procedures that presented either one or two discrete forms as sample stimuli on each trial. On pretests, accuracy scores on one-sample trials were uniformly high. On two-sample trials, the correct stimulus (i.e., the one that subsequently appeared in the comparison array) varied unpredictably, and accuracy scores were substantially lower, suggesting that both sample stimuli did not exert stimulus control on every trial. Subjects were then given training sessions with the one-sample task and with a new set of four stimuli. For two of the stimuli, correct matching responses were followed by reinforcers on a variable-ratio schedule that led to a high reinforcer rate. For the other two stimuli, correct responses were followed by reinforcers on a variable-ratio schedule that led to a substantially lower reinforcer rate. Results on two-sample tests that followed showed that (a) on trials in which comparison arrays consisted of one high reinforcer-rate and one low reinforcer-rate stimulus, subjects most often selected the high-rate stimulus; and (b) on trials in which the comparison arrays were either two high reinforcer-rate stimuli or two low reinforcer-rate stimuli and the samples were one high reinforcer- and one low reinforcer-rate stimulus, accuracy was higher on trials with the high-rate comparisons. These results indicate that the frequency of stimulus control by high reinforcer-rate samples was greater than that by low reinforcer-rate samples. Following more training with the one-sample task and reversed reinforcement schedules for all stimuli, the differences in stimulus control frequencies on two-sample tests also reversed. These results demonstrate experimental control by reinforcement contingencies of which of two sample stimuli controlled selections in the two-sample task. 
The procedures and results may prove to be relevant for understanding restricted stimulus control and stimulus overselectivity.
Affiliation(s)
- W V Dube
- Behavioral Sciences Division, E. K. Shriver Center, Waltham, Massachusetts 02254, USA.
146
Abstract
Some years ago Underwood (1964) grappled with the problem of explaining his finding that rate of forgetting was not a function of the rate of learning but rather seemed to reflect the level of learning achieved. He likened different rates of learning to filling an Erlenmeyer flask of water at different rates and the process of forgetting to the rate of evaporation, which in turn is a function of the exposed surface area. Since an Erlenmeyer flask is cone-shaped, the surface area becomes smaller as the flask is filled, thus the greater the amount of learning achieved, or water added, the less the rate of evaporation independent of how quickly or slowly the flask was filled. I give this example because it is such a clear description of history kept simple, in the psychological process of learning and forgetting. Indeed it is as simple as Charles Dickens' description of how students are to be taught, that is, by considering them to be "little vessels...ready to have imperial gallons of facts poured into them until they were full to the brim" (Dickens, 1961, p. 12). The object of this paper is to show how our neglect in specifying the history of reinforcement and other behavior analytic concepts has resulted in our ceding much of our field to cognitive psychologists even though our knowledge of conditioning enables us to study it more thoroughly than they can.
Affiliation(s)
- K Salzinger
- Department of Psychology, Hofstra University, Hempstead, NY 11550, USA
147
Abstract
A discrete-trials adjusting-delay procedure was used to investigate the conditions under which pigeons might show a preference for partial reinforcement over 100% reinforcement, an effect reported in a number of previous experiments. A peck on a red key always led to a delay with red houselights and then food. In each condition, the duration of the red-houselight delay was adjusted to estimate an indifference point. In 100% reinforcement conditions, a peck on a green key always led to a delay with green houselights and then food. In partial-reinforcement conditions, a peck on the green key led either to the green houselights and food or to white houselights and no food. In some phases of the experiment, statistically significant preference for partial reinforcement over 100% reinforcement was found, but this effect was observed in only about half of the pigeons. The effect was largely eliminated when variability in the delay stimulus colors was equated for 50% reinforcement conditions and 100% reinforcement conditions. Idiosyncratic preferences for certain colors or for stimulus variability may be at least partially responsible for the effect.
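The adjusting-delay titration can be sketched as follows. The decision rule, step size, trial count, and the hyperbolic value function in the example (V = A/(1 + kD), a form Mazur has used elsewhere) are all illustrative assumptions:

```python
def titrate_adjusting_delay(fixed_value, adj_value_at,
                            start=10.0, step=1.0, n_trials=500):
    """Discrete-trials adjusting-delay sketch: choosing the adjusting
    alternative lengthens its delay, choosing the standard shortens
    it, so the delay converges on an indifference point.  A real
    subject chooses probabilistically; this rule is deterministic."""
    delay = start
    for _ in range(n_trials):
        if adj_value_at(delay) > fixed_value:
            delay += step                    # adjusting side preferred
        else:
            delay = max(0.0, delay - step)   # standard side preferred
    return delay

# Hypothetical example with hyperbolic discounting, V = A / (1 + k*D):
value = lambda amount, delay: amount / (1 + 0.1 * delay)
indifference = titrate_adjusting_delay(fixed_value=value(10, 8),
                                       adj_value_at=lambda d: value(20, d))
```

With these numbers the titration settles near the delay at which the larger-later amount matches the smaller amount delayed 8 s, oscillating within one step of the analytic indifference point.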
Affiliation(s)
- J E Mazur
- Psychology Department, Southern Connecticut State University, New Haven, Connecticut 06515, USA
148
Differential latency and selective nondisclosure in verbal self-reports. Anal Verbal Behav 1996; 13:49-63. [PMID: 22477110] [DOI: 10.1007/bf03392906] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Indexed: 10/19/2022]
Abstract
Several previous studies have examined the correspondence between self-reports and the delayed identity match-to-sample performance they supposedly described. The present two experiments used similar procedures to explore different characteristics of the self-reports. In both studies, match-to-sample responses were successful (earned points) if they were both correct and faster than a time limit. Following each response, a computer-presented query asked whether the response had been successful, and subjects replied by pressing a "Yes" or "No" button. Experiment 1 analyzed self-report latencies from a previously published study (Critchfield, 1993a). Latencies generally were longer for self-reports of failure than for self-reports of success. In Experiment 1, a "Yes" or "No" self-report was required to advance the session. In Experiment 2, self-reports were optional. In addition to "Yes" and "No" buttons, subjects could press a third button (a "nondisclosure" option) to remove the self-report query without providing a "Yes" or "No" answer. Across a range of conditions, nondisclosures always occurred more frequently after match-to-sample failures than after successes (i.e., under conditions in which a self-report of failure would be appropriate). The effects observed in the two experiments are consistent with a history of differential punishment for uncomplimentary self-reports, which casual observation and some descriptive studies suggest is a common experience in United States culture. The research necessary to explore this notion should produce data that are of interest to psychologists both within and outside of Behavior Analysis.
149
150
Abstract
Rats were trained to press a lever in the presence of a tone-light compound stimulus and not to press in its absence. In each of two experiments, schedules were designed to make the compound a conditioned punisher for one group and a conditioned reinforcer for the other. In Experiment 1, one group's responding produced food in the presence of the compound but not in its absence. The other group's responding terminated the compound stimulus, and food was presented only in its absence. When tone and light were later presented separately, light controlled more responding than did tone in the former group, but tone gained substantial control in the latter. The same effects were observed within subjects when the training schedules were switched between groups. In Experiment 2, two groups avoided shock in the presence of the compound stimulus. In the absence of the compound, one group was not shocked, and the other received both response-independent and response-produced shock. When tone and light were presented separately, the former group's responding was mainly controlled by tone, and the latter group's responding was almost exclusively controlled by light. These effects, too, were observed within subjects when the training schedules were switched between groups. Thus, these single-incentive selective association effects (appetitive in Experiment 1 and aversive in Experiment 2) were completely reversible. The schedules in which the compound should have been a conditioned reinforcer consistently produced visual control, and auditory control increased when the compound should have become a conditioned punisher. Currently accepted accounts of selective associations based on affinities between shock and auditory stimuli and between food and visual stimuli (i.e., stimulus-reinforcer interactions) do not adequately address these results. The contingencies of reinforcement most recently associated with the compound and with its absence, rather than the nature of the reinforcer, determined whether auditory or visual stimulus control developed.
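The four training schedules and their reported outcomes can be summarized compactly. The sketch below is my paraphrase of the conditions described above, not the authors' notation; it simply checks that the reported results fit the stated pattern (conditioned-reinforcer schedules yield visual/light control, conditioned-punisher schedules yield auditory/tone control).

```python
# Each tuple: (condition summary, compound's hypothesized function,
#              element reported to gain stimulus control)
conditions = [
    ("Exp 1: responding produced food in the compound",
     "reinforcer", "light"),
    ("Exp 1: responding terminated the compound; food only in its absence",
     "punisher", "tone"),
    ("Exp 2: shock avoidable in the compound; none in its absence",
     "punisher", "tone"),
    ("Exp 2: additional shock delivered in the compound's absence",
     "reinforcer", "light"),
]

# The paper's stated pattern: the compound's recent function, not the
# reinforcer type, determines which element gains control.
predicted = {"reinforcer": "light", "punisher": "tone"}

consistent = all(predicted[function] == observed
                 for _, function, observed in conditions)
print(consistent)
```

Encoding the design this way makes the paper's point explicit: the same mapping from compound function to controlling modality covers both the appetitive and the aversive experiment.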
Affiliation(s)
- L V Panlilio
- Department of Psychology, American University, Washington, DC 20016