1
Redner R, Kestner KM, Lotfizadeh A, Poling A. Punishment-induced resurgence. Behav Processes 2024; 220:105058. [PMID: 38834108] [DOI: 10.1016/j.beproc.2024.105058]
Abstract
The phenomenon of extinction-induced resurgence is well established, but there is comparatively little experimental evidence for punishment-induced resurgence, which can be tested by delivering contingent shocks following the alternative response. The purpose of Experiment 1 was to test whether low-intensity shocks that do not decrease the rate of reinforcement result in resurgence. Four rats served as subjects and were exposed to three sequential conditions: (a) variable-interval (VI) 30-s food delivery for a lever press (target response); (b) VI 30-s food delivery for a nose poke (alternative response) and extinction of the lever press; (c) VI 30-s reinforcement for a nose poke with superimposed VI 60-s shock delivery. In the final condition, shock intensity increased gradually from 0.1 to 0.5 mA. Experiment 2 evaluated whether an abrupt introduction of a high-intensity shock would result in resurgence. Three rats served as subjects and were exposed to three sequential conditions: (a) VI 30-s food delivery for a lever press; (b) VI 30-s food delivery for a nose poke and extinction of the lever press; (c) continued VI 30-s reinforcement for a nose poke with superimposed VI 60-s 0.6-mA shock delivery. Resurgence was observed in all subjects, including in situations in which the rate of responding, but not the rate of reinforcement, decreased. The present study provides additional evidence for punishment-induced resurgence, but future studies are warranted to determine the extent to which punishment can produce resurgence with or without decreases in rates of reinforcement.
Affiliation(s)
- Ryan Redner
- School of Psychological and Behavioral Sciences, Southern Illinois University, Carbondale, USA.
- Amin Lotfizadeh
- Department of Psychology, California State University Northridge, USA
- Alan Poling
- Department of Psychology, Western Michigan University, USA
2
Macías A, Machado A, Vasconcelos M. On the value of advanced information about delayed rewards. Anim Cogn 2024; 27:10. [PMID: 38429396] [PMCID: PMC10907439] [DOI: 10.1007/s10071-024-01856-8]
Abstract
In a variety of laboratory preparations, several animal species prefer signaled over unsignaled outcomes. Here we examine whether pigeons prefer options that signal the delay to reward over options that do not, and how this preference changes with the ratio of the delays. We offered pigeons repeated choices between two alternatives leading to a short or a long delay to reward. For one alternative (informative), the short and long delays were reliably signaled by different stimuli (e.g., SS for short delays, SL for long delays). For the other (non-informative), the delays were not reliably signaled by the stimuli presented (S1 and S2). Across conditions, we varied the durations of the short and long delays, hence their ratio, while keeping the average delay to reward constant. Pigeons preferred the informative over the non-informative option, and this preference became stronger as the ratio of the long to the short delay increased. A modified version of the Δ-Σ hypothesis (González et al., 2020a, J Exp Anal Behav 113(3):591-608, https://doi.org/10.1002/jeab.595), incorporating a contrast-like process between the immediacies to reward signaled by each stimulus, accounted well for our findings. Functionally, we argue that a preference for signaled delays hinges on the potential instrumental advantage typically conveyed by information.
Affiliation(s)
- Alejandro Macías
- William James Center for Research, University of Aveiro, Aveiro, Portugal.
- Animal Learning and Behavior Lab, School of Psychology, University of Minho, Campus de Gualtar, 4710-057, Braga, Portugal.
- Armando Machado
- William James Center for Research, University of Aveiro, Aveiro, Portugal
- Marco Vasconcelos
- William James Center for Research, University of Aveiro, Aveiro, Portugal
3
Morris SL, Vollmer TR, Dallery J. An evaluation of methods for studying the effects of conditioned reinforcement on human choice. J Exp Anal Behav 2023; 119:476-487. [PMID: 36726294] [DOI: 10.1002/jeab.833]
Abstract
Shahan et al. (2006) found that the relative rate of pigeons' pecking on two observing responses (i.e., responses that only produced an S+, a stimulus correlated with primary reinforcement) was well described by the relative rate of S+ delivery. Researchers have not evaluated the effects of S+ delivery rate in a concurrent observing-response procedure with human subjects, so the procedural modifications necessary for studying the effects of conditioned reinforcement on human choice remain unclear. The purpose of the current study was to conduct an additive component analysis of modifications to the procedures of Shahan et al. (2006). We evaluated the additive effects of introducing response cost, a changeover response, and ordinal discriminative stimuli on correspondence with the results of Shahan et al. and on the quality of fits of the generalized matching equation. When our procedures were most similar to those of Shahan et al., we observed low rates of observing and indifference between the two observing responses. For the group of subjects with whom all three additive components were included, we obtained the highest level of sensitivity to the relative rate of S+ delivery, but the slope and R² of our fits of the generalized matching equation were still much lower than those obtained by Shahan et al. Potential reasons for these discrepancies, methods of resolving them, and implications for future research are discussed.
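The generalized matching equation referenced in this abstract is conventionally written in logarithmic form, log(B1/B2) = a·log(R1/R2) + log b, where B1/B2 is the ratio of responses on the two alternatives, R1/R2 the ratio of (conditioned) reinforcer deliveries, a the sensitivity (slope), and log b the bias. A minimal sketch of fitting it by ordinary least squares (the data values below are invented for illustration, not taken from the study):

```python
import math

def fit_generalized_matching(behavior_ratios, reinforcer_ratios):
    """Fit log(B1/B2) = a*log(R1/R2) + log(b) by ordinary least squares.

    Returns (a, log_b, r_squared): sensitivity, log bias, variance explained.
    """
    x = [math.log10(r) for r in reinforcer_ratios]
    y = [math.log10(b) for b in behavior_ratios]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx                       # slope: sensitivity to reinforcer ratio
    log_b = my - a * mx                 # intercept: log bias
    ss_res = sum((yi - (a * xi + log_b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r_squared = 1 - ss_res / ss_tot
    return a, log_b, r_squared

# Hypothetical S+ delivery ratios and response ratios showing undermatching
# (slope well below 1, as in the low-sensitivity fits the abstract describes).
R = [0.25, 0.5, 1.0, 2.0, 4.0]
B = [0.5, 0.7, 1.0, 1.4, 2.0]
a, log_b, r2 = fit_generalized_matching(B, R)
```

A slope near 1 with negligible bias would indicate strict matching to relative S+ delivery; the invented data above yield a slope near 0.5, i.e., the kind of undermatching the authors contrast with Shahan et al.'s pigeon results.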
Affiliation(s)
- Samuel L Morris
- Department of Psychology, Louisiana State University, Baton Rouge, LA, United States
- Timothy R Vollmer
- Department of Psychology, University of Florida, Gainesville, FL, United States
- Jesse Dallery
- Department of Psychology, University of Florida, Gainesville, FL, United States
4
Odell AJ, Greer BD, Fuhrman AM, Hardee AM. On the Efficacy of and Preference for Signaling Extinction in a Multiple Schedule. Behavioral Development 2021; 26:43-61. [PMID: 34745411] [DOI: 10.1037/bdb0000104]
Abstract
Previous basic research has shown that signaling the extinction component of a compound schedule can be aversive and nonpreferred. However, such discriminative stimuli are common when thinning schedules of reinforcement in practice, and they provide several advantages to clinicians. A limitation of previous applied studies on different arrangements of discriminative stimuli is that researchers have used identical stimuli to signal the availability of reinforcement across conditions that do and do not signal extinction, often doubling exposure to the stimulus signaling the availability of reinforcement. The present experiments corrected this limitation by comparing multiple-schedule arrangements that do and do not signal extinction when unique stimuli signal each component across conditions. Results from three participants indicated that both multiple-schedule arrangements were similarly efficacious when teaching the successive discrimination. However, response patterns differed when testing under a concurrent-operants arrangement, suggesting different patterns of preference across various multiple-schedule arrangements.
Affiliation(s)
- Alicia J Odell
- University of Nebraska Medical Center's Munroe-Meyer Institute
- Brian D Greer
- University of Nebraska Medical Center's Munroe-Meyer Institute
5
Gomes-Ng S, Elliffe D, Cowie S. Environment tracking and signal following in a reinforcer-ratio reversal procedure. Behav Processes 2018; 157:208-224. [PMID: 30315866] [DOI: 10.1016/j.beproc.2018.10.001]
Abstract
Several studies suggest that the degree of control by reinforcer ratios (environment tracking) and by exteroceptive stimuli that signal future reinforcer availability (signal following) depends on environmental certainty: As reinforcers become more likely at one location, environmental contingencies exert stronger control and exteroceptive stimuli exert weaker control. This research has not yet been extended to environments in which reinforcer availability changes across time, even though such changes are present in most natural environments. Thus, in the present experiment, we examined environment tracking and signal following in a concurrent schedule in which the reinforcer ratio reversed to its reciprocal 30 s after a reinforcer delivery and keylight-color stimuli signaled the likely or definite time or location of the next reinforcer. Across conditions, we manipulated environmental certainty by varying the probability of reinforcer deliveries on the locally richer key. This made the location of future reinforcers at a particular time more or less certain, but did not change the overall reinforcer ratio. Changes in local environmental certainty had little to no effect on environment tracking and signal following; in all conditions, keylight-color stimuli strongly controlled choice and reinforcer ratios exerted weak control. The present findings suggest that the extent of environment tracking and signal following is primarily determined by global, not local, environmental certainty.
Affiliation(s)
- Douglas Elliffe
- School of Psychology, The University of Auckland, New Zealand
- Sarah Cowie
- School of Psychology, The University of Auckland, New Zealand
6
Bland VJ, Cowie S, Elliffe D, Podlesnik CA. Does a negative discriminative stimulus function as a punishing consequence? J Exp Anal Behav 2018; 110:87-104. [PMID: 29926923] [DOI: 10.1002/jeab.444]
Abstract
The study and use of punishment in behavioral treatments has been constrained by ethical concerns. However, there remains a need to reduce harmful behavior that differential-reinforcement procedures fail to reduce. We investigated whether response-contingent presentation of a negative discriminative stimulus previously correlated with an absence of reinforcers would punish behavior maintained by positive reinforcers. Across four conditions, pigeons were trained to discriminate between a positive discriminative stimulus (S+) signaling the presence of food and a negative discriminative stimulus (S-) signaling the absence of food. Once the discrimination was learned, every five responses on average to the S+ produced the S- for a duration of 1.5 s. S+ response rate decreased for a majority of pigeons when responses produced the S-, compared to when they did not or when a neutral control stimulus was presented. In Condition 5, choice between two concurrently presented S+ alternatives shifted away from the alternative producing the S-, despite a 1:1 reinforcer ratio. Therefore, presenting contingent S- stimuli punishes operant behavior maintained on simple schedules and in choice situations. Developing negative discriminative stimuli as punishers of operant behavior could provide an effective approach to behavioral treatments for problem behavior and to subverting the suboptimal choices involved in addictions.
Affiliation(s)
- Christopher A Podlesnik
- The University of Auckland.
- Florida Institute of Technology and The Scott Center for Autism Treatment
7
Abstract
Worries about the reproducibility of experiments in the behavioral and social sciences arise from evidence that many published reports contain false positive results. Misunderstanding and misuse of statistical procedures are key sources of false positives. In behavior analysis, however, statistical procedures have not been used much. Instead, the investigator must show that the behavior of an individual is consistent over time within an experimental condition, that the behavior changes systematically across conditions, and that these changes can be reproduced - and then the whole pattern must be shown in additional individuals. These high standards of within- and between-subject replication protect behavior analysis from the publication of false positive findings. When a properly designed and executed experiment fails to replicate a previously published finding, the failure exposes flaws in our understanding of the phenomenon under study - perhaps in recognizing the boundary conditions of the phenomenon, identifying the relevant variables, or bringing the variables under sufficient control. We must accept the contradictory findings as valid and pursue an experimental analysis of the possible reasons. In this way, we resolve the contradiction and advance our science. To illustrate, two research programs are described, each initiated because of a replication failure.
Affiliation(s)
- Michael Perone
- Department of Psychology, West Virginia University, Morgantown, WV 26506-6040 USA
|
8
Kendall SB. Switching to Lower Density Reinforcement with Informative Stimuli. Psychol Rec 2017. [DOI: 10.1007/bf03394555]
9
Shahan TA, Cunningham P. Conditioned reinforcement and information theory reconsidered. J Exp Anal Behav 2015; 103:405-418. [PMID: 25766452] [DOI: 10.1002/jeab.142]
Abstract
The idea that stimuli might function as conditioned reinforcers because of the information they convey about primary reinforcers has a long history in the study of learning. However, formal application of information theory to conditioned reinforcement has been largely abandoned in modern theorizing because of its failures with respect to observing behavior. In this paper we show how recent advances in the application of information theory to Pavlovian conditioning offer a novel approach to conditioned reinforcement. The critical feature of this approach is that calculations of information are based on reductions of uncertainty about expected time to primary reinforcement signaled by a conditioned reinforcer. Using this approach, we show that previous failures of information theory with observing behavior can be remedied, and that the resulting framework produces predictions similar to Delay Reduction Theory in both observing-response and concurrent-chains procedures. We suggest that the similarity of these predictions might offer an analytically grounded reason for why Delay Reduction Theory has been a successful theory of conditioned reinforcement. Finally, we suggest that the approach provides a formal basis for the assertion that conditioned reinforcement results from Pavlovian conditioning and may provide an integrative approach encompassing both domains.
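The abstract's key move, computing information as a reduction of uncertainty about the expected time to primary reinforcement, builds on the temporal-information measure of Balsam and Gallistel, under which a signal conveys log2(C/T) bits when it changes the expected wait for reinforcement from C to T. A simplified sketch of that measure only (the schedule values are invented, and this is not the paper's full model):

```python
import math

def temporal_information(mean_wait_overall, mean_wait_given_signal):
    """Bits of information a signal conveys about time to reinforcement,
    in the style of Balsam and Gallistel: log2(C/T), where C is the mean
    wait without the signal and T the mean wait given the signal.
    Positive when the signal shortens the expected wait (delay reduction),
    negative when it lengthens it.
    """
    return math.log2(mean_wait_overall / mean_wait_given_signal)

# Hypothetical mixed schedule averaging 60 s to food: a stimulus signaling a
# 15-s component conveys 2 bits; one signaling a 105-s component, about -0.8.
bits_short = temporal_information(60, 15)
bits_long = temporal_information(60, 105)
```

Because the measure is anchored to expected time to reinforcement rather than to outcome probabilities alone, it yields delay-reduction-like predictions, which is the convergence with Delay Reduction Theory the abstract highlights.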
10
Kendall SB. A further study of choice and percentage reinforcement. Behav Processes 1985; 10:399-413. [PMID: 24897575] [DOI: 10.1016/0376-6357(85)90040-3]
Abstract
In Experiment I, four pigeons were trained on a concurrent-chains procedure with fixed-ratio 1 (FR 1) schedules in the initial components and fixed-time schedules in the terminal components. Pecking one of the keys when both keys were white initiated a fixed-time schedule on that key. A peck to the left key produced three stripes on the key, and at the termination of the fixed-time component food always occurred. Pecking the other key produced either a circle or a triangle: if a circle appeared, reinforcement occurred; if a triangle appeared, a brief timeout was given. Initially the stripes appeared on the left key and the circle and triangle on the right key; this was reversed during the course of the experiment. In addition, sessions were conducted in which both the circle and the triangle sometimes preceded reinforcement and sometimes timeout. For most birds under most conditions there was a preference for the key that produced the circle and triangle. When these stimuli were uncorrelated with reinforcement and timeout, three of the birds preferred the key producing 100% reinforcement. In Experiment II, three factors were varied and VI 20-s schedules were used in the initial links instead of FR 1. The results showed that pigeons preferred the 50% condition more (1) the greater the duration of the terminal links, (2) the smaller the value of the initial-link VI schedules, and (3) the lower the probability of food in the terminal link with stripes on the key.
Affiliation(s)
- S B Kendall
- Psychology Dept., University of Western Ontario, London, Ontario, Canada N6A 5C2
11
Everly JB, Holtyn AF, Perone M. Behavioral functions of stimuli signaling transitions across rich and lean schedules of reinforcement. J Exp Anal Behav 2014; 101:201-214. [DOI: 10.1002/jeab.74]
12
Fantino E. Judgment and decision making: Behavioral approaches. Behav Anal 1998; 21:203-218. [PMID: 22478308] [DOI: 10.1007/bf03391964]
Abstract
The area of judgment and decision making has given rise to the study of many interesting phenomena, including reasoning fallacies, which are also of interest to behavior analysts. Indeed, techniques and principles of behavior analysis may be applied to study these fallacies. This article reviews research from a behavioral perspective that suggests that humans are not the information-seekers we sometimes suppose ourselves to be. Nor do we utilize information effectively when it is presented. This is shown from the results of research utilizing matching to sample and other behavioral tools (monetary reward, feedback, instructional control) to study phenomena such as the conjunction fallacy, base-rate neglect, and probability matching. Research from a behavioral perspective can complement research from other perspectives in furthering our understanding of judgment and decision making.
13
Abstract
Psychologists have long been intrigued with the rationales that underlie our decisions. Similarly, the concept of conditioned reinforcement has a venerable history, particularly in accounting for behavior not obviously maintained by primary reinforcers. The studies of choice and of conditioned reinforcement have often developed in lockstep. Many contemporary approaches to these fundamental topics share an emphasis on context and on relative value. We trace the evolution of thinking about the potency of conditioned reinforcers from stimuli that were thought to acquire their value from pairings with more fundamental reinforcers to stimuli that acquire their value by being differentially correlated with these more fundamental reinforcers. We discuss some seminal experiments (including several that have been underappreciated) and some ongoing data, all of which have propelled us to the conclusion that the strength of conditioned reinforcers is determined by their signaling a relative improvement in the organism's relation to reinforcement.
14
Escobar R, Bruner CA. Observing responses and serial stimuli: searching for the reinforcing properties of the S-. J Exp Anal Behav 2009; 92:215-231. [PMID: 20354600 ] [DOI: 10.1901/jeab.2009.92-215]
Abstract
The control exerted by a stimulus associated with an extinction component (S-) on observing responses was determined as a function of its temporal relation with the onset of the reinforcement component. Lever pressing by rats was reinforced on a mixed random-interval extinction schedule. Each press on a second lever produced stimuli associated with the component of the schedule in effect. In Experiment 1 a response-dependent clock procedure that incorporated different stimuli associated with an extinction component of a variable duration was used. When a single S- was presented throughout the extinction component, the rate of observing remained relatively constant across this component. In the response-dependent clock procedure, observing responses increased from the beginning to the end of the extinction component. This result was replicated in Experiment 2, using a similar clock procedure but keeping the number of stimuli per extinction component constant. We conclude that the S- can function as a conditioned reinforcer, a neutral stimulus or as an aversive stimulus, depending on its temporal location within the extinction component.
15
Nevin JA, Smith LD, Roberts J. Does contingent reinforcement strengthen operant behavior? J Exp Anal Behav 1987; 48:17-33. [PMID: 16812487] [PMCID: PMC1338742] [DOI: 10.1901/jeab.1987.48-17]
Abstract
In Experiment 1, pigeons were trained to peck keys with equal food-reinforcement schedules in components that ended with either noncontingent or contingent transitions to a third component with a five-fold richer schedule. Response rates were higher in the initial component with contingent transitions, but resistance to prefeeding or extinction was not consistently greater. Experiment 2 also included noncontingent or contingent transitions to a signaled period of nonreinforcement. There was no effect of the contingency on transitions to nonreinforcement, but the difference in response rates maintained by contingent versus noncontingent transitions to the richer schedule was replicated. In addition, response rates were higher in components that preceded nonreinforcement than in components that preceded the richer schedule. However, resistance to extinction was greater for noncontingent transitions to the richer schedule than to nonreinforcement, implicating stimulus-reinforcer relations in the determination of resistance to change. Resistance to change was also somewhat greater for noncontingent than for contingent transitions to the richer schedule. The latter result, together with the results of Experiment 1 and related research, suggests that response-contingent reinforcement does not increase resistance to change.
16
Harsh J, Badia P. A temporal parameter influencing choice between signalled and unsignalled shock schedules. J Exp Anal Behav 1976; 25:327-333. [PMID: 16811916] [PMCID: PMC1333471] [DOI: 10.1901/jeab.1976.25-327]
Abstract
The present study investigated whether choice of a signalled variable-time shock schedule over an unsignalled one was influenced by the average intershock interval. Eight rats were given a choice between signalled and unsignalled shock schedules in a series of conditions with average intershock intervals of 510, 270, 150, 90, 60, and 45 sec. Each test condition was preceded by a training-baseline condition, and schedule values were arranged in an ascending (four subjects) or descending (four subjects) order. Choice of the signalled conditions was directly related to the average intershock interval of the variable-time schedule for six of the eight subjects. The per cent of time in the signalled condition was highest when the average intershock interval was 150 sec or longer and lowest when the average intershock interval was 45 sec. The findings were interpreted as being due to changes in the safety features of the signalled schedule, rather than to changes in the average intershock interval per se.
17
Case DA, Fantino E. The delay-reduction hypothesis of conditioned reinforcement and punishment: Observing behavior. J Exp Anal Behav 1981; 35:93-108. [PMID: 16812203] [PMCID: PMC1333025] [DOI: 10.1901/jeab.1981.35-93]
Abstract
Pigeons responded in an observing-response procedure in which three fixed-interval components alternated. Pecking one response key produced food reinforcement according to a mixed schedule. Pecking the second (observing) key occasionally replaced the mixed-schedule stimulus with the stimulus correlated with the fixed-interval component then in effect. In Experiment 1, observing was best maintained by stimuli correlated with a reduction in mean time to reinforcement. That finding was consistent with the conditioned-reinforcement hypothesis of observing behavior. However, low rates of observing were also maintained by stimuli not representing delay reduction. Experiment 2 assessed the role of sensory reinforcement. It showed that response rate was higher when maintained by stimuli uncorrelated with reinforcement delay than when the stimuli were correlated with a delay increase. This latter result supports a symmetrical version of the conditioned-reinforcement hypothesis that requires suppression by stimuli correlated with an increase in time to reinforcement. The results were inconsistent with hypotheses stressing the reinforcing potency of uncertainty reduction.
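For context, the delay-reduction hypothesis named in the title is commonly expressed as the conditioned value of a stimulus being proportional to (T - t)/T, where T is the average time to reinforcement in the situation and t is the time to reinforcement signaled by the stimulus. A minimal sketch under that formulation (the schedule values are invented for illustration):

```python
def delay_reduction(T, t):
    """Proportional delay reduction (T - t) / T signaled by a stimulus.

    T: average time to reinforcement from the mixed-schedule stimulus.
    t: time to reinforcement signaled by the component stimulus.
    Positive values predict conditioned reinforcement (maintained observing);
    negative values, under the symmetrical version the abstract supports,
    predict suppression.
    """
    return (T - t) / T

# Hypothetical mixed FI 30 s / FI 90 s: overall mean time to food is 60 s.
value_short = delay_reduction(60, 30)   # stimulus for the shorter FI: +0.5
value_long = delay_reduction(60, 90)    # stimulus for the longer FI: -0.5
```

On this account the short-FI stimulus should maintain observing while the long-FI stimulus should suppress it, the asymmetry-plus-suppression pattern the abstract contrasts with uncertainty-reduction accounts.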
18
Green L, Rachlin H. Pigeons' preferences for stimulus information: effects of amount of information. J Exp Anal Behav 1977; 27:255-263. [PMID: 16811988] [PMCID: PMC1333589] [DOI: 10.1901/jeab.1977.27-255]
Abstract
A concurrent-chain procedure was used to study pigeons' preferences as a function of amount of information. Pigeons chose between two terminal links. Both terminal links ended in food reinforcement with probability (p) and in blackout with probability (1-p). One terminal link (noninformative link) was signalled by a stimulus uncorrelated with either food or blackout. The other terminal link (informative link) was signalled by stimuli correlated with these outcomes. Amount of information conveyed by these stimuli was varied across conditions by changing the probability of reinforcement (p) and blackout (1-p). The pigeons strongly preferred the informative link, and preferences were greater at p values above 0.50 than for their complements. The pigeons engaged in different behaviors during the stimulus periods, suggesting that the value of informative stimuli may be in their function as discriminative stimuli for interim activities and terminal responses.
19
Abstract
In a concurrent-chains procedure, pigeons chose between equivalent mixed and multiple fixed-interval schedules of reinforcement. In the first experiment, preference for the multiple schedule was higher when the probability of the shorter fixed interval was less than .50 than for complementary points, an outcome consistent with the delay-reduction hypothesis of conditioned reinforcement and observing, but inconsistent with the uncertainty-reduction hypothesis which requires symmetrical preferences with a maximum when the two intervals are equiprobable. A second experiment assessed preference for equivalent mixed and multiple schedules when each choice outcome resulted in two reinforcements, one on the longer and one on the shorter fixed interval. The order of the two fixed intervals was determined probabilistically. Pigeons again preferred multiple to mixed schedules, although multiple-schedule preference did not vary systematically with the likelihood of the shorter fixed interval occurring first. The results from these choice procedures are consistent with those from the observing-response literature in suggesting that the strength of a stimulus cannot be well described as a function of the degree of uncertainty reduction the stimulus provides about reinforcement.
20
Perone M, Baron A. Reinforcement of human observing behavior by a stimulus correlated with extinction or increased effort. J Exp Anal Behav 1980; 34:239-261. [PMID: 16812189] [PMCID: PMC1333004] [DOI: 10.1901/jeab.1980.34-239]
Abstract
Young men pulled a plunger on mixed and multiple schedules in which periods of variable-interval monetary reinforcement alternated irregularly with periods of extinction (Experiment 1), or in which reinforcement was contingent on different degrees of effort in the two alternating components (Experiment 2). In the baseline conditions, the pair of stimuli correlated with the schedule components could be obtained intermittently by pressing either of two observing keys. In the main conditions, pressing one of the keys continued to produce both discriminative stimuli as appropriate. Pressing the other key produced only the stimulus correlated with variable-interval reinforcement or reduced effort; presses on this key were ineffective during periods of extinction or increased effort. In both experiments, key presses producing both stimuli occurred at higher rates than key presses producing only one, demonstrating enhancement of observing behavior by a stimulus correlated with the less favorable of two contingencies. A control experiment showed that stimulus change alone was not an important factor in the maintenance of the behavior. These findings suggest that negative as well as positive stimuli may play a role in the conditioned reinforcement of human behavior.
|
21
|
Perone M, Kaminski BJ. Conditioned reinforcement of human observing behavior by descriptive and arbitrary verbal stimuli. J Exp Anal Behav 2010; 58:557-75. [PMID: 16812679 PMCID: PMC1322102 DOI: 10.1901/jeab.1992.58-557] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
College students earned monetary reinforcers by pressing a key according to a compound schedule with variable-interval and extinction components. Pressing additional keys occasionally produced displays of either of two verbal stimuli; one was uncorrelated with the schedule components, and the other was correlated with the extinction component. In Experiments 1 and 2, the display area of the apparatus was blank unless an observing key was pressed, whereupon a descriptive message appeared. Most students preferred an uncorrelated stimulus stating that "Some of this time scores are TWICE AS LIKELY as normal, and some of this time NO SCORES can be earned" over a stimulus stating that "At this time NO SCORES can be earned." In Experiment 3, the display area indicated that "The Current Status of the Program is: NOT SHOWN." Presses on the observing keys replaced this message with stimuli that provided arbitrary labels for the schedule conditions. All of the students preferred a stimulus stating that "The Current Status of the Program is: B" over an uncorrelated stimulus stating that "The Current Status of the Program is: either A or B." Thus, under some circumstances, observing was maintained by a stimulus correlated with extinction, a finding that poses a challenge for Pavlovian accounts of conditioned reinforcement. Differences in the maintenance of observing by the descriptive and arbitrary stimuli may be attributed to differences in either the strength or nature of the instructional control exerted by the verbal stimuli.
|
22
|
Catania AC. Freedom and knowledge: an experimental analysis of preference in pigeons. J Exp Anal Behav 2010; 24:89-106. [PMID: 16811866 PMCID: PMC1333385 DOI: 10.1901/jeab.1975.24-89] [Citation(s) in RCA: 64] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
Abstract
Relative responding in initial links of concurrent-chain schedules showed that pigeons preferred free to forced choices and informative to uninformative stimuli. Variable-interval initial links on two lower keys (white) of a six-key chamber produced terminal links on either two upper-left keys (blue and/or amber) or two upper-right keys (green and/or red). Terminal links in which pecks on either of two lit keys produced fixed-interval reinforcement (free choice) were preferred to links with only one lit fixed-interval key available (forced choice). Terminal links with different key colors correlated with concurrent fixed-interval reinforcement and extinction (informative stimuli) were preferred to links with these schedules operating on same-color keys (uninformative stimuli). Scheduling extinction for one of the two free-choice keys assessed preference for two lit keys over one lit key, but confounded number with whether stimuli were informative. Fixed-interval reinforcement for both keys in each terminal link, but with different-color keys in one link and same-color keys in the other, showed that preference for informative stimuli did not depend on stimulus variety. Preferences were independent of relative responses per reinforcement and other properties of terminal-link performance.
|
23
|
Dinsmoor JA, Mulvaney DE, Jwaideh AR. Conditioned reinforcement as a function of duration of stimulus. J Exp Anal Behav 2010; 36:41-9. [PMID: 16812230 PMCID: PMC1333051 DOI: 10.1901/jeab.1981.36-41] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
Abstract
Pigeons were provided with three keys. Pecking the center key produced grain on a schedule that alternated at unpredictable times between a variable-interval component and extinction. On concurrent variable-interval schedules, pecking either side key produced a stimulus associated with the variable-interval component on the center key provided that said schedule was currently in effect. The independent variable was the length of time this stimulus remained on the keys. Pecking one side key produced the stimulus for 27 seconds, whereas the duration produced by pecking the other key varied for successive blocks of sessions. For the first four birds, the values tested were 3, 9, 27, and 81 seconds. For the second group, numbering three birds, the values tested were 1, 3, 9, and 27 seconds. The dependent variable was the proportion of total side key pecks that occurred on the variable key. For all birds, the function was positive in slope and negative in acceleration. This finding supports a formulation that ascribes the maintenance of observing responses in a normal setting to the fact that the subject exposes itself to the positive discriminative stimulus for a longer mean duration than it does to the negative stimulus.
|
24
|
Fantino E, Case DA. Human observing: Maintained by stimuli correlated with reinforcement but not extinction. J Exp Anal Behav 2010; 40:193-210. [PMID: 16812343 PMCID: PMC1347908 DOI: 10.1901/jeab.1983.40-193] [Citation(s) in RCA: 81] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
College students received points exchangeable for money (reinforcement) on a variable-time 60-second schedule that alternated randomly with an extinction component. Subjects were informed that responding would not influence either the rate or distribution of reinforcement. Instead, presses on either of two levers ("observing responses") produced stimuli. In each of four experiments, stimuli positively correlated with reinforcement and/or stimuli uncorrelated with reinforcement were each chosen over stimuli correlated with extinction. These results are consistent with prior results from pigeons in supporting the conditioned-reinforcement hypothesis of observing and in not supporting the uncertainty-reduction hypothesis.
|
25
|
Abstract
Pigeons made observing responses for stimuli signalling either a fixed-interval 30-sec schedule or a fixed-ratio x schedule, where x was either 20, 30, 100, 140, or 200 and the schedules alternated at random after reinforcement. If observing responses did not occur, food-producing responses occurred to a stimulus common to both reinforcement schedules. When the fixed-interval schedule was paired with a low-value fixed ratio, i.e., 20 or 30, the presentation of the stimulus reliably signalling the fixed-ratio schedule reinforced observing behavior, but the presentation of the stimulus reliably signalling the fixed-interval schedule did not. The converse was the case when the fixed-interval schedule was paired with a large-valued fixed ratio, i.e., 100, 140, or 200. The results demonstrated that the occasional presentation of the stimulus signalling the shorter interreinforcement interval was necessary for the maintenance of observing behavior. The reinforcement relationship was a function of the schedule context and was reversed by changing the context. Taken together, the results show that the establishment and measurement of conditioned reinforcement is dependent upon the context or environment in which stimuli reliably correlated with differential events occur.
|
26
|
Dinsmoor JA, Bowe CA, Dout DL, Martin LT, Mueller KL, Workman JD. Separating the effects of salience and disparity on the rate of observing. J Exp Anal Behav 2010; 40:253-64. [PMID: 16812348 PMCID: PMC1347936 DOI: 10.1901/jeab.1983.40-253] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
Abstract
Pigeons producing deliveries of grain on a mixed variable-interval, extinction schedule by pecking a center key could also produce discriminative stimuli on concurrent variable-interval schedules by pecking the left or right observing key. The stimuli produced by each observing key were varied independently. In the first experiment, the negative discriminative stimulus was at the far end of the spectrum from the key illumination accompanying the mixed schedule and from the positive discriminative stimulus. When the magnitude of the difference between the latter two stimuli (salience) was varied, more pecks occurred on the observing key producing the larger of the two differences than on the key producing the smaller difference. In the second experiment, the stimulus accompanying the mixed schedule was at the far end of the spectrum, and the magnitude of the difference between the two discriminative stimuli (disparity) was varied. The proportion of pecks occurring on each observing key shifted systematically in the direction of the key producing the larger difference. The salience of the discriminative stimuli and their disparity each has an independent influence on the frequency of observing when the other is controlled, but the effect of the salience appears to be the more substantial.
|
27
|
Mueller KL, Dinsmoor JA. The effect of negative stimulus presentations on observing-response rates. J Exp Anal Behav 2010; 46:281-91. [PMID: 16812463 PMCID: PMC1348267 DOI: 10.1901/jeab.1986.46-281] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
Theories of observing differ in predicting whether or not a signal for absence of reinforcement (S-) is capable of reinforcing observing responses. Experiments in which S- was first removed from and then restored to the procedure have yielded mixed results. The present experiments suggest that failure to control for the direct effect of presenting S- may have been responsible. Pigeons and operant procedures were used. Experiment 1 showed that presentations of S-, even when not contingent on observing, can raise the rate of an observing response that was reinforced only by presentations of a signal (S+) that accompanied a schedule of food delivery. Experiment 2 showed that this effect resulted from bursts of responding that followed offsets of S-. Experiment 3 showed that, when the presence of S- was held constant, lower rates occurred when S- was dependent on, rather than independent of, observing. These results support theories that characterize S- as incapable of reinforcing observing responses.
|
28
|
Shahan TA. Conditioned reinforcement and response strength. J Exp Anal Behav 2010; 93:269-89. [PMID: 20885815 PMCID: PMC2831656 DOI: 10.1901/jeab.2010.93-269] [Citation(s) in RCA: 48] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2009] [Accepted: 11/06/2009] [Indexed: 10/19/2022]
Abstract
Stimuli associated with primary reinforcers appear themselves to acquire the capacity to strengthen behavior. This paper reviews research on the strengthening effects of conditioned reinforcers within the context of contemporary quantitative choice theories and behavioral momentum theory. Based partially on the finding that variations in parameters of conditioned reinforcement appear not to affect response strength as measured by resistance to change, long-standing assertions that conditioned reinforcers do not strengthen behavior in a reinforcement-like fashion are considered. A signposts or means-to-an-end account is explored and appears to provide a plausible alternative interpretation of the effects of stimuli associated with primary reinforcers. Related suggestions that primary reinforcers also might not have their effects via a strengthening process are explored and found to be worthy of serious consideration.
Affiliation(s)
- Timothy A Shahan
- Department of Psychology, 2810 Old Main Hill, Utah State University, Logan, UT 84322, USA.
|
30
|
The multiple determinants of observing behavior. Behav Brain Sci 2010. [DOI: 10.1017/s0140525x00018045] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
31
|
Explaining classical conditioning: Phenomenological unity conceals mechanistic diversity. Behav Brain Sci 2010. [DOI: 10.1017/s0140525x00024638] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
34
|
Abstract
Converging data from different disciplines are showing the role of classical conditioning processes in the elaboration of human and animal behavior to be larger than previously supposed. Restricted views of classically conditioned responses as merely secretory, reflexive, or emotional are giving way to a broader conception that includes problem-solving, and other rule-governed behavior thought to be the exclusive province of either operant conditioning or cognitive psychology. These new views have been accompanied by changes in the way conditioning is conducted and evaluated. Data from a number of seemingly unrelated phenomena such as relapse to drug abuse by postaddicts, the placebo effect, and the immune response appear to involve classical conditioning processes. Classical conditioning, moreover, has been found to occur in simpler and simpler organisms and recently even demonstrated in brain slices and in utero. This target article will integrate the several research areas that have used the classical conditioning process as an explanatory model; it will challenge teleological interpretations of the classically conditioned CR and offer some basic principles for testing conditioning in diverse areas.
|
35
|
Flights of teleological fancy about classical conditioning do not produce valid science or useful technology. Behav Brain Sci 2010. [DOI: 10.1017/s0140525x0002464x] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
39
|
Secondary reinforcement: Still alive? Behav Brain Sci 2010. [DOI: 10.1017/s0140525x00018033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
|
40
|
Conditioning of sexual and reproductive behavior: Extending the hegemony to the propagation of species. Behav Brain Sci 2010. [DOI: 10.1017/s0140525x00024602] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
49
|
Some more information on observing and some more observations on information. Behav Brain Sci 2010. [DOI: 10.1017/s0140525x00057976] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]