1. Bodeker RRH, Grace RC. Effects of methamphetamine on probability discounting in rats using concurrent chains. Behav Processes 2024; 214:104971. [PMID: 38000519] [DOI: 10.1016/j.beproc.2023.104971]
Abstract
How stimulant drugs affect risky choice, and the role of reinforcement magnitude in those effects, has been an important question for research on impulsivity. This study investigated rats' responding on a rapid-acquisition, concurrent-chains probability discounting task under methamphetamine administration. In each block of four sessions, the probability of reinforcement delivery was unequal (0.5/1.0, 1.0/0.5) or equal (1.0/1.0, 0.5/0.5), while the amount of reinforcement was constant and unequal. This allowed for an estimate of probability discounting and of the magnitude effect (where larger reinforcers are discounted at a greater rate) in each block. Baseline, acute methamphetamine, chronic methamphetamine, and re-established baseline phases were completed. Rats showed sensitivity to probability and magnitude in baseline, as well as a magnitude effect whereby preference for the larger reinforcer was greater with 100% than with 50% reinforcement probability. Acute methamphetamine dose-dependently reduced the probability effect. There were no effects of chronic administration, and only probability discounting was maintained in the re-established baseline phase. This was the first procedure to find a magnitude effect with rats in a probability discounting procedure, and it demonstrates that acute methamphetamine reduces both the probability and magnitude effects, which increases the propensity for risky choice.
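Probability-discounting results like these are commonly summarized with the hyperbolic model of Rachlin, Raineri, and Cross (1991); the abstract does not state which model was fitted here, so the equations below are illustrative only. Subjective value V falls as the odds against reinforcement grow, and the magnitude effect corresponds to a larger discounting rate h for larger amounts A:

```latex
% Hyperbolic probability discounting (Rachlin, Raineri, & Cross, 1991)
% V: subjective value, A: reinforcer amount, h: discounting rate
% \Theta: odds against reinforcement, computed from probability p
V = \frac{A}{1 + h\,\Theta}, \qquad \Theta = \frac{1 - p}{p}
```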
2. Blejewski RC, Van Heukelom JT, Langford JS, Hunt KH, Rinkert IR, Wagner TJ, Pitts RC, Hughes CE. Behavioral mechanisms of oxycodone's effects in female and male rats: Reinforcement delay and impulsive choice. Exp Clin Psychopharmacol 2023; 31:1050-1068. [PMID: 37199913] [PMCID: PMC10656366] [DOI: 10.1037/pha0000646]
Abstract
μ-Opioid agonists (e.g., morphine) typically increase impulsive choice, which has been interpreted as an opioid-induced increase in sensitivity to reinforcement delay. Relatively little research on impulsive choice has been done with opioids other than morphine (e.g., oxycodone) or on sex differences in opioid effects. The present study investigated the effects of acute (0.1-1.0 mg/kg) and chronic (1.0 mg/kg twice/day) administration of oxycodone on choice controlled by reinforcement delay, a primary mechanism implicated in impulsive choice, in female and male rats. Rats responded under a concurrent-chains procedure designed to quantify the effects of reinforcement delay on choice within each session. For both sexes, choice was sensitive to delay under this procedure. Sensitivity to delay under baseline was slightly higher for males than for females, suggesting more impulsive choice in males. When given acutely, intermediate and higher doses of oxycodone decreased sensitivity to delay; this effect was larger and more reliable in males than in females. When given chronically, sex differences were also observed: tolerance developed to the sensitivity-decreasing effects in females, whereas sensitization developed in males. These data suggest that reinforcement delay may play an important role in sex differences in impulsive choice, as well as in the effects of acute and chronic administration of opioids on impulsive choice. However, drug-induced changes in impulsive choice could be related to at least two potential behavioral mechanisms: reinforcement delay and/or reinforcement magnitude. Effects of oxycodone on sensitivity to reinforcement magnitude remain to be fully characterized.
Affiliation(s)
- Jeremy S. Langford: Department of Psychology, University of North Carolina Wilmington; Department of Psychology, West Virginia University
- Katelyn H. Hunt: Department of Psychology, University of North Carolina Wilmington
- Thomas J. Wagner: Department of Psychology, University of North Carolina Wilmington
- Raymond C. Pitts: Department of Psychology, University of North Carolina Wilmington
3. Hughes CE, Langford JS, Van Heukelom JT, Blejewski RC, Pitts RC. A method for studying reinforcement factors controlling impulsive choice for use in behavioral neuroscience. J Exp Anal Behav 2022; 117:363-383. [PMID: 35506355] [DOI: 10.1002/jeab.751]
Abstract
Although procedures originating within the experimental analysis of behavior are commonly used in behavioral neuroscience to produce behavioral endpoints, they are used less often to analyze the behavioral processes involved, particularly at the level of individual organisms (see Soto, 2020). Concurrent-chains procedures have been used extensively to study choice and to quantify relations between various dimensions of reinforcement and preference. Unfortunately, parametric analysis of those relations using traditional steady-state, single-subject experimental designs can be time-consuming, often rendering these procedures impractical for use in behavioral neuroscience. The purpose of this paper is to describe how concurrent-chains procedures can be adapted to allow for parametric examination of the effects of the reinforcement dimensions involved in impulsive choice (magnitude and delay) within experimental sessions in rats. Data are presented indicating that this procedure can produce relatively consistent within-session estimates of sensitivity to reinforcement in individual subjects, and that these estimates can be modified by neurobiological manipulation (drug administration). These data suggest that this type of procedure offers a promising approach to the study of neurobiological mechanisms of complex behavior in individual organisms, which could facilitate a more fruitful relationship between behavior analysis and behavioral neuroscience.
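Within-session sensitivity estimates of the kind described above are conventionally obtained by fitting the generalized matching law to block-by-block response and reinforcer ratios. The sketch below is a minimal illustration of that fit, not the authors' code; the function name and the data are hypothetical:

```python
import numpy as np

def matching_sensitivity(resp_left, resp_right, rft_left, rft_right):
    """Least-squares fit of the generalized matching law,
        log(B_L / B_R) = a * log(R_L / R_R) + log b,
    where a is sensitivity to reinforcement and log b is bias."""
    x = np.log10(np.asarray(rft_left) / np.asarray(rft_right))
    y = np.log10(np.asarray(resp_left) / np.asarray(resp_right))
    a, log_b = np.polyfit(x, y, 1)  # slope = sensitivity, intercept = log bias
    return a, log_b

# Hypothetical per-block response and reinforcer counts from one session
a, log_b = matching_sensitivity([80, 60, 40, 20], [20, 40, 60, 80],
                                [9, 6, 3, 1], [1, 3, 6, 9])
```

Undermatching (a < 1) is the typical empirical result; drug effects are then expressed as changes in the fitted a.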
4. Yates JR, Ellis AL, Evans KE, Kappesser JL, Lilly KM, Mbambu P, Sutphin TG. Pair housing, but not using a controlled reinforcer frequency procedure, attenuates the modulatory effect of probability presentation order on amphetamine-induced changes in risky choice. Behav Brain Res 2020; 390:112669. [PMID: 32417278] [DOI: 10.1016/j.bbr.2020.112669]
Abstract
Probability discounting is often measured with independent schedules. Independent schedules have several limitations, such as confounding preference for one alternative with frequency of reward presentation and generating ceiling/floor effects at certain probabilities. To address these limitations, a controlled reinforcer frequency schedule can be used, in which the manipulandum that leads to reinforcement is pseudo-randomly determined before each trial. This schedule ensures that subjects receive equal presentations of the small and large magnitude reinforcers across each block of trials. A total of 24 pair-housed and 11 individually housed female Sprague Dawley rats were tested in a controlled reinforcer frequency procedure. For half of the rats, the odds against (OA) receiving the large magnitude reinforcer increased across the session (ascending schedule); for the other half, the OA decreased across the session (descending schedule). Following training, rats received treatments of amphetamine (AMPH; 0, 0.25, 0.5, 1.0 mg/kg; s.c.). For pair-housed rats, AMPH (0.5 mg/kg) increased risky choice regardless of probability presentation order, whereas a higher dose of AMPH (1.0 mg/kg) decreased discriminability of reinforcer magnitude for rats trained on the descending schedule only. For individually housed rats, probability presentation order modulated the effects of AMPH on probability discounting, as AMPH (0.25 and 0.5 mg/kg) increased risky choice in rats trained on the ascending schedule but not on the descending schedule. These results show that pair-housing animals, but not using a controlled reinforcer frequency procedure, attenuates the modulatory effects of probability presentation order on drug effects on risky choice.
Affiliation(s)
- Justin R Yates: Department of Psychological Science, Northern Kentucky University, 1 Nunn Drive, Highland Heights, KY 41099, USA
- Alexis L Ellis: Department of Psychological Science, Northern Kentucky University, 1 Nunn Drive, Highland Heights, KY 41099, USA
- Karson E Evans: Department of Psychological Science, Northern Kentucky University, 1 Nunn Drive, Highland Heights, KY 41099, USA
- Joy L Kappesser: Department of Biological Sciences, Northern Kentucky University, 1 Nunn Drive, Highland Heights, KY 41099, USA
- Kadyn M Lilly: Department of Psychological Science, Northern Kentucky University, 1 Nunn Drive, Highland Heights, KY 41099, USA
- Prodiges Mbambu: Department of Psychological Science, Northern Kentucky University, 1 Nunn Drive, Highland Heights, KY 41099, USA
- Tanner G Sutphin: Department of Psychological Science, Northern Kentucky University, 1 Nunn Drive, Highland Heights, KY 41099, USA
5. Yates JR, Prior NA, Chitwood MR, Day HA, Heidel JR, Hopkins SE, Muncie BT, Paradella-Bradley TA, Sestito AP, Vecchiola AN, Wells EE. Using a dependent schedule to measure risky choice in male rats: Effects of d-amphetamine, methylphenidate, and methamphetamine. Exp Clin Psychopharmacol 2020; 28:181-195. [PMID: 31120280] [PMCID: PMC7317298] [DOI: 10.1037/pha0000300]
Abstract
Risky choice is the tendency to choose a large, uncertain reward over a small, certain reward, and is typically measured with probability discounting, in which the probability of obtaining the large reinforcer decreases across blocks of trials. One caveat to traditional procedures is that independent schedules are used, in which subjects can show exclusive preference for one alternative relative to the other. For example, some rats show exclusive preference for the small, certain reinforcer as soon as delivery of the large reinforcer becomes probabilistic. Therefore, determining if a drug increases risk aversion (i.e., decreases responding for the probabilistic alternative) is difficult (due to floor effects). The overall goal of this experiment was to use a concurrent-chains procedure that incorporated a dependent schedule during the initial link, thus preventing animals from showing exclusive preference for one alternative relative to the other. To determine how pharmacological manipulations alter performance in this task, male Sprague-Dawley rats (n = 8) received injections of amphetamine (0, 0.25, 0.5, 1.0 mg/kg), methylphenidate (0, 0.3, 1.0, 3.0 mg/kg), and methamphetamine (0, 0.5, 1.0, 2.0 mg/kg). Amphetamine (0.25 mg/kg) and methylphenidate (3.0 mg/kg) selectively increased risky choice, whereas higher doses of amphetamine (0.5 and 1.0 mg/kg) and each dose of methamphetamine impaired stimulus control (i.e., flattened the discounting function). These results show that dependent schedules can be used to measure risk-taking behavior and that psychostimulants promote suboptimal choice when this schedule is used.
Affiliation(s)
- Emily E Wells: Department of Psychological and Brain Sciences, University of Louisville
6. Pigeons play the percentages: computation of probability in a bird. Anim Cogn 2018; 21:575-581. [PMID: 29797110] [DOI: 10.1007/s10071-018-1192-0]
Abstract
The ability to compute probability, previously shown in nonverbal infants, apes, and monkeys, was examined in three experiments with pigeons. After responding to individually presented keys in an operant chamber that delivered reinforcement with varying probabilities, pigeons chose between these keys on probe trials. Pigeons strongly preferred a 75% reinforced key over a 25% reinforced key, even when the total number of reinforcers obtained on each key was equated. When both keys delivered 50% reinforcement, pigeons showed indifference between them, even though three times more reinforcers were obtained on one key than on the other. It is suggested that computation of probability may be common to many classes of animals and may be driven by the need to forage successfully for nutritional food items, mates, and areas with a low density of predators.
7. Adaptive learning and forgetting in an unconventional experimental routine. Anim Cogn 2018; 21:315-329. [PMID: 29442251] [DOI: 10.1007/s10071-018-1168-0]
Abstract
Forgetting is often thought of as the inability to remember, but remembering and forgetting allow behavior to adapt to a changing environment in distinct and separable ways. Learning and forgetting were assessed concurrently in two pigeon experiments that involved the same unconventional routine where the schedule of reinforcement changed every session. Sessions were run back-to-back with a 23-h mid-session break such that in a single visit to the testing chamber, a pigeon completed the second half of one session and the first half of the next. The beginning of a new session was either signaled or unsignaled. Experiment 1 involved concurrent variable-interval variable-interval schedules with four possible reinforcer ratios. Response allocation was sensitive to the richer schedule and was retained through the mid-session break. Experiment 2 involved peak interval schedules of varying durations. Temporal discrimination was rapidly acquired before and after the mid-session break, but not retained. Signaling the session change decreased control by past contingencies in both experiments, demonstrating that learning and forgetting can be investigated separately. These results suggest that the temporal structure of training, such as multiple short daily sessions instead of one long session, can meaningfully impact measurement of animals' capacity to forget and remember.
8. Grace RC. Reprint of "Acquisition of choice in concurrent chains: Assessing the cumulative decision model". Behav Processes 2016; 127:74-85. [PMID: 27150444] [DOI: 10.1016/j.beproc.2016.04.014]
Abstract
Concurrent chains is widely used to study pigeons' choice between terminal links that can vary in delay, magnitude, or probability of reinforcement. We review research on the acquisition of choice in this procedure. Acquisition has been studied with a variety of research designs, and some studies have incorporated no-food trials to allow for timing and choice to be observed concurrently. Results show that: Choice can be acquired rapidly within sessions when terminal links change unpredictably; under steady-state conditions, acquisition depends on both initial- and terminal-link schedules; and initial-link responding is mediated by learning about the terminal-link stimulus-reinforcer relations. The cumulative decision model (CDM) proposed by Christensen and Grace (2010) and Grace and McLean (2006, 2015) provides a good description of within-session acquisition, and correctly predicts the effects of initial and terminal-link schedules in steady-state designs (Grace, 2002a). Questions for future research include how abrupt shifts in preference within individual sessions and temporal control of terminal-link responding can be modeled.
9. Grace RC. Acquisition of choice in concurrent chains: Assessing the cumulative decision model. Behav Processes 2016; 126:82-93. [DOI: 10.1016/j.beproc.2016.03.011]
10. Subramaniam S, Kyonka EG. Environmental dynamics modulate covariation of choice and timing. Behav Processes 2016; 124:130-40. [DOI: 10.1016/j.beproc.2016.01.005]
11. Killeen PR. The logistics of choice. J Exp Anal Behav 2015; 104:74-92. [DOI: 10.1002/jeab.156]
12. Pope DA, Newland MC, Hutsell BA. Delay-specific stimuli and genotype interact to determine temporal discounting in a rapid-acquisition procedure. J Exp Anal Behav 2015; 103:450-71. [PMID: 25869302] [DOI: 10.1002/jeab.148]
Abstract
The importance of delay discounting to many socially important behavior problems has stimulated investigations of biological and environmental mechanisms responsible for variations in the form of the discount function. The extant experimental research, however, has yielded disparate results, raising important questions regarding Gene X Environment interactions. The present study determined the influence of stimuli that uniquely signal delays to reinforcement on delay discounting in two inbred mouse strains using a rapid-acquisition procedure. BALB/c and C57BL/6 mice responded under a six-component, concurrent-chained schedule in which the terminal-link delays preceding the larger reinforcer were presented randomly across components of an individual session. Across conditions, components were presented either with or without delay-specific auditory stimuli, i.e., as multiple or mixed schedules. A generalized matching-based model was used to incorporate the impact of current and previous component reinforcer-delay ratios on current component response allocation. Sensitivity to reinforcer magnitude and delay was higher for BALB/c mice, but within-component preference reached final levels faster for C57BL/6 mice. For BALB/c mice, acquisition of preference across blocks of a component was faster under the multiple than the mixed schedule, but final levels of sensitivity to reinforcement were unaffected by schedule. The speed of acquisition of preference was not different across schedules for C57BL/6 mice, but sensitivity to reinforcement was higher under the multiple than the mixed schedule. Overall, differences in the acquisition and final form of the discount function were determined by a Gene X Environment interaction, but the presence of delay-specific stimuli attenuated genotype-dependent differences in magnitude and delay sensitivity.
13.

14. Kyonka EG. Quantifying transitions in response allocation with change point analysis in concurrent chains. Behav Processes 2014; 104:91-8. [DOI: 10.1016/j.beproc.2014.02.013]
15. Pitts RC. Reconsidering the concept of behavioral mechanisms of drug action. J Exp Anal Behav 2014; 101:422-41. [PMID: 24585427] [DOI: 10.1002/jeab.80]
Abstract
A half-century of research in behavioral pharmacology leaves little doubt that behavior-environment contingencies can determine the behavioral effects of drugs. Unfortunately, a coherent behavior-analytic framework within which to characterize the myriad ways in which contingencies interact with drugs, and to predict effects of a given drug under a given set of conditions, still has not developed. Some behavioral pharmacologists have suggested the concept of behavioral mechanisms of drug action as a foundation for such a framework. The notion of behavioral mechanisms, however, does not seem to have been fully embraced by behavioral pharmacologists. It is suggested here that one reason for this is that the concept itself has not been sufficiently clarified (i.e., stimulus control over use of the phrase is not sufficiently precise). Furthermore, early behavioral pharmacologists may not have possessed an adequate set of analytic tools to develop a viable framework based upon behavioral mechanisms. In the first part of this paper, the notion of behavioral mechanisms of drug action is explored, and the sort of data that might provide evidence of a behavioral mechanism is considered. In the second part, it is suggested that the increased availability of quantitative models in behavior analysis may help provide the tools needed for elucidating behavioral mechanisms of drug action. Some examples of how these models have been, and could be, used are provided.
16. Hutsell BA, Jacobs EA. Rapid acquisition of bias in signal detection: dynamics of effective reinforcement allocation. J Exp Anal Behav 2012; 97:29-49. [PMID: 22287803] [DOI: 10.1901/jeab.2012.97-29]
Abstract
We investigated changes in bias (preference for one response alternative) in signal detection when relative reinforcer frequency for correct responses varied across sessions. In Experiment 1, 4 rats responded in a two-stimulus, two-response identification procedure employing temporal stimuli (short vs. long houselight presentations). Relative reinforcer frequency varied according to a 31-step pseudorandom binary sequence and stimulus duration difference varied over two values across conditions. In Experiment 2, 3 rats responded in a five-stimulus, two-response classification procedure employing temporal stimuli. Relative reinforcer frequency was varied according to a 36-step pseudorandom ternary sequence. Results of both experiments were analyzed according to a behavioral model of detection. The model was extended to incorporate the effects of current and previous session reinforcer frequency ratios on current-session performance. Similar to findings with concurrent schedules, effects on bias of relative reinforcer frequency were highest for the current session. However, carryover from reinforcer ratios of previous sessions was evident. Generally, the results indicate that bias can come under control of frequent changes in relative reinforcer frequency in both identification and classification procedures.
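The carryover analysis described in this abstract — current-session bias influenced by current and previous sessions' reinforcer frequency ratios — can be sketched as a lagged linear regression. This is a simplified stand-in for the behavioral detection model the authors actually extended; the function name and the simulated data are hypothetical:

```python
import numpy as np

def lagged_bias_weights(log_rft_ratios, log_bias, n_lags=3):
    """Regress each session's log response bias on the log reinforcer
    ratios of the current session and the previous n_lags - 1 sessions.
    Returns one weight per lag (lag 0 = current session)."""
    r = np.asarray(log_rft_ratios, dtype=float)
    y = np.asarray(log_bias, dtype=float)
    # Column j holds the reinforcer ratio from j sessions back
    X = np.column_stack([r[n_lags - 1 - j : len(r) - j] for j in range(n_lags)])
    w, *_ = np.linalg.lstsq(X, y[n_lags - 1:], rcond=None)
    return w

# Simulated noiseless data: a 31-step pseudorandom binary sequence of log
# reinforcer ratios and a bias that mixes the current and two prior sessions
rng = np.random.default_rng(0)
r = rng.choice([-0.5, 0.5], size=31)
true_w = np.array([0.6, 0.2, 0.05])
y = np.array([true_w @ r[t - 2:t + 1][::-1] if t >= 2 else 0.0
              for t in range(31)])
w = lagged_bias_weights(r, y)
```

With noiseless data the lag weights are recovered exactly; declining weights across lags correspond to the finding that the current session's reinforcer ratio exerts the strongest control, with measurable carryover from earlier sessions.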
Affiliation(s)
- Blake A Hutsell: Department of Psychology, 226 Thach Hall, 342 W. Thach Ave, Auburn University, Auburn, Alabama 36849-5212, USA
17. Pigeon and human performance in a multi-armed bandit task in response to changes in variable interval schedules. Learn Behav 2011; 39:245-58. [PMID: 21380732] [DOI: 10.3758/s13420-011-0025-7]
Abstract
The tension between exploitation of the best options and exploration of alternatives is a ubiquitous problem that all organisms face. To examine this trade-off across species, pigeons and people were trained on an eight-armed bandit task in which the options were rewarded on a variable interval (VI) schedule. At regular intervals, each option's VI changed, thus encouraging dynamic increases in exploration in response to these anticipated changes. Both species showed sensitivity to the payoffs that was often well modeled by Luce's (1963) decision rule. For pigeons, exploration of alternative options was driven by experienced changes in the payoff schedules, not the beginning of a new session, even though each session signaled a new schedule. In contrast, people quickly learned to explore in response to signaled changes in the payoffs.
18. Kyonka EGE, Grace RC. Rapid acquisition of choice and timing and the provenance of the terminal-link effect. J Exp Anal Behav 2011; 94:209-25. [PMID: 21451749] [DOI: 10.1901/jeab.2010.94-209]
Abstract
Eight pigeons responded in a concurrent-chains procedure in which terminal-link schedules changed pseudorandomly across sessions. Pairs of terminal-link delays summed either to 15 s or to 45 s. Across sessions, the location of the shorter terminal link changed according to a pseudorandom binary sequence. On some terminal links, food was withheld to obtain start and stop times, measures of temporal control. Log initial-link response ratios stabilized within the first half of each session. Log response ratio was a monotonically increasing but nonlinear function of programmed log terminal-link immediacy ratio. There was an effect of absolute terminal-link duration on log response ratio: For most subjects, preference for the relatively shorter terminal-link delay was stronger when absolute delays were long than when absolute delays were short. Polynomial regressions and model comparison showed that differences in degree of nonlinearity, not in sensitivity to log immediacy ratio, produced this effect. Temporal control of stop times was timescale invariant with scalar variability, but temporal control of start times was not consistent across subjects or terminal-link durations.
Affiliation(s)
- Elizabeth G E Kyonka: Department of Psychology, West Virginia University, Morgantown, WV 26506-6040, USA
19. Rodewald AM, Hughes CE, Pitts RC. Development and maintenance of choice in a dynamic environment. J Exp Anal Behav 2011; 94:175-95. [PMID: 21451747] [DOI: 10.1901/jeab.2010.94-175]
Abstract
Four pigeons were exposed to a concurrent procedure similar to that used by Davison, Baum, and colleagues (e.g., Davison & Baum, 2000, 2006) in which seven components were arranged in a mixed schedule, and each programmed a different left∶right reinforcer ratio (1∶27, 1∶9, 1∶3, 1∶1, 3∶1, 9∶1, 27∶1). Components within each session were presented randomly, lasted for 10 reinforcers each, and were separated by 10-s blackouts. These conditions were in effect for 100 sessions. When data were aggregated over Sessions 16-50, the present results were similar to those reported by Davison, Baum, and colleagues: (a) preference adjusted rapidly (i.e., sensitivity to reinforcement increased) within components; (b) preference for a given alternative increased with successive reinforcers delivered via that alternative (continuations), but was substantially attenuated following a reinforcer on the other alternative (a discontinuation); and (c) food deliveries produced preference pulses (immediate, local, increases in preference for the just-reinforced alternative). The same analyses were conducted across 10-session blocks for Sessions 1-100. In general, the basic structure of choice revealed by analyses of data from Sessions 16-50 was preserved at a smaller level of aggregation (10 sessions), and it developed rapidly (within the first 10 sessions). Some characteristics of choice, however, changed systematically across sessions. For example, effects of successive reinforcers within a component tended to increase across sessions, as did the magnitude and length of the preference pulses. Thus, models of choice under these conditions may need to take into account variations in behavior allocation that are not captured completely when data are aggregated over large numbers of sessions.
Affiliation(s)
- Andrew M Rodewald: Department of Psychology, Utah State University, Logan, Utah 84322, USA
20.

Abstract
Choice may be defined as the allocation of behavior among activities. Since all activities take up time, choice is conveniently thought of as the allocation of time among activities, even if activities like pecking are most easily measured by counting. Since dynamics refers to change through time, the dynamics of choice refers to change of allocation through time. In the dynamics of choice, as in other dynamical systems that include feedback, change is away from perturbation and toward a steady state. Steady state or equilibrium is assessed on a longer time scale than change because change is only visible on a smaller time scale. When we compare laws of equilibrium, such as the matching law with laws of dynamics, two possibilities emerge. Self-similarity occurs when the same law can be seen across smaller time scales, with the result that the law at longer time scales may be understood as the expression of its application at smaller time scales. Reduction occurs when the dynamics at a small time scale are incommensurate with the dynamics at longer time scales. Then the process at the longer time scale is reduced to a qualitatively different process at the smaller time scale, as when choice is reduced to switching patterns. When reduction occurs, the dynamics at the longer time scale may be derived from the process at the smaller time scale, but not the other way around. Research at different time scales is facilitated by the molar view of behavior.
|
21
|
Christensen DR, Grace RC. A decision model for steady-state choice in concurrent chains. J Exp Anal Behav 2010; 94:227-40. [PMID: 21451750 PMCID: PMC2929087 DOI: 10.1901/jeab.2010.94-227] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2009] [Accepted: 05/18/2010] [Indexed: 10/18/2022]
Abstract
Grace and McLean (2006) proposed a decision model for acquisition of choice in concurrent chains which assumes that after reinforcement in a terminal link, subjects make a discrimination whether the preceding reinforcer delay was short or long relative to a criterion. Their model was subsequently extended by Christensen and Grace (2008, 2009a, 2009b) to include effects of initial- and terminal-link duration on choice. We show that an expression for steady-state responding can be derived from the decision model, which enables a model for choice that provides an account of archival data that is equal or superior to the contextual choice model (Grace, 1994) and hyperbolic value-added model (Mazur, 2001) in terms of goodness of fit, parsimony, and parameter invariance. The success of the steady-state decision model validates the strategy of understanding acquisition phenomena as a bridge toward explaining choice at the molar level.
|
22
|
Rodewald AM, Hughes CE, Pitts RC. Choice in a variable environment: Effects of d-amphetamine on sensitivity to reinforcement. Behav Processes 2010; 84:460-4. [DOI: 10.1016/j.beproc.2010.02.008] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2009] [Revised: 02/10/2010] [Accepted: 02/14/2010] [Indexed: 11/29/2022]
|
23
|
Stilling ST, Critchfield TS. The matching relation and situation-specific bias modulation in professional football play selection. J Exp Anal Behav 2010; 93:435-54. [PMID: 21119855 PMCID: PMC2861879 DOI: 10.1901/jeab.2010.93-435] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2009] [Accepted: 02/21/2010] [Indexed: 10/18/2022]
Abstract
The utility of a quantitative model depends on the extent to which its fitted parameters vary systematically with environmental events of interest. Professional football statistics were analyzed to determine whether play selection (passing versus rushing plays) could be accounted for with the generalized matching equation, and in particular whether variations in play selection across game situations would manifest as changes in the equation's fitted parameters. Statistically significant changes in bias were found for each of five types of game situations; no systematic changes in sensitivity were observed. Further analyses suggested relationships between play selection bias and both turnover probability (which can be described in terms of punishment) and yards-gained variance (which can be described in terms of variable-magnitude reinforcement schedules). The present investigation provides a useful demonstration of association between face-valid, situation-specific effects in a domain of everyday interest, and a theoretically important term of a quantitative model of behavior. Such associations, we argue, are an essential focus in translational extensions of quantitative models.
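The generalized matching equation referred to above can be fitted by ordinary least squares on log behavior and reinforcer ratios. The following is a minimal sketch, not the authors' analysis: the reinforcer ratios and parameter values are illustrative, chosen so the fit recovers them exactly from noiseless data.

```python
import numpy as np

# Generalized matching equation (Baum, 1974):
#   log(B1/B2) = a * log(R1/R2) + log b
# where a is sensitivity and log b is bias.
# Illustrative reinforcer ratios and assumed parameters (not study data):
r_ratio = np.array([1/27, 1/9, 1/3, 1, 3, 9, 27])  # R1/R2
true_a, true_log_b = 0.8, 0.1

x = np.log10(r_ratio)
y = true_a * x + true_log_b        # noiseless log behavior ratios

# Ordinary least squares recovers sensitivity (slope) and bias (intercept).
a_hat, log_b_hat = np.polyfit(x, y, 1)
print(f"sensitivity = {a_hat:.2f}, log bias = {log_b_hat:.2f}")
```

With real play-selection data, `y` would be the observed log ratios of passing to rushing plays, and systematic shifts in the fitted intercept across game situations would correspond to the bias changes the study reports.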
|
24
|
Serial discrimination reversal learning in pigeons as a function of intertrial interval and delay of reinforcement. Learn Behav 2010; 38:96-102. [PMID: 20065353 DOI: 10.3758/lb.38.1.96] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Pigeons learned a series of reversals of a simultaneous red-green visual discrimination. Delay of reinforcement (0 vs. 2 sec) and intertrial interval (ITI; 4 vs. 40 sec) were varied across blocks of reversals. Learning was faster with 0-sec than with 2-sec delays for both ITI values and faster with 4-sec ITIs than with 40-sec ITIs for both delays. Furthermore, improvement in learning across successive reversals was evident throughout the experiment, even after more than 120 reversals. The potent effects of small differences in reinforcement delay provide evidence for associative accounts and appear to be incompatible with accounts of choice that attempt to encompass the effects of temporal parameters in terms of animals' timing of temporal intervals.
|
25
|
Rapid acquisition of preference in concurrent schedules: Effects of d-amphetamine on sensitivity to reinforcement amount. Behav Processes 2009; 81:238-43. [DOI: 10.1016/j.beproc.2009.01.001] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2008] [Revised: 12/21/2008] [Accepted: 01/02/2009] [Indexed: 11/21/2022]
|
26
|
Reid AK, Shahan TA, Grace RC. SQAB 2008: More than the usual suspects. Behav Processes 2009; 81:149-53. [DOI: 10.1016/j.beproc.2009.03.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
27
|
Response allocation in a rapid-acquisition concurrent-chains procedure: Effects of overall terminal-link duration. Behav Processes 2009; 81:233-7. [DOI: 10.1016/j.beproc.2009.01.006] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2008] [Revised: 01/19/2009] [Accepted: 01/21/2009] [Indexed: 11/21/2022]
|
28
|
Kyonka EGE, Grace RC. Effects of unpredictable changes in initial-link duration on choice and timing. Behav Processes 2009; 81:227-32. [PMID: 19429216 DOI: 10.1016/j.beproc.2008.12.024] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2008] [Revised: 12/22/2008] [Accepted: 12/22/2008] [Indexed: 11/16/2022]
Abstract
Four pigeons responded in a concurrent-chains procedure in which terminal-link schedules were fixed-interval (FI) 10 s and FI 20 s. Across sessions, the location of the shorter terminal link changed according to a pseudorandom binary sequence. Each session, the variable-interval initial-link schedule value was sampled from a uniform distribution that ranged from 0.01 s to 30 s. On some terminal links, food was withheld to obtain measures of temporal control. Terminal-link delays determined both choice measures (log initial-link response ratios) and timing measures (start and stop times on no-food trials), which stabilized within the first half of each session. Preference for the shorter terminal-link delay was a monotonically decreasing function of initial-link duration. There was no evidence of control by initial-link durations from previous sessions.
|
29
|
Christensen DR, Grace RC. Response allocation in concurrent chains when terminal-link delays follow an ascending and descending series. J Exp Anal Behav 2009; 91:1-20. [PMID: 19230509 PMCID: PMC2614812 DOI: 10.1901/jeab.2009.91-1] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2008] [Accepted: 09/13/2008] [Indexed: 11/22/2022]
Abstract
Eight pigeons were trained in a concurrent-chains procedure in which the terminal-link immediacy ratio followed an ascending or descending series. Across sessions, one terminal-link delay changed from 2 s to 32 s to 2 s or from 32 s to 2 s to 32 s, while the other was always 8 s. For all pigeons, response allocation tracked changes in delay and was biased towards the 8-s alternative on the descending series, indicating a hysteresis effect, and was more sensitive to changes in the terminal-link delay ratio for relatively long (> 8 s) than short (< 8 s) delays. Both the hysteresis and effect of delay duration were predicted by an extended version of Grace and McLean's (2006) decision model. The extended decision model provided an overall better account of the results than a simple linear-operator model (Grace, 2002), and holds promise for an integrated account of choice in concurrent chains for both acquisition and steady-state conditions.
|
30
|
Soreth ME, Hineline PN. The probability of small schedule values and preference for random-interval schedules. J Exp Anal Behav 2009; 91:89-103. [PMID: 19230514 PMCID: PMC2614820 DOI: 10.1901/jeab.2009.91-89] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2008] [Accepted: 10/01/2008] [Indexed: 10/20/2022]
Abstract
Preference for working on variable schedules and temporal discrimination were simultaneously examined in two experiments using a discrete-trial, concurrent-chains arrangement with fixed interval (FI) and random interval (RI) terminal links. The random schedule was generated by first sampling a probability distribution after the programmed delay to reinforcement on the FI schedule had elapsed, and thus the RI never produced a component schedule value shorter than the FI and maintained a rate of reinforcement half that of the FI. Despite these features, the FI was not strongly preferred. The probability of obtaining the smallest programmed delay to reinforcement on the RI schedule was manipulated in Experiment 1, and the interaction of this probability and initial link length was examined in Experiment 2. As the probability of obtaining small values in the RI increased, preference for the schedule increased while the discriminated time of reinforcer availability in the terminal link decreased. Both of these effects were attenuated by lengthening the initial links. The results support the view that in addition to the delay to reinforcement, the probability of obtaining a short delay is an important choice-affecting variable that likely contributes to the robust preferences for variable, as opposed to fixed, schedules of reinforcement.
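The random-interval arrangement described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' code: the function name, the per-tick probability gate, and the tick size are all hypothetical, chosen only to show why the RI delay can never be shorter than the FI delay.

```python
import random

def ri_terminal_link_delay(fi_delay, p, tick=1.0):
    """Illustrative sketch: once the fixed-interval delay has elapsed,
    a probability gate with success probability p is sampled each tick
    until it pays off, so the returned RI delay is always >= fi_delay."""
    delay = fi_delay
    while random.random() >= p:   # gate closed: wait one more tick
        delay += tick
    return delay
```

Under this sketch, raising `p` concentrates the RI distribution at its smallest programmed value, which is the variable the first experiment manipulated; the expected extra delay beyond the FI shrinks toward zero as `p` approaches 1.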
|
31
|
Rapid acquisition in concurrent chains: Effects of initial-link duration. Behav Processes 2008; 78:217-23. [DOI: 10.1016/j.beproc.2008.01.006] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2007] [Accepted: 01/09/2008] [Indexed: 11/21/2022]
|
32
|
Kyonka EG. The matching law and effects of reinforcer rate and magnitude on choice in transition. Behav Processes 2008; 78:210-6. [DOI: 10.1016/j.beproc.2007.12.003] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2007] [Accepted: 12/04/2007] [Indexed: 11/24/2022]
|