1. Shimp CP. Molecular (moment-to-moment) and molar (aggregate) analyses of behavior. J Exp Anal Behav 2020;114:394-429. [DOI: 10.1002/jeab.626]
2. Nighbor TD, Oliver AC, Lattal KA. Resurgence without overall worsening of alternative reinforcement. Behav Processes 2020;179:104219. [PMID: 32777262] [DOI: 10.1016/j.beproc.2020.104219]
Abstract
Three experiments were conducted with pigeons to assess discriminated periods of nonreinforcement as precipitators of resurgence. Each experiment occurred in three phases. In the Training phase, key pecking was reinforced according to variable-interval schedules that alternated between two response keys (Experiment 1) or were concurrently available on two response keys (Experiments 2a and 2b). In the Alternative-Reinforcement phase, responding on one key was extinguished while responding on the other was reinforced according to tandem schedules. These were then replaced by chained schedules with the same programmed reinforcement rate in the Resurgence-Test phase. Resurgence occurred both when the signaled period of nonreinforcement was a darkened keylight in the terminal link of the chained schedule (Experiment 1) and when it was a darkened keylight (Experiment 2a) or a keylight color change (Experiment 2b) in the initial link. Thus, signaled periods of extinction without accompanying reductions in reinforcement rate precipitated resurgence, suggesting that resurgence is not solely the result of a worsening of overall reinforcement conditions but also occurs when local conditions of reinforcement worsen.
3. Nighbor TD, Ozga-Hess JE, Anderson KG, Lattal KA. Contingency tracking during unsignaled delayed reinforcement: Effects of delay duration and d-amphetamine. J Exp Anal Behav 2019;111:479-492. [PMID: 31038206] [DOI: 10.1002/jeab.518]
Abstract
In two experiments, the role of the response-reinforcer relation in maintaining low-rate responding under unsignaled-delay conditions was investigated. In both experiments, pecking by pigeons on one response key, denoted the relevant key, was reinforced under an unsignaled delay-of-reinforcement procedure (a tandem variable-interval [VI] differential-reinforcement-of-other-behavior [DRO] schedule). Responding on a second key, denoted the irrelevant key, had no programmed consequences. Between sessions, the location of the relevant key varied pseudorandomly (after one, two, or three sessions). In Experiment 1, the delay (DRO) duration was manipulated parametrically. Overall, proportional relevant-key response rates (relevant-key response rates / [relevant-key response rates + irrelevant-key response rates]) increased across three-session sequences in which the relevant key remained in the same location and decreased as the DRO duration increased (2, 5, and 10 s). In Experiment 2, acute administration of d-amphetamine increased proportional relevant-key response rates during one-day sequences only at the DRO 5-s duration, and results over three-day sequences, once a discrimination had already been established, were inconsistent. The results support the view that the response-reinforcer relation is the primary determinant of responding and that such discriminations are relatively resistant to disruption or potentiation by behaviorally active doses of d-amphetamine.
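The proportion defined in this abstract is simple arithmetic; as an illustration only (the function name and example rates are mine, not the authors'), it can be computed as:

```python
def proportional_relevant_rate(relevant: float, irrelevant: float) -> float:
    """Proportion of responding allocated to the relevant key, per the
    formula quoted above: relevant / (relevant + irrelevant)."""
    total = relevant + irrelevant
    if total == 0:
        raise ValueError("no responding observed on either key")
    return relevant / total

# 45 relevant pecks/min against 15 irrelevant pecks/min gives 45/60 = 0.75.
print(proportional_relevant_rate(45, 15))  # -> 0.75
```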
4. Oliver AC, Nighbor TD, Lattal KA. Reinforcer magnitude and resurgence. J Exp Anal Behav 2018;110:440-450. [PMID: 30431659] [DOI: 10.1002/jeab.481]
5. Nighbor TD, Kincaid SL, O'Hearn CM, Lattal KA. Stimulus contributions to operant resurgence. J Exp Anal Behav 2018;110:243-251. [PMID: 30047134] [DOI: 10.1002/jeab.463]
Abstract
In two experiments, pigeons were exposed to a three-phase resurgence procedure (train Response A; extinguish Response A and train Response B; extinguish Response B). In the first experiment, the stimuli associated with the phases differed, resulting in a resurgence procedure combined with an ABC renewal procedure. Presenting the novel stimulus, C, during extinction of both responses in the third phase resulted in minimal resurgence. Subsequently, substituting the original training Stimulus A for Stimulus C resulted in resurgence with all pigeons. In the second experiment, resurgence with the same stimuli present in all three phases (AAA) was compared concurrently with a resurgence procedure on which the ABC renewal procedure of Experiment 1 was superimposed. Substantially more resurgence occurred with the AAA procedure than with the ABC procedure. Although ABC renewal combined with the resurgence procedure generated some resurgence, such recurrent responding was attenuated relative to that observed when stimulus conditions were constant across phases. Combined with earlier research showing the enhancing effects of combining resurgence and ABA renewal procedures, the present results elaborate on how stimuli correlated with particular behavioral histories affect the course of operant resurgence.
6. Gomes-Ng S, Landon J, Elliffe D, Bensemann J, Cowie S. The effects of changeover delays on local choice. Behav Processes 2018;150:36-46. [DOI: 10.1016/j.beproc.2018.02.019]
7. Killeen PR. The logistics of choice. J Exp Anal Behav 2015;104:74-92. [DOI: 10.1002/jeab.156]
8. Simultaneously observing concurrently-available schedules as a means to study the near miss event in simulated slot machine gambling. Psychological Record 2014. [DOI: 10.1007/s40732-014-0095-y]
9. Oliveira L, Green L, Myerson J. Pigeons' delay discounting functions established using a concurrent-chains procedure. J Exp Anal Behav 2014;102:151-161. [PMID: 25044322] [DOI: 10.1002/jeab.97]
Abstract
Several studies have examined discounting by pigeons and rats using concurrent-chains procedures, but the results have been inconsistent. None of these studies, however, established that discounting functions derived from estimates of indifference points can be obtained with a concurrent-chains procedure, so their validity remains in doubt. The present study used a concurrent-chains procedure within sessions, combined with an adjusting-amount procedure across sessions, to determine the present subjective values of food reinforcers to be obtained after a delay. Discounting was well described by the hyperbolic discounting function, suggesting that the concurrent-chains procedure and the more typical adjusting-amount procedure measure the same process. Consistent with previous studies of rats and pigeons using adjusting-amount procedures, no significant effect of the amount of the delayed reinforcer on the degree of discounting was observed, suggesting that the amount effect may be unique to humans, a finding consistent with the view that animals' choices are controlled by the relative, rather than the absolute, value of reinforcers.
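The hyperbolic discounting function referred to here is conventionally written V = A / (1 + kD) (Mazur's form), where V is subjective value, A the reinforcer amount, D the delay, and k a fitted discounting-rate parameter. A minimal sketch, with illustrative parameter values rather than values from the study:

```python
def hyperbolic_value(amount: float, delay: float, k: float) -> float:
    """Subjective (discounted) value of a delayed reinforcer,
    V = A / (1 + k * D); larger k means steeper discounting."""
    return amount / (1.0 + k * delay)

# With k = 0.5, a 10-unit reinforcer delayed by 2 s is valued at 10 / 2 = 5.
print(hyperbolic_value(10.0, 2.0, 0.5))  # -> 5.0
```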
10. Response elimination, reinforcement rate and resurgence of operant behavior. Behav Processes 2013;100:91-102. [DOI: 10.1016/j.beproc.2013.07.027]
11. Podlesnik CA, Jimenez-Gomez C, Shahan TA. Are preference and resistance to change convergent expressions of stimulus value? J Exp Anal Behav 2013;100:27-48. [DOI: 10.1002/jeab.33]
12. Cançado CRX, Lattal KA. Resurgence of temporal patterns of responding. J Exp Anal Behav 2011;95:271-287. [PMID: 21547067] [DOI: 10.1901/jeab.2011.95-271]
Abstract
The resurgence of temporal patterns of key pecking by pigeons was investigated in two experiments. In Experiment 1, positively accelerated and linear patterns of responding were established on one key under a discrete-trial multiple fixed-interval variable-interval schedule. Subsequently, only responses on a second key produced reinforcers according to a variable-interval schedule. When reinforcement on the second key was discontinued, positively accelerated and linear response patterns resurged on the first key, in the presence of the stimuli previously correlated with the fixed- and variable-interval schedules, respectively. In Experiment 2, resurgence was assessed after temporal patterns were directly reinforced. Initially, responding was reinforced if it approximated an algorithm-defined temporal pattern during trials. Subsequently, reinforcement depended on pausing during trials and, when it was discontinued, resurgence of previously reinforced patterns occurred for each pigeon and for 2 of 3 pigeons during a replication. The results of both experiments demonstrate the resurgence of temporally organized responding and replicate and extend previous findings on resurgence of discrete responses and spatial response sequences.
Affiliation(s)
- Carlos R X Cançado
- Department of Psychology, West Virginia University, Morgantown, WV 26506-6040, USA.
13.
Abstract
The contribution of past experiences to concurrent resurgence was investigated in three experiments. In Experiment 1, resurgence was related to the length of the reinforcement history as well as to the reinforcement schedule that previously maintained responding. Specifically, more resurgence occurred when key pecks had been reinforced on a variable-interval 1-min schedule than on a variable-interval 6-min schedule, but this effect may have been due either to the differential reinforcement rates or to the differential response rates under the two schedules. When reinforcement rates were similar (Experiment 2), there was more resurgence of high-rate than of low-rate responding. When response rates were similar (Experiment 3), resurgence was not related systematically to prior reinforcement rates. Taken together, these three experimental tests of concurrent resurgence illustrate that prior response rates are better predictors of resurgence than are prior reinforcement rates.
14. Corrado GS, Sugrue LP, Seung HS, Newsome WT. Linear-Nonlinear-Poisson models of primate choice dynamics. J Exp Anal Behav 2006;84:581-617. [PMID: 16596981] [PMCID: PMC1389782] [DOI: 10.1901/jeab.2005.23-05]
Abstract
The equilibrium phenomenon of matching behavior traditionally has been studied in stationary environments. Here we attempt to uncover the local mechanism of choice that gives rise to matching by studying behavior in a highly dynamic foraging environment. In our experiments, 2 rhesus monkeys (Macaca mulatta) foraged for juice rewards by making eye movements to one of two colored icons presented on a computer monitor, each rewarded on dynamic variable-interval schedules. Using a generalization of Wiener kernel analysis, we recover a compact mechanistic description of the impact of past reward on future choice in the form of a Linear-Nonlinear-Poisson model. We validate this model through rigorous predictive and generative testing. Compared to our earlier work with this same data set, this model proves to be a better description of choice behavior and is more tightly correlated with putative neural value signals. Refinements over previous models include hyperbolic (as opposed to exponential) temporal discounting of past rewards, and differential (as opposed to fractional) comparisons of option value. Through numerical simulation we find that within this class of strategies, the model parameters employed by animals are very close to those that maximize reward harvesting efficiency.
Affiliation(s)
- Greg S Corrado
- Howard Hughes Medical Institute, Stanford University School of Medicine, California 94309, USA.
15. Schneider SM, Davison M. Molecular order in concurrent response sequences. Behav Processes 2006;73:187-198. [PMID: 16793219] [DOI: 10.1016/j.beproc.2006.05.008]
Abstract
We studied the order of emission of concurrently reinforced free-operant two-response sequences such as left-left (LL) and left-right (LR). The end of each sequence was demarcated by a stimulus change. The use of demarcated sequences of responses, as opposed to individual responses, provides an expanded set of distinct, temporally ordered behaviour pairings to investigate (e.g., LL followed by LL, LL followed by LR, etc.); it also provides a real-life analogue. A sequential analysis of new and existing rat and pigeon data revealed patterns in both overall and post-reinforcer-only sequence emission order. These patterns were consistent across species and individuals, and they followed higher-order organising principles. We describe sequence non-repetition, last-response repetition, and the proportion and post-reinforcer effects, and relate them to existing molar and molecular behaviour principles. Beyond their immediate implications, our results illustrate the value of sequential analysis as a tool for the investigation of molar-molecular behavioural relations.
Affiliation(s)
- Susan M Schneider
- Department of Psychology, Florida International University, Miami, FL 33199, USA.
16. MacDonall JS. Earning and obtaining reinforcers under concurrent interval scheduling. J Exp Anal Behav 2006;84:167-183. [PMID: 16262185] [PMCID: PMC1243978] [DOI: 10.1901/jeab.2005.76-04]
Abstract
Contingencies of reinforcement specify how reinforcers are earned and how they are obtained. Ratio contingencies specify the number of responses that earn a reinforcer, and the response satisfying the ratio requirement obtains the earned reinforcer. Simple interval schedules specify that a certain time earns a reinforcer, which is obtained by the first response after the interval. The earning of reinforcers has been overlooked, perhaps because simple schedules confound the rates of earning reinforcers with the rates of obtaining reinforcers. In concurrent variable-interval schedules, however, spending time at one alternative earns reinforcers not only at that alternative, but at the other alternative as well. Reinforcers earned for delivery at the other alternative are obtained after changing over. Thus the rates of earning reinforcers are not confounded with the rate of obtaining reinforcers, but the rates of earning reinforcers are the same at both alternatives, which masks their possibly differing effects on preference. Two experiments examined the separate effects of earning reinforcers and of obtaining reinforcers on preference by using concurrent interval schedules composed of two pairs of stay and switch schedules (MacDonall, 2000). In both experiments, the generalized matching law, which is based on rates of obtaining reinforcers, described responding only when rates of earning reinforcers were the same at each alternative. An equation that included both the ratio of the rates of obtaining reinforcers and the ratio of the rates of earning reinforcers described the results from all conditions from each experiment.
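The generalized matching law that this abstract invokes is standardly written log(B1/B2) = a·log(R1/R2) + log b, where B and R are response and obtained-reinforcer rates, a is sensitivity, and b is bias. A small sketch in the equivalent power form (function name and parameter values are illustrative, not taken from the experiments):

```python
def matching_behavior_ratio(r1: float, r2: float, a: float = 1.0, b: float = 1.0) -> float:
    """Generalized matching law in power form: B1/B2 = b * (R1/R2) ** a.
    a = b = 1 reduces to strict matching; a < 1 describes undermatching."""
    return b * (r1 / r2) ** a

# Strict matching: a 3:1 obtained-reinforcer ratio predicts a 3:1 response ratio.
print(matching_behavior_ratio(3.0, 1.0))  # -> 3.0
```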
Affiliation(s)
- James S MacDonall
- Department of Psychology, Fordham University, Bronx, New York 10458, USA.
17. da Silva SP, Lattal KA. Contextual determinants of temporal control: Behavioral contrast in a free-operant psychophysical procedure. Behav Processes 2005;71:157-163. [PMID: 16364564] [DOI: 10.1016/j.beproc.2005.11.005]
Abstract
The question of how temporal control of responding might be influenced by contingency changes in other contexts was investigated. Each of three pigeons first was exposed to a two-component multiple schedule in which a two-key free-operant psychophysical procedure operated in one component and a variable-interval schedule operated in the other component. The variable-interval schedule then was changed to extinction while the free-operant psychophysical procedure remained unchanged. Finally, the variable-interval schedule was reintroduced. Response rates on the left key and the estimated temporal threshold under the free-operant psychophysical procedure increased for each pigeon when the alternate component schedule was changed to extinction and then decreased again when the variable-interval schedule was reintroduced. The results suggest one way that temporal control is affected by its context, and may be interpreted through the direct effects of overall reinforcement rate on temporal control mechanisms or the disruptive effects of alternative sources of reinforcement on temporally controlled behavior.
18.
Abstract
Choice typically is studied by exposing organisms to concurrent variable-interval schedules, in which not only responses controlled by the key stimuli but also switching responses, and likely other operants, are acquired. In the present research, discriminated key-pecking responses in pigeons were first acquired using a multiple schedule that minimized the reinforcement of switching operants. Choice was then assessed during concurrent-probe periods in which pairs of discriminative stimuli were presented concurrently. Upon initial exposure to concurrently presented stimuli, choice approximated exclusive preference for the alternative associated with the higher reinforcement frequency. Concurrent schedules were then implemented that gave increasingly greater opportunities for switching operants to be conditioned. As these operants were acquired, the relation of relative response frequency to relative reinforcement frequency converged toward a matching relation. An account of matching with concurrent schedules is proposed in which exclusive responding to the discriminative stimulus associated with the higher reinforcement frequency declines as the concurrent stimuli become more similar and other operants, notably switching, are acquired and generalize to stimuli from both alternatives. The concerted effect of these processes fosters an approximate matching relation in commonly used concurrent procedures.
Affiliation(s)
- Michael A Crowley
- Department of Psychology, University of Massachusetts at Amherst 01003, USA.
19. Cardinal RN, Cheung THC. Nucleus accumbens core lesions retard instrumental learning and performance with delayed reinforcement in the rat. BMC Neurosci 2005;6:9. [PMID: 15691387] [PMCID: PMC549214] [DOI: 10.1186/1471-2202-6-9]
Abstract
Background: Delays between actions and their outcomes severely hinder reinforcement learning systems, but little is known of the neural mechanism by which animals overcome this problem and bridge such delays. The nucleus accumbens core (AcbC), part of the ventral striatum, is required for normal preference for a large, delayed reward over a small, immediate reward (self-controlled choice) in rats, but the reason for this is unclear. We investigated the role of the AcbC in learning a free-operant instrumental response using delayed reinforcement, performance of a previously learned response for delayed reinforcement, and assessment of the relative magnitudes of two different rewards.
Results: Groups of rats with excitotoxic or sham lesions of the AcbC acquired an instrumental response with different delays (0, 10, or 20 s) between the lever-press response and reinforcer delivery. A second (inactive) lever was also present, but responding on it was never reinforced. As expected, the delays retarded learning in normal rats. AcbC lesions did not hinder learning in the absence of delays, but AcbC-lesioned rats were impaired in learning when there was a delay, relative to sham-operated controls. All groups eventually acquired the response and discriminated the active lever from the inactive lever to some degree. Rats were subsequently trained to discriminate reinforcers of different magnitudes. AcbC-lesioned rats were more sensitive to differences in reinforcer magnitude than sham-operated controls, suggesting that the deficit in self-controlled choice previously observed in such rats was a consequence of reduced preference for delayed rewards relative to immediate rewards, not of reduced preference for large rewards relative to small rewards. AcbC lesions also impaired the performance of a previously learned instrumental response in a delay-dependent fashion.
Conclusions: These results demonstrate that the AcbC contributes to instrumental learning and performance by bridging delays between subjects' actions and the ensuing outcomes that reinforce behaviour.
Affiliation(s)
- Rudolf N Cardinal
- Department of Experimental Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, UK
- Timothy HC Cheung
- Department of Experimental Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, UK
- Psychopharmacology Section, Division of Psychiatry, B Floor, Medical School, Queen's Medical Centre, Nottingham NG7 2UH, UK
20.
Abstract
Performance on concurrent schedules can be decomposed into run lengths (the number of responses emitted before switching alternatives) or visit durations (the time spent at an alternative before switching), both of which are a function of the ratio of the rates of reinforcement for staying and for switching. From this analysis, a local model of concurrent performance was developed and examined in two experiments. The first exposed rats to variable-interval schedules for staying and for switching that included a changeover delay for reinforcers following a switch. With the changeover delay, run lengths and visit durations were functions of the ratios of the rates of reinforcement for staying and for switching, as found by previous research not using a changeover delay. The second experiment directly assessed the effect of a changeover delay on run lengths and visit durations. Each component of a multiple schedule consisted of equivalent stay and switch schedules, but only one component included a changeover delay. Run lengths and visit durations were longer when a changeover delay was used. Because visit duration is the reciprocal of changeover rate, these results are consistent with the established finding that a changeover delay reduces the frequency of switching. Together these results support the local model of concurrent performance as an alternative to the generalized matching law. The local model may be preferred when accounting for more molecular aspects of concurrent performance.
Affiliation(s)
- James S MacDonall
- Department of Psychology, Fordham University, Bronx, New York 10458, USA.
21.
Abstract
In three experiments, pigeons were used to examine the independent effects of two normally confounded delays to reinforcement associated with changing between concurrently available variable-interval schedules of reinforcement. In Experiments 1 and 2, combinations of changeover-delay durations and fixed-interval travel requirements were arranged in a changeover-key procedure. The delay from a changeover-produced stimulus change to a reinforcer was varied while the delay between the last response on one alternative and a reinforcer on the other (the total obtained delay) was held constant. Changeover rates decreased as a negative power function of the total obtained delay. The delay from a changeover-produced stimulus change to a reinforcer had a small and inconsistent effect on changeover rates. In Experiment 3, changeover delays and fixed-interval travel requirements were arranged independently. Changeover rates again decreased as a negative power function of the total obtained delay despite variations in the delay from a change in stimulus conditions to a reinforcer. Rates of responding immediately following a changeover, however, were higher near the end of the delay from a change in stimulus conditions to a reinforcer. The results of these experiments suggest that the effects of changeover delays and travel requirements result primarily from changes in the delay between a response at one alternative and a reinforcer at the other, but that the pattern of responding immediately after a changeover depends on the delay from a changeover-produced change in stimulus conditions to a reinforcer.
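The "negative power function" relation reported here has the general form rate = c·D^(-a). A sketch with made-up constants (the function name and the values of c and a below are illustrative, not the fitted values from these experiments):

```python
def changeover_rate(total_obtained_delay: float, c: float = 10.0, a: float = 1.0) -> float:
    """Changeover rate as a negative power function of the total obtained
    delay D: rate = c * D ** (-a). With a = 1, doubling D halves the rate."""
    return c * total_obtained_delay ** (-a)

# With a = 1: a 2-s total obtained delay -> 5.0 changeovers/min; a 4-s delay -> 2.5.
print(changeover_rate(2.0), changeover_rate(4.0))  # -> 5.0 2.5
```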