Reinforcer

The experts below are selected from a list of 1,493,367 experts worldwide, ranked by the ideXlab platform.

Mark E. Bouton - One of the best experts on this subject based on the ideXlab platform.

  • Some factors that restore goal-direction to a habitual behavior
    Neurobiology of learning and memory, 2020
    Co-Authors: Sydney Trask, Megan L. Shipman, John T. Green, Mark E. Bouton
    Abstract:

    Recent findings from our laboratory suggest that an extensively-practiced instrumental behavior can appear to be a goal-directed action (rather than a habit) when a second behavior is added and reinforced during intermixed final sessions (Shipman et al., 2018). The present experiments were designed to explore and understand this finding. All used the taste aversion method of devaluing the Reinforcer to distinguish between goal-directed actions and habits. Experiment 1 confirmed that reinforcing a second response in a separate context (but not mere exposure to that context) can return an extensively-trained habit to the status of goal-directed action. Experiment 2 showed that training of the second response needs to be intermixed with training of the first response to produce this effect; training the second response after the first-response training was complete preserved the first response as a habit. Experiment 3 demonstrated that reinforcing the second response with a different Reinforcer breaks the habit status of the first response. Experiment 4 found that free Reinforcers (that were not response-contingent) were sufficient to restore goal-directed performance. Together, the results suggest that unexpected Reinforcer delivery can render a habitual response goal-directed again.
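    The devaluation test used throughout these experiments maps onto a standard computational contrast between goal-directed control (response value is recomputed from the outcome's current value) and habitual control (response value is a cached quantity laid down during training). The Python sketch below is only a generic illustration of that contrast under assumed numbers; the function names and values are hypothetical and are not the authors' model.

        # Illustrative only: goal-directed value is recomputed from the outcome's
        # current worth, while habitual value is a cache stamped in during training.
        def goal_directed_value(p_outcome_given_response, current_outcome_value):
            return p_outcome_given_response * current_outcome_value

        def habitual_value(cached_value):
            return cached_value

        p_o_given_r, trained_outcome_value = 0.8, 1.0      # assumed training parameters
        cached = p_o_given_r * trained_outcome_value       # habit strength after training

        devalued_outcome_value = 0.0                        # taste-aversion devaluation
        print(goal_directed_value(p_o_given_r, devalued_outcome_value))  # 0.0 -> responding collapses
        print(habitual_value(cached))                                    # 0.8 -> responding persists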

  • Stimulus control of actions and habits: A role for Reinforcer predictability and attention in the development of habitual behavior.
    Journal of experimental psychology. Animal learning and cognition, 2018
    Co-Authors: Eric A. Thrailkill, Sydney Trask, Pedro Vidal, José A. Alcalá, Mark E. Bouton
    Abstract:

    Goal-directed actions are instrumental behaviors whose performance depends on the organism's knowledge of the reinforcing outcome's value. In contrast, habits are instrumental behaviors that are insensitive to the outcome's current value. Although habits in everyday life are typically controlled by stimuli that occasion them, most research has studied habits using free-operant procedures in which no discrete stimuli are present to occasion the response. We therefore studied habit learning when rats were reinforced for lever pressing on a random-interval 30-s schedule in the presence of a discriminative stimulus (S) but not in its absence. In Experiment 1, devaluing the Reinforcer with taste aversion conditioning weakened instrumental responding in a 30-s S after 4, 22, and 66 sessions of instrumental training. Even extensive practice thus produced goal-directed action, not habit. In contrast, Experiments 2 and 3 found habit when the duration of S was increased from 30 s to 8 min. Experiment 4 then found habit with the 30-s S when it always contained a Reinforcer; goal-directed action was maintained when Reinforcers were earned at the same rate but occurred in only 50% of Ss (as in the previous experiments). The results challenge the view that habits are an inevitable consequence of repeated reinforcement (as in the law of effect) and instead suggest that discriminated habits develop when the Reinforcer becomes predictable. Under those conditions, organisms may pay less attention to their behavior, much as they pay less attention to signals associated with predicted Reinforcers in Pavlovian conditioning.
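    For readers unfamiliar with the schedule, a random-interval (RI) 30-s schedule is commonly approximated by letting the Reinforcer "set up" with a small constant probability each second and delivering it for the first response emitted after setup. The Python sketch below illustrates that approximation inside a discriminative-stimulus period; the parameter values are assumptions for illustration, not the paper's procedure.

        import random

        def simulate_ri_during_s(seconds=600, ri_mean=30.0, p_response_per_s=0.5):
            # Each second the Reinforcer becomes available with probability 1/ri_mean;
            # the next response collects it (Bernoulli approximation of an RI schedule).
            available, earned = False, 0
            for _ in range(seconds):
                if not available and random.random() < 1.0 / ri_mean:
                    available = True
                if available and random.random() < p_response_per_s:
                    earned += 1
                    available = False
            return earned

        print(simulate_ri_during_s())  # roughly seconds / ri_mean Reinforcers when responding is frequent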

  • Discriminative properties of the Reinforcer can be used to attenuate the renewal of extinguished operant behavior
    Learning & Behavior, 2016
    Co-Authors: Sydney Trask, Mark E. Bouton
    Abstract:

    Previous research on the resurgence effect has suggested that Reinforcers that are presented during the extinction of an operant behavior can control inhibition of the response. To further test this hypothesis, in three experiments with rat subjects we examined the effectiveness of using Reinforcers that were presented during extinction as a means of attenuating or inhibiting the operant renewal effect. In Experiment 1, lever pressing was reinforced in Context A, extinguished in Context B, and then tested in Context A. Renewal of responding that occurred during the final test was attenuated when a distinct Reinforcer that had been presented independent of responding during extinction was also presented during the renewal test. Experiment 2 established that this effect depended on the Reinforcer being featured as a part of extinction (and thus associated with response inhibition). Experiment 3 then showed that the Reinforcers presented during extinction suppressed performance in both the extinction and renewal contexts; the effects of the physical and Reinforcer contexts were additive. Together, the results further suggest that Reinforcers associated with response inhibition can serve a discriminative role in suppressing behavior and may provide an effective stimulus for attenuating operant relapse.

  • Role of the discriminative properties of the Reinforcer in resurgence.
    Learning & Behavior, 2015
    Co-Authors: Mark E. Bouton, Sydney Trask
    Abstract:

    In three experiments with rat subjects, we examined the discriminative effects of Reinforcers that were presented during or after operant extinction. Experiments 1 and 2 examined resurgence, in which an extinguished operant response (R1) recovers when a second behavior (R2) that has been reinforced to replace it is also placed in extinction. The results of Experiment 1 suggest that the amount of R1’s resurgence is a decreasing linear function of the interreinforcement interval used during the reinforcement of R2. In Experiment 2, R1 was reinforced with one outcome (O1), and R2 was then reinforced with a second outcome (O2) while R1 was extinguished. In resurgence tests, response-independent (noncontingent) presentations of O2 prevented resurgence of R1, which otherwise occurred when testing was conducted with either no Reinforcers or noncontingent presentations of O1. In Experiment 3, we then examined the effects of noncontingent O1 and O2 presentations after simple extinction that had been conducted either with or without noncontingent presentations of O2. Overall, the results are consistent with a role for the discriminative properties of the Reinforcer in controlling operant behavior. In resurgence, the Reinforcer used during response elimination provides a distinct context that controls the inhibition of R1. The results are less consistent with an alternative view emphasizing the disrupting effects of alternative reinforcement.
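    The closing interpretation, that the Reinforcer delivered during response elimination acts as part of the context controlling inhibition of R1, can be captured with a very simple toy calculation: inhibition transfers to the test only to the degree that the test situation resembles the extinction situation, and removing O2 reduces that resemblance. The numbers and the linear form below are assumptions made purely for illustration, not a model proposed in the paper.

        def net_r1_strength(excitation, inhibition, similarity_to_extinction_context):
            # Inhibition is gated by how much the test shares the extinction "context",
            # which here includes the Reinforcer (O2) presented during response elimination.
            return max(0.0, excitation - inhibition * similarity_to_extinction_context)

        excitation = 1.0    # R1 strength from acquisition (assumed)
        inhibition = 1.0    # inhibition learned while R1 was extinguished (assumed)

        with_o2_at_test = net_r1_strength(excitation, inhibition, 1.0)      # 0.0 -> little R1 (no resurgence)
        without_o2_at_test = net_r1_strength(excitation, inhibition, 0.5)   # 0.5 -> R1 recovers (resurgence)
        print(with_o2_at_test, without_o2_at_test)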

  • Separation of time-based and trial-based accounts of the partial reinforcement extinction effect
    Behavioural Processes, 2014
    Co-Authors: Mark E. Bouton, Amanda M Woods, Travis P Todd
    Abstract:

    Two appetitive conditioning experiments with rats examined time-based and trial-based accounts of the partial reinforcement extinction effect (PREE). In the PREE, the loss of responding that occurs in extinction is slower when the conditioned stimulus (CS) has been paired with a Reinforcer on some of its presentations (partially reinforced) instead of every presentation (continuously reinforced). According to a time-based or "time-accumulation" view (e.g., Gallistel and Gibbon, 2000), the PREE occurs because the organism has learned in partial reinforcement to expect the Reinforcer after a larger amount of time has accumulated in the CS over trials. In contrast, according to a trial-based view (e.g., Capaldi, 1967), the PREE occurs because the organism has learned in partial reinforcement to expect the Reinforcer after a larger number of CS presentations. Experiment 1 used a procedure that equated partially and continuously reinforced groups on their expected times to reinforcement during conditioning. A PREE was still observed. Experiment 2 then used an extinction procedure that allowed time in the CS and the number of trials to accumulate differentially through extinction. The PREE was still evident when responding was examined as a function of expected time units to the Reinforcer, but was eliminated when responding was examined as a function of expected trial units to the Reinforcer. There was no evidence that the animal responded according to the ratio of time accumulated during the CS in extinction over the time in the CS expected before the Reinforcer. The results thus favor a trial-based account over a time-based account of extinction and the PREE. This article is part of a Special Issue entitled: Associative and Temporal Learning.
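    The contrast between the two accounts is easiest to see with a small worked example. Suppose a hypothetical 10-s CS is reinforced on 50% of trials (partial) or on every trial (continuous); the numbers below are illustrative assumptions, not the parameters of these experiments.

        cs_duration = 10.0                       # seconds per CS presentation (assumed)
        p_partial, p_continuous = 0.5, 1.0       # probability of reinforcement per trial

        # Time-accumulation account: expected CS time elapsed per Reinforcer.
        time_per_reinforcer_partial = cs_duration / p_partial         # 20 s
        time_per_reinforcer_continuous = cs_duration / p_continuous   # 10 s

        # Trial-based account: expected CS presentations per Reinforcer.
        trials_per_reinforcer_partial = 1.0 / p_partial               # 2 trials
        trials_per_reinforcer_continuous = 1.0 / p_continuous         # 1 trial

        # Both accounts predict a PREE when CS duration is fixed, but they scale
        # persistence in different units (accumulated CS time vs. number of trials),
        # which is the difference the two experiments were designed to separate.
        print(time_per_reinforcer_partial, trials_per_reinforcer_partial)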

Sydney Trask - One of the best experts on this subject based on the ideXlab platform.

  • Some factors that restore goal-direction to a habitual behavior
    Neurobiology of learning and memory, 2020
    Co-Authors: Sydney Trask, Megan L. Shipman, John T. Green, Mark E. Bouton
    Abstract: See the entry of the same title under Mark E. Bouton above.

  • Stimulus control of actions and habits: A role for Reinforcer predictability and attention in the development of habitual behavior.
    Journal of experimental psychology. Animal learning and cognition, 2018
    Co-Authors: Eric A. Thrailkill, Sydney Trask, Pedro Vidal, José A. Alcalá, Mark E. Bouton
    Abstract: See the entry of the same title under Mark E. Bouton above.

  • Discriminative properties of the Reinforcer can be used to attenuate the renewal of extinguished operant behavior
    Learning & Behavior, 2016
    Co-Authors: Sydney Trask, Mark E. Bouton
    Abstract: See the entry of the same title under Mark E. Bouton above.

  • Role of the discriminative properties of the Reinforcer in resurgence.
    Learning & Behavior, 2015
    Co-Authors: Mark E. Bouton, Sydney Trask
    Abstract: See the entry of the same title under Mark E. Bouton above.

Timothy A. Shahan - One of the best experts on this subject based on the ideXlab platform.

  • Resurgence and alternative-Reinforcer magnitude.
    Journal of the Experimental Analysis of Behavior, 2017
    Co-Authors: Andrew R. Craig, Kaitlyn O. Browning, Rusty W. Nall, Ciara M. Marshall, Timothy A. Shahan
    Abstract:

    Resurgence is defined as an increase in the frequency of a previously reinforced target response when an alternative source of reinforcement is suspended. Despite an extensive body of research examining factors that affect resurgence, the effects of alternative-Reinforcer magnitude have not been examined. Thus, the present experiments aimed to fill this gap in the literature. In Experiment 1, rats pressed levers for single-pellet Reinforcers during Phase 1. In Phase 2, target-lever pressing was extinguished, and alternative-lever pressing produced either five-pellet, one-pellet, or no alternative reinforcement. In Phase 3, alternative reinforcement was suspended to test for resurgence. Five-pellet alternative reinforcement produced faster elimination and greater resurgence of target-lever pressing than one-pellet alternative reinforcement. In Experiment 2, effects of decreasing alternative-Reinforcer magnitude on resurgence were examined. Rats pressed levers and pulled chains for six-pellet Reinforcers during Phases 1 and 2, respectively. In Phase 3, alternative reinforcement was decreased to three pellets for one group, one pellet for a second group, and suspended altogether for a third group. Shifting from six-pellet to one-pellet alternative reinforcement produced as much resurgence as suspending alternative reinforcement altogether, while shifting from six pellets to three pellets did not produce resurgence. These results suggest that alternative-Reinforcer magnitude has effects on elimination and resurgence of target behavior that are similar to those of alternative-Reinforcer rate. Thus, both suppression of target behavior during alternative reinforcement and resurgence when conditions of alternative reinforcement are altered may be related to variables that affect the value of the alternative-reinforcement source.

  • Reinforcer satiation and resistance to change of responding maintained by qualitatively different Reinforcers.
    Behavioural Processes, 2008
    Co-Authors: Christopher A. Podlesnik, Timothy A. Shahan
    Abstract:

    In previous research on resistance to change, differential disruption of operant behavior by satiation has been used to assess the relative strength of responding maintained by different rates or magnitudes of the same Reinforcer in different stimulus contexts. The present experiment examined resistance to disruption by satiation of one Reinforcer type when qualitatively different Reinforcers were arranged in different contexts. Rats earned either food pellets or a 15% sucrose solution on variable-interval 60-s schedules of reinforcement in the two components of a multiple schedule. Resistance to satiation was assessed by providing free access either to food pellets or the sucrose solution prior to or during sessions. Responding systematically decreased more relative to baseline in the component associated with the satiated Reinforcer. These findings suggest that when qualitatively different Reinforcers maintain responding, relative resistance to change depends upon the relations between Reinforcers and disrupter types.

  • Response-Reinforcer Relations and Resistance to Change
    Behavioural Processes, 2007
    Co-Authors: Christopher A. Podlesnik, Timothy A. Shahan
    Abstract:

    Behavioral momentum theory suggests that the relation between a response and a Reinforcer (i.e., the response-Reinforcer relation) governs response rates and the relation between a stimulus and a Reinforcer (i.e., the stimulus-Reinforcer relation) governs resistance to change. The present experiments compared the effects of degrading response-Reinforcer relations with response-independent or delayed Reinforcers on resistance to change in conditions with equal stimulus-Reinforcer relations. In Experiment 1, pigeons responded on equal variable-interval schedules of immediate reinforcement in three components of a multiple schedule. Additional response-independent Reinforcers were available in one component and additional delayed Reinforcers were available in another component. The results showed that resistance to disruption was greater in the components with added Reinforcers than without them (i.e., better stimulus-Reinforcer relations), but did not differ between the components with added response-independent and delayed reinforcement. In Experiment 2, a component presenting immediate reinforcement alternated with either a component that arranged equal rates of reinforcement with a proportion of those Reinforcers being response-independent or a component with a proportion of the Reinforcers being delayed. Results showed that resistance to disruption tended to be either similar across components or slightly lower when response-Reinforcer relations were degraded with either response-independent or delayed Reinforcers. These findings suggest that degrading response-Reinforcer relations can impact resistance to change, but that this impact does not depend on the specific method and is small relative to the effects of the stimulus-Reinforcer relation.
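    Behavioral momentum theory is often summarized with a simple quantitative form in which the log proportion of baseline responding that survives disruption scales inversely with the rate of reinforcement obtained in the stimulus context, roughly log10(Bx/B0) = -x / r^b. The snippet below is only a generic illustration of that relation with assumed values for the disruptor term x and the sensitivity exponent b; it is not the analysis reported in the paper.

        def proportion_of_baseline(disruptor_x, context_reinforcer_rate, b=0.5):
            # log10(Bx/B0) = -x / r**b  (common behavioral-momentum form; values assumed)
            return 10 ** (-disruptor_x / context_reinforcer_rate ** b)

        # A component with added (response-independent or delayed) Reinforcers has a
        # richer stimulus-Reinforcer relation, so the same disruptor produces a smaller
        # relative decline in responding.
        leaner_component = proportion_of_baseline(disruptor_x=1.0, context_reinforcer_rate=60)    # ~0.74
        richer_component = proportion_of_baseline(disruptor_x=1.0, context_reinforcer_rate=120)   # ~0.81
        print(round(leaner_component, 2), round(richer_component, 2))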

  • The observing-response procedure: A novel method to study drug-associated conditioned reinforcement
    Experimental and Clinical Psychopharmacology, 2002
    Co-Authors: Timothy A. Shahan
    Abstract:

    In this experiment, the observing-response procedure was adapted for use with drug self-administration. Rats' responding for oral ethanol was sometimes reinforced on a random-ratio schedule, whereas at other times it had no effect (i.e., extinction). Behavior producing stimuli associated with the otherwise unsignaled random-ratio and extinction periods (i.e., observing behavior) was acquired and maintained. In a vehicle control condition, both self-administration and observing behavior decreased, but observing decreased less rapidly, relative to baseline, than did vehicle consumption. Thus, conditioned Reinforcers may have persistent effects that are relatively independent of the current status of the primary Reinforcer. The procedure allows long-term study of drug-associated conditioned reinforcement and provides independent indexes of the conditioned reinforcing and discriminative stimulus effects of drug stimuli.

Kenneth A Perkins - One of the best experts on this subject based on the ideXlab platform.

  • Tobacco smoking may delay habituation of Reinforcer effectiveness in humans
    Psychopharmacology, 2018
    Co-Authors: Joshua L Karelitz, Kenneth A Perkins
    Abstract:

    The effectiveness of nonconsummatory Reinforcers habituates: their ability to maintain reinforced responding declines over repeated presentations. Preclinical research has shown that nicotine can delay habituation of Reinforcer effectiveness, but this effect has not been directly demonstrated in humans. In preliminary translational research, we assessed effects of nicotine from tobacco smoking (vs. a no-smoking control) on within-session patterns of responding for a brief visual Reinforcer. Using a within-subjects design, 32 adult dependent smokers participated in two experimental sessions, varying by smoking condition: no smoking following overnight abstinence (verified by CO ≤ 10 ppm), or smoking of the participant's own cigarette without overnight abstinence. Adapted from preclinical studies, habituation of Reinforcer effectiveness was assessed by determining the rate of decline in responding on a simple operant computer task for a visual Reinforcer, available on a fixed-ratio schedule. Reinforced responding and duration of responding were each significantly higher in the smoking vs. no-smoking condition. The within-session rate of responding declined significantly more slowly in the smoking vs. no-smoking condition, consistent with delayed habituation of Reinforcer effectiveness. Follow-up analyses indicated that withdrawal relief did not influence the difference in responding between conditions, suggesting that the patterns of responding reflected positive, but not negative, reinforcement. These results are a preliminary demonstration in humans that smoked nicotine may attenuate habituation, thereby maintaining the effectiveness of a Reinforcer over a longer period of access. Further research is needed to confirm habituation and rule out alternative causes of declines in within-session responding.
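    The within-session decline that indexes habituation here is typically summarized by fitting a simple decay function to responding across successive blocks and comparing the fitted rates between conditions. The sketch below shows one generic way to compute such a decay rate from hypothetical block counts; the data and the log-linear summary are assumptions for illustration, not the authors' analysis pipeline.

        import math

        def log_linear_decay_rate(block_counts):
            # Least-squares slope of log response counts across blocks;
            # a more negative slope means a faster within-session decline.
            n = len(block_counts)
            xs = list(range(n))
            ys = [math.log(max(count, 1)) for count in block_counts]
            mean_x, mean_y = sum(xs) / n, sum(ys) / n
            num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
            den = sum((x - mean_x) ** 2 for x in xs)
            return num / den

        no_smoking = [120, 90, 60, 40, 25, 15]    # hypothetical responses per block
        smoking = [120, 110, 95, 85, 75, 70]      # hypothetical: shallower decline

        print(log_linear_decay_rate(no_smoking), log_linear_decay_rate(smoking))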

Elizabeth A. Phelps - One of the best experts on this subject based on the ideXlab platform.

  • Neural systems underlying aversive conditioning in humans with primary and secondary Reinforcers
    Frontiers in Neuroscience, 2011
    Co-Authors: Mauricio R Delgado, Elizabeth A. Phelps
    Abstract:

    Money is a secondary Reinforcer commonly used across a range of disciplines in experimental paradigms investigating reward learning and decision-making. However, the effectiveness of monetary Reinforcers during aversive learning, and its neural basis, remains a topic of debate. Specifically, it is unclear if the initial acquisition of aversive representations of monetary losses depends on similar neural systems as more traditional aversive conditioning that involves primary Reinforcers. This study contrasts the efficacy of a biologically defined primary Reinforcer (shock) and a socially defined secondary Reinforcer (money) during aversive learning, along with the associated neural circuitry. During a two-part experiment, participants first played a gambling game in which wins and losses were based on performance, to build an experimental bank. Participants were then exposed to two separate aversive conditioning sessions. In one session, a primary Reinforcer (mild shock) served as the unconditioned stimulus (US) and was paired with one of two colored squares, the conditioned stimuli (CS+ and CS−, respectively). In another session, a secondary Reinforcer (loss of money) served as the US and was paired with one of two different CSs. Skin conductance responses were greater for CS+ than for CS− trials irrespective of the type of Reinforcer. Neuroimaging results revealed that the striatum, a region typically linked with reward-related processing, was involved in the acquisition of the aversive conditioned response irrespective of Reinforcer type. In contrast, the amygdala was involved during aversive conditioning with primary Reinforcers, as suggested by both an exploratory fMRI analysis and a follow-up case study of a patient with bilateral amygdala damage. Taken together, these results suggest that learning about potential monetary losses may depend on reinforcement-learning-related systems rather than on the structures typically involved in more biologically based fears.
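    The differential design (CS+ always paired with the US, CS− never) is the textbook setting for a simple associative update such as the Rescorla-Wagner rule, which predicts larger conditioned responses to CS+ than to CS− regardless of whether the US is a shock or a monetary loss. The sketch below is that textbook illustration with assumed learning-rate and US-magnitude values; it is not a model fit from the study.

        def rescorla_wagner(trials, alpha=0.3, us_magnitude=1.0):
            # Associative strengths of CS+ and CS- over trials (assumed parameter values).
            v_plus, v_minus = 0.0, 0.0
            for _ in range(trials):
                v_plus += alpha * (us_magnitude - v_plus)   # CS+ is followed by the US
                v_minus += alpha * (0.0 - v_minus)          # CS- is never followed by the US
            return v_plus, v_minus

        print(rescorla_wagner(10))  # CS+ strength approaches the US magnitude; CS- stays at zero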