Schedules of reinforcement have different effects on children's behavior. In the previous blog, "The 'What?' of Schedules of Reinforcement," we defined the different types of schedules. This blog elaborates on the effects of each schedule, along with some advantages and disadvantages, to help you decide which schedule to use.
As discussed in the previous blog on schedules of reinforcement, every instance of a desired behavior is reinforced when continuous schedules of reinforcement are used. There are two main advantages to using these schedules. First, they help the child associate the desired behavior with receiving reinforcers (Smith, 1995). The child will learn that each time he engages in a specific behavior (e.g., saying "please"), mom will give him a prize (e.g., a high five). Second, continuous schedules increase the rate of desired behaviors quickly: once the child learns that he can receive high fives after saying "please," he will begin to engage in that behavior more frequently. A disadvantage of continuous schedules, however, is that if you abruptly stop providing reinforcement, the desired behavior will also stop quickly; if every instance of the behavior was being reinforced and suddenly none is, the behavior will quickly cease occurring. In addition, continuous schedules of reinforcement can be difficult to implement because they require you to constantly monitor the child and catch every instance of the behavior. Typically, we use continuous schedules of reinforcement for emerging behaviors and fade to intermittent schedules once a behavior is occurring more frequently.
In intermittent schedules of reinforcement, reinforcement is provided after a certain number of responses or after a certain amount of time has passed. There are four types of intermittent schedules, and each has different effects on behavior.
Fixed Ratio Schedules
When you use a fixed ratio schedule, the rate of behavior increases as the child approaches the reinforcer, followed by a short pause after the reinforcer is received; this cycle repeats for as long as the schedule is in place. Fixed ratio schedules produce high rates of steady behavior, making them the best schedule for teaching new behaviors. A common example of a fixed ratio is piecework, where people are paid per number of assembled pieces. As the individual gets closer to the target number of pieces, he will begin to work faster; once he has received the money for the pieces, he will stop or slow down for a brief moment, then speed up again as he approaches the next goal.
Fixed Interval Schedules
Fixed interval schedules also produce cycles of high and low rates of behavior, with a brief pause after the reinforcer is received. The overall rate of behavior, however, is lower than with fixed ratio schedules because reinforcement is delivered for a response only after a set period of time has elapsed, not after a set number of responses. A common example of a fixed interval is a paycheck: people receive their paycheck every week, and if they stop receiving it, it is unlikely that they will continue working. A disadvantage of fixed interval schedules is that the behavior is likely to be extinguished quickly if reinforcement stops.
Variable Ratio Schedules
In a variable ratio schedule, the individual does not know how many responses are needed before reinforcement is delivered, so he will continue to engage in the target behavior. This creates highly stable rates of behavior and makes the behavior highly resistant to extinction, which makes this schedule the best for maintaining newly acquired behaviors. A popular example of a variable ratio schedule is the slot machine: a person may win a prize after the 3rd, 8th, 15th, or 20th button press. He will keep pressing the button because he does not know exactly when the next prize is coming!
Variable Interval Schedules
Just like variable ratio schedules, variable interval schedules produce steady rates of behavior because the individual does not know how much time will pass before reinforcement is delivered, which creates high resistance to extinction. The difference is that variable interval schedules produce lower rates of behavior, because reinforcement is based on the passage of an unpredictable amount of time rather than on the number of responses. An example of a variable interval schedule is a supervisor who checks in on you unannounced. The supervisor might observe after 30 minutes, 45 minutes, or an hour; because the workers do not know when the next check will come, their appropriate working behaviors continue at a steady rate, since they know that at some point the supervisor will drop in to give them praise.
This has been a brief review of the effects of schedules of reinforcement on behavior, along with some of their advantages and disadvantages. In the next blog, we will talk about the steps to follow when preparing to implement schedules of reinforcement.