Learning- Part 2


OPERANT CONDITIONING

History of Operant Conditioning Experiment

Founding by B.F. Skinner: B.F. Skinner, a prominent American psychologist, developed the concept of operant conditioning in the 1930s. Building on Edward Thorndike's Law of Effect (which states that behaviors followed by favorable outcomes are more likely to recur), Skinner focused on understanding how consequences shape behavior.

Skinner Box Experiment: Skinner created an experimental apparatus called the Skinner
Box (or operant conditioning chamber) to study animal behavior, especially in rats and
pigeons. In the box, animals could press a lever (for rats) or peck a disk (for pigeons) to
receive a reward (typically food). Skinner found that animals would repeat behaviors
that produced favorable outcomes, demonstrating the role of reinforcement in
learning.
Process of Operant Conditioning

Operant conditioning relies on the association between a behavior and its consequence. The
process includes:

Antecedent: The environment or condition that triggers the behavior.
Behavior: The response or action taken by the subject.
Consequence: The result of the behavior, which can either reinforce (encourage) or punish (discourage) the behavior.

Through repeated interactions, the subject learns to associate specific behaviors with outcomes,
leading to an increase or decrease in the frequency of the behavior.

Extinction: The gradual weakening of a behavior when it is no longer reinforced. For example, a
rat may stop pressing a lever if pressing no longer produces food.
Shaping: Gradually guiding behavior toward a desired goal by reinforcing successive
approximations (small steps) of the behavior.
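
As a loose computational analogy, shaping can be sketched as reinforcing only those responses that land closer to the target than the current baseline, so each reinforced "successive approximation" becomes the new starting point. The target value, step size, and trial count below are illustrative assumptions, not part of the original material:

```python
import random

# Hypothetical sketch of shaping: reinforce only responses that come
# closer to the target than the current baseline ("successive
# approximations"). Target, step size, and trial count are illustrative.

def shape(target=10.0, trials=200, seed=1):
    rng = random.Random(seed)
    behavior = 0.0  # the response the subject emits at first
    for _ in range(trials):
        response = behavior + rng.uniform(-1.0, 1.0)  # natural variability
        if abs(response - target) < abs(behavior - target):
            behavior = response  # reinforced step becomes the new baseline
        # responses that are no closer go unreinforced and are dropped
    return behavior

print(shape())  # ends near the target of 10.0
```

Because only closer responses are ever reinforced, the simulated behavior drifts steadily toward the goal, mirroring how a trainer rewards small steps rather than waiting for the full behavior to appear spontaneously.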
REINFORCEMENT VS PUNISHMENT

Reinforcement: Any consequence that strengthens or increases the likelihood of a behavior.

Positive Reinforcement: Adding a pleasant stimulus (e.g., giving a reward for good
behavior).
Negative Reinforcement: Removing an aversive stimulus to increase behavior (e.g.,
turning off a loud alarm when the desired action is completed).

Punishment: Any consequence that weakens or decreases the likelihood of a behavior.

Positive Punishment: Adding an unpleasant stimulus to decrease behavior (e.g., scolding).
Negative Punishment: Removing a pleasant stimulus to decrease behavior (e.g., taking
away privileges).
Continuous Reinforcement: Reinforcing the desired behavior every time it occurs.
Effect: Rapid learning but also rapid extinction once reinforcement stops.

Partial (Intermittent) Reinforcement: Reinforcing behavior only some of the time.
Effect: Slower acquisition of behavior, but greater resistance to extinction.
SCHEDULES OF REINFORCEMENT

Fixed-Ratio Schedule: Reinforcement is provided after a specific number of responses (e.g., every 5th
response).
Effect: High rate of response with a short pause after each reinforcement.

Variable-Ratio Schedule: Reinforcement is given after an unpredictable number of responses (e.g., slot machines).
Effect: High, steady rate of response and strong resistance to extinction.

Fixed-Interval Schedule: Reinforcement is given for the first response after a fixed period (e.g., every
5 minutes).
Effect: Response rate increases as the reinforcement time approaches, then drops after
reinforcement.

Variable-Interval Schedule: Reinforcement is provided for the first response after varying time
intervals (e.g., checking emails randomly).
Effect: Steady and moderate rate of response with strong resistance to extinction.
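
The four schedules above can be sketched as simple rules that decide when a response earns a reinforcer. This is a minimal sketch, assuming one `respond()` call per response and an abstract `tick()` clock; the class names and numbers are illustrative, not from the original material:

```python
import random

class FixedRatio:
    """Reinforce every n-th response (e.g., FR-5: every 5th lever press)."""
    def __init__(self, n=5):
        self.n, self.count = n, 0
    def respond(self):
        self.count += 1
        if self.count == self.n:
            self.count = 0
            return True  # deliver reinforcer
        return False

class VariableRatio:
    """Reinforce after an unpredictable number of responses averaging n.
    Approximated here as each response paying off with probability 1/n,
    like a slot machine."""
    def __init__(self, n=5, seed=42):
        self.rng, self.p = random.Random(seed), 1.0 / n
    def respond(self):
        return self.rng.random() < self.p

class FixedInterval:
    """Reinforce the first response after a fixed number of clock ticks."""
    def __init__(self, interval=5):
        self.interval, self.elapsed = interval, 0
    def tick(self):
        self.elapsed += 1
    def respond(self):
        if self.elapsed >= self.interval:
            self.elapsed = 0  # interval restarts after reinforcement
            return True
        return False

class VariableInterval:
    """Reinforce the first response after a randomly varying delay."""
    def __init__(self, mean=5, seed=42):
        self.rng, self.mean, self.elapsed = random.Random(seed), mean, 0
        self.wait = self.rng.randint(1, 2 * mean - 1)
    def tick(self):
        self.elapsed += 1
    def respond(self):
        if self.elapsed >= self.wait:
            self.elapsed = 0
            self.wait = self.rng.randint(1, 2 * self.mean - 1)
            return True
        return False

# FR-3 example: only every third response is reinforced.
fr = FixedRatio(n=3)
print([fr.respond() for _ in range(6)])  # [False, False, True, False, False, True]
```

Note how the behavioral effects follow from the rules: under a ratio schedule, responding faster earns reinforcers sooner, which encourages high response rates; under an interval schedule, extra responses before the interval elapses earn nothing, so responding tends to cluster near the expected reinforcement time.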
